title (string, 1–827 characters, nullable) | uuid (string, 36 characters) | pmc_id (string, 5–8 characters) | search_term (44 classes) | text (string, 0–2.08M characters)
---|---|---|---|---|
Does the duration of ambulatory consultations affect the quality of healthcare? A systematic review | 5efad12f-4264-4a44-9417-0b4b497865a7 | 10603464 | Family Medicine[mh] | The body of experimental evidence about the effect of the duration of ambulatory consultations on care quality warrants very low confidence because the few extant studies were relatively unprotected against bias, and their results are inconsistent, small and imprecise. Also, this body of evidence—mostly studies conducted in the UK over 30 years ago—indirectly applies to today’s ambulatory care.
Policymakers seeking to optimise efficiency and value by investing clinical encounter time wisely cannot base decisions about encounter duration on trustworthy evidence. Research is needed that uses both experimental designs and in-depth observational methods, takes advantage of routinely collected data, spans the range of consultation modalities (e.g., face to face, remote) and contexts (e.g., new diagnosis or complication, problem solving in multimorbidity), and measures the intended and unintended consequences of visit duration on quality of care.
In ambulatory clinical encounters, clinicians and patients consider the patient’s problematic situation and develop a plan of care. Time is often noted as a barrier to improving the quality of care. And yet, how long ambulatory clinical encounters should be to optimise access and quality of care remains unclear. Almost 30 years ago, Wilson et al. published the first systematic review about the effect of ambulatory encounter duration on quality of care. They found evidence, mostly developed in the late 1970s in the UK, that longer visits were associated with better care experience and outcomes. That evidence compared visit durations, among which even long visits lasted much less than 15 min, in an era when patients were most likely to seek care for one acute concern, could access very limited medical information on their own, electronic health records were not part of the consultation, clinicians had a limited range of tests and treatments to order and prescribe, and documentation played minimal or no role in quality assurance or billing. Secular trends show that consultations are becoming longer in the developed world. Estimates from the USA, for example, suggest that the average consultation length increases by 12 s every year, with average consultations lasting between 15 and 25 min. This makes the sparse evidence available, confirmed in a Cochrane review published in 2016, hardly applicable to the ambulatory care of adult patients today, many of whom present to clinical encounters with chronic multimorbidity and psychosocial complexity. Causally linking encounter duration to quality of care is not straightforward. Characteristics of patients, clinicians and healthcare systems associated with longer encounters may also be associated with quality of care, confounding the observational evidence of their association. Also complicating the observational analysis is the simultaneous expansion in the number and complexity of clinical and administrative tasks expected to be completed during consultations. Their completion leaves less time to listen to and appreciate the patient’s situation and to co-create plans of care that make intellectual, emotional and practical sense to the patient. Without time, hurried and harried consultations may be more likely to produce generic, burdensome, ineffective, unsafe and unaffordable treatments that may contribute to overwhelmed patients, burned out clinicians and low-quality care. Thus, to reliably estimate the association between the duration of ambulatory visits and quality of care, we must rely on controlled experimental evidence. Hence, this review aims to help address the question: how much consultation time should be allotted to enable the care of adult patients in the ambulatory setting? In particular, this review sought to examine and synthesise the best available experimental evidence about the effect of ambulatory consultation duration on quality of care.
Methods are reported under the following subsections: study design; search strategy and data sources; study selection; data extraction and study quality assessment; and data synthesis. Data synthesis: the paucity and heterogeneity of included studies precluded the planned quantitative synthesis. Instead, we summarised this evidence narratively.
We designed this systematic review based on Cochrane guidelines for the development of systematic reviews and registered the review protocol (OSF Registration DOI: 10.17605/OSF.IO/EUDK8). This report is in adherence to the Preferred Reporting Items for Systematic reviews and Meta-Analyses 2020 statement.
An experienced librarian (LJP) designed and executed a search strategy with input from the study’s principal investigator (VMM). This strategy involved searching multiple databases from their inception until 15 May 2023. The databases searched were Ovid MEDLINE(R) and Epub Ahead of Print; In-Process & Other Non-Indexed Citations, and Daily; Ovid EMBASE; Ovid Cochrane Central Register of Controlled Trials; Ovid Cochrane Database of Systematic Reviews; and Scopus. The strategy used controlled vocabulary, supplemented with keywords (see supplementary data, doi: 10.1136/bmjoq-2023-002311.supp1).
Eligible studies experimentally manipulated the length of ambulatory clinical encounters between adult patients and clinicians (ie, therapists, pharmacists, nurses, physicians) to determine its effect on measures of quality of care, regardless of language or date of publication. We excluded studies of simulated encounters, and studies in which most of the encounters considered were with paediatric patients, or in emergency departments or urgent care centres. Although considered in our protocol, for reasons discussed above, we ultimately excluded observational studies in which investigators estimated the correlation between encounter duration and measures of quality of care whether in usual care or within evaluations of an unrelated practice change or experimental intervention (e.g., implementation of a shared decision-making tool). We also excluded studies that manipulated other ‘times’ such as the time needed to get an appointment scheduled, travel time to an appointment or time spent waiting for the encounter to start. Pairs of reviewers worked independently and in duplicate, and after calibration, to assess study eligibility in two phases (title and abstracts followed by full-text assessments). All citations retrieved were imported into bibliographic references software Rayyan.
Details of the study design, population, length of consultation and quality-of-care outcomes were extracted independently by two reviewers using a standardised form. Quality-of-care outcomes included effectiveness (including measures of health outcomes), safety, equity, patient-centredness, efficiency (avoiding unnecessary tests or referrals, patient returns with the same problem) and timeliness. Two reviewers independently assessed the validity of the included studies using the Risk of Bias 2 Cochrane instrument.
The accompanying flow diagram describes the flow of studies through our systematic search and selection process. This process identified 10 eligible studies reported in 11 publications comprising 9879 participants. The accompanying table describes the study characteristics. Only three studies were conducted outside the UK and only four were published in the last decade. Aside from age and sex, other sociodemographic participant characteristics were not reported. Except for the study by Sohn et al, conducted in internal medicine (we did not consider the data they reported from paediatrics or emergency care), all other studies were conducted in general practice. The four studies conducted in the last decade evaluated complex interventions (e.g., economic incentives or arrangements to ensure continuity of care and prompt review after a hospitalisation), with longer visits as one component. Encounters were allocated to different consultation lengths ranging from 3 to 45 min. Ages are reported using means and SD, median and range, proportion in an age category or eligible age range. The information about visit time targets for Reed et al was kindly provided by the authors. The main threats to the validity (risk of bias) of the included studies arose from the allocation of encounters to different encounter lengths (three studies reported using random allocation, and all had no or unclear methods to conceal the allocation sequence) and from the lack of blinding of participants and outcome assessors. Encounter duration and quality of care: the accompanying table describes the effect of encounter duration on measures of quality of care. No study reported on the effect of encounter duration on safety, equity or timeliness. Longer consultations led in some studies to better patient and clinician satisfaction with the encounters and to improvements in some but not all measured outcomes related to efficiency, effectiveness and patient-centredness, with assessments of other outcomes finding either no significant benefits or inconsistent effects across time-defined groups within the same study.
This body of evidence, mostly from studies conducted in the UK 20 or more years ago, indirectly applies to today’s ambulatory care. Thus, at a time in which healthcare systems are focused on efficiency and value and are interested in optimally investing time, the research evidence about this most important resource for care cannot be trusted to offer reliable estimates of the effect of lengthening or abbreviating ambulatory encounters on the quality of care. (Limitations of this review are discussed below.) Implications for practice and research: a study of the average length of primary care consultations across 67 countries found a range from 48 s in Bangladesh to 22.5 min in Sweden. In the absence of reliable evidence linking encounter duration to markers of care quality, this large variation in visit duration across countries can only reflect each system’s choice to allocate time to optimise patient access to care or to improve revenue related to the number of visits per unit of time. At the health system level, this range raises issues of equity but also of value in healthcare. Are encounters that are too long wasteful of time, the most precious resource for care? Are encounters that are too brief to meaningfully notice and respond to a patient’s problematic situation cruelly wasteful in that they offer access and apparent efficiency without the possibility of effectiveness? At the very brief end of encounter duration, clinicians who experience moral injury from being unable to care for the people before them may demand more time, as would patients who find their agenda-setting overtures truncated within 11 s. Unfortunately, care advocates will find limited evidence to support either minimum or optimum visit durations, except for their own observation that caring well takes time. Rigorous experimental studies are needed to assess the extent to which manipulating consultation time can feasibly improve the quality of care, including patient and clinician satisfaction, timely access, safety and outcomes. Clustered randomised trials and interrupted time-series designs may be appropriate methods to reliably estimate the effect of different encounter durations on measures of quality and to partially blind encounter participants and outcome assessors to trial hypotheses. These designs can be large enough to enable the efficient study of interactions between visit factors and the duration-quality link. Routinely collected measures of patient experience and outcomes, supplemented by clinician-reported measures and healthcare record review, could enable large-scale studies at low cost. In-depth analyses of a random sample of clinical encounters, including audiovisual recording of the encounters with or without video-reflexive ethnography, could further enrich the body of evidence with qualitative insights. These trials should evaluate minimal and optimal durations given the extent of continuity of care, asynchronous communication (e.g., messaging via text or through medical record portals), remote consultations (especially relevant during the COVID-19 pandemic) and visit intervals. The finding that about 40% of the consultation is spent working with the electronic health record and that, increasingly, the encounter is about meeting guideline-directed care rather than responding to the patient’s agenda may require that research focus not only on the scheduled or the actual encounter durations, but also on the time spent caring for each patient. This is particularly important in the care of patients with multimorbidity, particularly mental health comorbidity, trauma and contextual complexity.
Visits of appropriate length may reduce disparities of care across gender, race and ethnicity, language and other causes of discrimination and bias. Studies should explore interactions between visit factors and the association between encounter duration and quality of care. Some examples of these factors are patient characteristics (sex, race, ethnicity and migratory status, frailty, multimorbidity, polypharmacy), clinician characteristics (sex, race, ethnicity, type of clinician (e.g., physician, advanced practice nurse, nurse, therapist, pharmacist), country of training, specialty, years of experience, training in patient–clinician communication) and encounter characteristics (planned or unplanned; diagnostic, therapeutic or prognostic; primary care or specialty care; consultation vs ongoing care, with or without continuity of care; electronic health record use; participation of interpreters or learners; face-to-face vs remote; with or without asynchronous communication). These factors interact in complicated ways with encounter duration and measures of care quality. For example, some authors reported that, compared with encounters with male physicians, encounters with female physicians tend to be longer as patients bring up more psychosocial concerns, which likely enables more responsive visits. Visits with black patients with psychiatric concerns tend to be 4.4 min shorter than similar visits with white patients, likely introducing disparities in health outcomes. Financial, equity, safety and access trade-offs need to be estimated if longer visits prove necessary to improve healthcare quality. This line of research may find that accelerating the practice (rather than, for example, increasing the number of available clinicians and supporting continuity of care) may give people more access to care, but in a form that fails to notice and respond well to their problematic situation. Simply lengthening clinical encounters may reduce access, be wasteful and, when implemented reactively, may translate into longer work days, extended patient wait times and staff dissatisfaction. On the other hand, improvements in the quality of care brought about within unhurried (not longer) consultations may reduce subsequent healthcare demand and, therefore, improve access to care.
Although we included four additional studies, our results are concordant with the Cochrane review published in 2016. Because this Cochrane review already offered quantitative summaries that represent 60% of the studies and 80% of the encounters summarised here and because, in our judgement, the updated results are insufficiently reliable or applicable, we decided not to present the data quantitatively or to conduct meta-analyses. Of note, our work, in line with the summary of findings of the Cochrane review, rates the trustworthiness of this evidence as very low in all cases.
In conclusion, the evidence for the minimal or optimal duration of an ambulatory consultation is sparse, at risk of bias and, at best, of indirect applicability. Without research into the relationship between duration of consultations and measures of quality of care, further erosion in the time available to care will remain motivated by resource allocation formulae that cannot fully account for how accelerating care may affect its quality.
|
A Rare Cause of Empyema and Bacteremia Due to | bc56b5e6-e3d9-4924-8960-bfd1232dc9a6 | 11022664 | Microbiology[mh] | Shewanella spp. are glucose-nonfermenting, gram-negative, motile bacilli whose most important phenotypic characteristic is the production of hydrogen sulfide. The first description of this species was provided in 1931 by Derby and Hammer. Shewanella spp. have been reported to cause otitis media, ocular infections, skin and soft tissue infections, and bacteremia. Patients with such infections are considered to have a good prognosis. Cases of death due to empyema caused by Shewanella spp. are rare. In addition, there are only a few reports on Shewanella infection’s risk factors, severity, susceptibility to antibiotics, and prognosis. The present report is of a 78-year-old man with alcoholic cirrhosis presenting with bacteremia and empyema due to infection with Shewanella spp.
In this study, we report a case of empyema and bacteremia due to Shewanella spp. and present a literature review. A 78-year-old man with post-treatment alcoholic cirrhosis (Child-Pugh B) and methicillin-resistant Staphylococcus aureus bacteremia 3 weeks prior to admission presented to our emergency room with high fever and dyspnea. He had eaten raw fish (sashimi) 1 week prior to admission. On admission, vital signs were as follows: clear consciousness; temperature, 37.0°C; blood pressure, 98/68 mmHg; pulse rate, 100/min; respiratory rate, 22/min; and oxygen saturation, 96% on room air. On physical examination, lung sounds were attenuated in the right lower lung field. A systolic murmur was auscultated at the left sternal border of the second intercostal space. The abdomen was slightly distended, with percussion tenderness of the liver and mildly distended veins in the inferior abdominal wall. Edema, sclerosis, erythema, and tenderness bilaterally from the thighs to the dorsum of the feet were noted. Laboratory data were as follows: white blood cell count, 7000/μL (neutrophils, 88.0%; lymphocytes, 6.5%; and monocytes, 5.0%); hemoglobin, 11.9 g/dL; total protein, 7.0 g/dL; albumin, 2.9 g/dL; blood urea nitrogen, 41.0 mg/dL; creatinine, 1.04 mg/dL; total bilirubin, 4.9 mg/dL; alkaline phosphatase, 587 U/L; lactate dehydrogenase, 1041 U/L; aspartate aminotransferase, 29 U/L; alanine aminotransferase, 23 U/L; γ-glutamyl transpeptidase, 129 mg/dL; sodium, 132 mEq/L; potassium, 3.8 mEq/L; chloride, 96 mEq/L; ammonia, <15 μg/dL; activated partial thromboplastin time, 38.2 s; and C-reactive protein, 25.2 mg/dL. Chest and abdominal computed tomography (CT) on admission revealed right-sided pleural effusion, pleural thickening, and ascites. Hemorrhagic but non-purulent pleural fluid was aspirated, and a Gram stain revealed the presence of gram-negative rods. The pleural fluid analysis showed a pH of 7.0, 2.2 g/dL of total protein, 1.1 g/dL of albumin, 845 U/L of lactate dehydrogenase, <2 mg/dL of glucose, 18.2 U/L of adenosine deaminase, 16 μg/mL of hyaluronic acid, and 16 850/μL of white blood cells (neutrophils 99.5%, lymphocytes 0.5%). On the day of admission, we inserted a chest tube to drain the empyema on the right side and started piperacillin/tazobactam (4.5 g every 8 h) and vancomycin. Cultures of blood and pleural effusion were positive for Shewanella algae, as identified by matrix-assisted laser desorption ionization-time-of-flight mass spectrometry (MALDI-TOF-MS; Bruker Daltonics, Germany). Antimicrobial susceptibility results are shown in the accompanying table. We switched to ceftriaxone (1 g every 24 h) on hospitalization day 4. On hospitalization day 12, a chest CT revealed a new pleural effusion on the left side; therefore, we performed a left thoracentesis. The left-sided pleural fluid was characterized as a transudative pleural effusion, and the culture was negative. On hospitalization day 17, we performed a right thoracentesis, which was culture negative. We stopped pleural drainage on hospitalization day 21. Subsequently, on hospitalization day 33, the patient developed aspiration pneumonia, and the antibiotic was changed to meropenem (1 g every 8 h). On hospitalization day 53, the patient died of aspiration pneumonia.
We encountered a case of a patient with liver cirrhosis who developed Shewanella spp. empyema and bacteremia. Consumption of raw fish may have been a risk factor for this case. Our patient had a poor prognosis because of his underlying disease and complications; however, Shewanella infections may have a better outcome with effective drainage and appropriate antimicrobial treatment. In general, Shewanella spp. infection is caused by exposure to marine products or seawater and mainly presents as necrotizing soft tissue infection and bacteremia. In particular, immunocompromised patients may develop primary bacteremia with a fulminant course. We conducted a literature review of cases of Shewanella infection with bacteremia. Five authors independently reviewed the relevant titles and abstracts in the database records, retrieved full texts for eligibility assessment, and extracted information from these cases. To account for changes in nomenclature, we performed a search using the keywords "shewanella," "alteromonas putrefaciens," and "pseudomonas putrefaciens" in the PubMed, Embase, and Ichushi electronic databases, up to January 2021. We retrieved a total of 330 articles (80 from PubMed, 99 from Embase, and 151 from Ichushi). After removing records not reporting bacteremia due to Shewanella spp., and articles not written in English or Japanese, 158 articles remained. After removing duplicates and performing full-text evaluations of these 158 articles, 66 articles, with data on a total of 124 patients, were included in our review. The results of the literature review and the case list are presented in the accompanying tables. Of the published cases, 35 were from the United States of America, 28 were from South Africa, 16 were from Japan, and 12 were from Taiwan. An analysis of all 124 patients from the studies revealed that 39 (31%) were infected with S. algae, 7 (6%) with Shewanella haliotis, 77 (62%) with Shewanella putrefaciens, and 1 (1%) with Shewanella xiamenensis. The reported cases had a median age of 61.6 (56.0–75.0) years. Where sex was reported, 75/93 (81%) of the infected patients were male. In terms of portal of entry, skin lesions, seen in 21/35 patients (60%), constituted the most common portal, while an oral portal of entry was the second most common, seen in 9/35 patients (26%). The consumption of raw fish was assumed to be the cause of infection in the present case. There have been numerous reports of Shewanella spp. being detected in fish; however, there are no direct reports confirming Shewanella infection in humans resulting from the consumption of fish. Nevertheless, given that many of the individuals with Shewanella infection reported in the literature had a history of fish consumption, the possibility remains. However, it remains necessary to demonstrate whether Shewanella isolated from infected human specimens and Shewanella identified from fish used for consumption are indeed the same. In this literature review, Shewanella species were identified using the API 20 NE system (bioMerieux, France) in 41/93 patients (44%), 16S rRNA gene sequencing analysis (Macrogen, Korea) in 24/93 patients (26%), VITEK 2 GN (bioMerieux, France) in 10/93 patients (11%), ID 32 GN (bioMerieux, France) in 10/93 patients (11%), and MALDI-TOF-MS (bioMerieux, France) in 8/93 patients (9%). However, previous studies have reported that the API 20 NE system, ID 32 GN, VITEK 2 GN, and MALDI-TOF-MS cannot distinguish S. algae from S. haliotis or S. putrefaciens.
Indeed, in the case reported by Yan et al, an identification of S. putrefaciens was conferred by culture and molecular identification of bacteria in the specimen, while next-generation sequencing technology (NGS, Genskey Medical Technology Co., Ltd, Beijing, China) identified the bacteria as S. algae. Furthermore, since Shewanella chilikensis and Shewanella carassii are phylogenetically related to S. algae, they may also be misidentified. In another study, 16S rRNA gene sequencing analysis was found to be more accurate than MALDI-TOF-MS in distinguishing between S. algae and S. haliotis. In the present case, Shewanella was identified using MALDI-TOF-MS alone, and we could not distinguish between S. algae and S. haliotis. In terms of underlying diseases, hepatobiliary disease has been reported to be associated with Shewanella bacteremia. Similarly, in the present literature review, hepatobiliary disease was the most common underlying disease, reported in 38/114 patients (33%). In general, patients with cirrhosis are at risk of bacterial translocation caused by disruption of the intestinal mucosal barrier and suppression of neutrophil function caused by iron overload associated with hepatic dysfunction. In addition, Shewanella spp. are known to possess siderophores that absorb iron and contribute to proliferation. In our case, the patient had a medical history of Child-Pugh B cirrhosis and appeared to be at risk for Shewanella spp. bacteremia. Shewanella spp. are susceptible to third- and fourth-generation cephalosporins, aminoglycosides, carbapenems, erythromycin, and quinolones; however, they are reported to be resistant to penicillins. In this review, although we could not identify the antimicrobial susceptibility in all cases, 39/87 patients were treated with third- or fourth-generation cephalosporins (45%), 35/87 with penicillin (40%), and 31/87 with aminoglycosides (36%). On the other hand, a recent review article reported that the genus Shewanella contains a variety of drug-resistant genetic elements, showing resistance to many drugs, including beta-lactams, quinolones, aminoglycosides, macrolides, and carbapenems. In our case, after the results of the pleural fluid culture were revealed, the antibiotic was changed to ceftriaxone. Although ceftriaxone was not included in the susceptibility testing panel conducted, the minimum inhibitory concentration (MIC) for the similar third-generation cephalosporin ceftazidime was 2 μg/ml, and the MIC for the fourth-generation cephalosporin cefepime was ≤2 μg/ml, indicating susceptibility to cephalosporins. Regarding resistance in this strain, extended-spectrum beta-lactamase (ESBL)- and AmpC-producing Enterobacterales were screened for with a MASTDISCS Combi D68C (Mast Group, Ltd., Bootle, UK) according to the manufacturer’s instructions, which revealed that neither ESBL nor AmpC was produced. Since the strain did not produce the cephalosporinase AmpC, indicating that it was not cephalosporin resistant, we considered it safe to treat with ceftriaxone. Some studies have reported that the prognosis of Shewanella infection is relatively good, and the mortality rate is low. In our review, the mortality rate was approximately 20%, and most types of Shewanella infections were not deep-seated infections but skin and soft tissue infections. Many older patients have multiple underlying diseases. Therefore, prognosis and mortality may depend on the underlying disease, the type of Shewanella infection, appropriate antibiotics, and source control.
Shewanella spp. infections rarely cause empyema. In our review, we found 3 cases of empyema (2 caused by S. algae), and the site of entry of Shewanella infection was unknown for all 3 cases. Two of the 3 patients received drainage and survived. Although our patient died because of aspiration pneumonia, the initial response to treatment with thoracic drainage and antibiotics was favorable. The prognosis of a Shewanella infection, as in this case, may be influenced by the patient’s underlying diseases and overall health condition. To improve treatment outcomes, appropriate antimicrobial therapy and drainage are also important. This study had several limitations. As mentioned above, 16S rRNA gene sequencing analysis is more accurate than MALDI-TOF-MS in identifying S. algae; however, 16S rRNA gene sequencing may not be able to separate closely related strains. Therefore, the most appropriate approaches would be qPCR for specific S. algae genes or whole-genome sequencing. Additionally, we could not exclude reporting bias, wherein researchers and clinicians frequently report successfully treated cases rather than mortality in a case series.
We report a case of Shewanella spp. empyema and bacteremia in a patient with a history of hepatobiliary disease. Our case highlights that clinicians should recognize Shewanella spp. as a cause of empyema and bacteremia in patients with liver cirrhosis, and that microbiological diagnosis with antibiotic sensitivity testing and treatment should be undertaken urgently to prevent fatal sepsis.
|
Effect of education regarding treatment guidelines for schizophrenia and depression on the treatment behavior of psychiatrists: A multicenter study | 60e5625b-9821-42bb-a146-f23381f05023 | 11488608 | Psychiatry[mh] | Study design Participants Implementation strategy of the Outcomes Statistical analysis This study is a multicenter prospective study. One hundred and seventy‐six medical facilities participated in the EGUIDE project from 2016 to 2019. The psychiatrists belonging to the participating facilities were able to choose whether to participate in the intervention of the EGUIDE project. Prescription data at discharge and treatment during hospitalization in patients with schizophrenia and patients with major depressive disorder in hospitals in the EGUIDE projects were collected. These patients' data were divided into two groups: patients under the care of psychiatrists who participated in the EGUIDE project (EGUIDE (+)), and patients under the care of psychiatrists who did not participate in the EGUIDE project (EGUIDE (−)). The primary outcomes of this study are the treatment behaviors of psychiatrists measured by quality indicators. Quality indicators (QIs) were defined to measure adherence to guideline‐recommended treatments as described previously. , The study was approved by the Ethics Committee of the National Center of Neurology and Psychiatry and each participating institution. The protocol of the EGUIDE project is registered in the University Hospital Medical Information Network Registry (UMIN000022645). This study was carried out in accordance with the World Medical Association's Declaration of Helsinki. See Supplementary Methods for details.
A total of 782 psychiatrists belonging to 176 medical facilities participated in the EGUIDE project and attended two separate courses on the guidelines for schizophrenia and major depressive disorder. Participation in the EGUIDE project was voluntary among the psychiatrists at participating EGUIDE facilities. The demographic information of the psychiatrists was as follows: proportion of males: 72.4%; mean age (standard deviation): 33.7 (7.2) years; mean duration of professional experience in psychiatry: 4.8 (6.2) years at the time of the course. Written informed consent was obtained from each psychiatrist prior to participation. Patients were diagnosed based on DSM-5 diagnostic criteria. Eligible patients were individuals with schizophrenia (n = 7405) or major depressive disorder (n = 3794) who were discharged from participating facilities (see the accompanying tables). We collected the medical record information of patients at each institution with opt-out consent. See Supplementary Methods for details.
EGUIDE project: In the EGUIDE project, after participating in the guideline lecture, clinicians receive instruction directly at each medical institution from certified psychiatrists who have been trained in the EGUIDE project (standard implementation strategy; see the accompanying figure). There is also a new implementation strategy in the EGUIDE project (new implementation strategy; see the accompanying figure). See Supplementary Methods for details.
Quality indicators (QIs): QIs are employed to assess and improve the quality of care in many health care settings. A higher QI value means that the proportion of patients receiving the recommended treatment is higher. See the accompanying tables for details.
Statistical analysis was performed using SPSS version 26 (IBM Corp., Armonk, NY). The chi-square test was used to examine the effect of EGUIDE on QIs and to examine differences in the demographic information of patients based on the EGUIDE participation status of their psychiatrists. As age, sex, and institution attributes of patients have been previously reported to be associated with QIs, we performed logistic regression analysis to examine the effect of EGUIDE on QIs after adjusting for the confounding factors of age, sex, and institution attributes of patients. General linear model analysis was additionally performed to examine the effect of EGUIDE on the total numbers of drugs being used in patients with schizophrenia and patients with major depressive disorder, adjusting for the same confounding factors. As 15 tests (eight SQIs in schizophrenia, five DQIs in major depressive disorder, and the number of drugs in patients with schizophrenia or major depressive disorder) were performed to examine the effect of participation in the EGUIDE project, the Bonferroni correction for multiple testing was used, and the level of statistical significance was set at P < 0.0033 (0.05/15).
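As an illustration of the adjusted analysis and multiplicity correction described above, the following is a minimal sketch in Python with statsmodels; the original analyses were performed in SPSS, and the data frame and column names (qi_met, eguide, age, sex, institution) are hypothetical placeholders rather than the authors' actual variables.

```python
# Hedged sketch of the adjusted logistic regression and Bonferroni threshold
# described above. The study used SPSS; this re-expression uses statsmodels,
# and all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

ALPHA_BONFERRONI = 0.05 / 15  # 15 planned tests -> significance threshold of 0.0033

def adjusted_effect_of_eguide(df: pd.DataFrame) -> pd.DataFrame:
    """Regress one binary QI (qi_met) on EGUIDE participation, adjusting for
    patient age, sex, and institution attributes, and return adjusted odds
    ratios (AORs) with 95% confidence intervals."""
    result = smf.logit("qi_met ~ eguide + age + C(sex) + C(institution)", data=df).fit(disp=False)
    ci = result.conf_int()  # columns 0 and 1 hold the lower and upper limits
    return pd.DataFrame({
        "AOR": np.exp(result.params),
        "CI_lower": np.exp(ci[0]),
        "CI_upper": np.exp(ci[1]),
        "p_value": result.pvalues,
        "significant": result.pvalues < ALPHA_BONFERRONI,
    })
```

Exponentiating the coefficients and their confidence limits in this way yields adjusted odds ratios of the form reported in the results that follow.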
Primary outcomes: To examine the improvement in treatment behaviors by the EGUIDE project, we compared the QIs between the EGUIDE (+) and EGUIDE (−) groups in patients with schizophrenia or major depressive disorder. The proportions of the eight SQIs and five DQIs are shown in the accompanying tables. Three SQIs were higher in the EGUIDE (+) group than in the EGUIDE (−) group: SQI-1 (antipsychotic monotherapy regardless of whether other psychotropic medications were used; adjusted odds ratio (AOR), 1.18 [95% CI, 1.07–1.31], P = 1.3 × 10⁻³), SQI-2 (antipsychotic monotherapy without other psychotropics; AOR, 1.42 [95% CI, 1.25–1.62], P = 1.2 × 10⁻⁷), and SQI-3 (no prescription of anxiolytics or hypnotics; AOR, 1.24 [95% CI, 1.12–1.38], P = 6.5 × 10⁻⁵). On the other hand, there was no effect of the EGUIDE project on the remaining five SQIs among patients with schizophrenia. The proportions of two DQIs were higher in the EGUIDE (+) group than in the EGUIDE (−) group among patients with major depressive disorder: DQI-2 (antidepressant monotherapy without other psychotropics; AOR, 1.63 [95% CI, 1.28–2.08], P = 7.6 × 10⁻⁵) and DQI-3 (no prescription of anxiolytics or hypnotics; AOR, 1.36 [95% CI, 1.16–1.59], P = 9.8 × 10⁻⁵). The proportion of DQI-1 (antidepressant monotherapy regardless of whether other psychotropic medications were used) was higher in the EGUIDE (+) group than in the EGUIDE (−) group; however, the difference was not significant (adjusted P = 0.0055). There was no effect of the EGUIDE project on the remaining two DQIs in patients with major depressive disorder. To investigate the effect of EGUIDE on the total number of drugs being used in patients, the total numbers of drugs used in the EGUIDE (+) and EGUIDE (−) groups were compared for both disorders. General linear model analysis was conducted to examine the effect of EGUIDE on total numbers of drugs, and the results showed that patients in the EGUIDE (+) groups used fewer drugs than those in the EGUIDE (−) groups (schizophrenia: F = 30.2, P = 4.0 × 10⁻⁸; MDD: F = 18.1, P = 2.2 × 10⁻⁵).
Sensitivity analysis: The psychiatrists in the EGUIDE (+) group were voluntary participants; therefore, differences may exist between the pre-lecture QIs of EGUIDE (+) psychiatrists and the QIs of non-EGUIDE psychiatrists who never received an EGUIDE lecture during this study period but belonged to the facilities that participated in the EGUIDE project. To investigate this potential selection bias, we examined whether there were differences in QIs in both schizophrenia and major depressive disorder (SQI-1, SQI-2, SQI-3, DQI-2, and DQI-3) between pre-EGUIDE (+) and non-EGUIDE psychiatrists. Four QIs did not differ significantly between pre-EGUIDE (+) and non-EGUIDE psychiatrists; however, SQI-3 did differ significantly between groups (no prescription of anxiolytics or hypnotics, P = 0.049). We used logistic regression analysis to examine the effects of EGUIDE participation on QIs over time (i.e., pre-EGUIDE (+), 1-year, 2-year, and 3-year periods) after adjusting for the confounding factors of age, sex, and type of facility. We also analyzed the effect of time on QIs in the non-EGUIDE group. The proportions of three SQIs in schizophrenia significantly increased over the 3-year period in the EGUIDE (+) group (SQI-1: antipsychotic monotherapy regardless of whether other psychotropic medications were used, P = 0.015; SQI-2: antipsychotic monotherapy without other psychotropics, P = 2.5 × 10⁻⁴; and SQI-3: no prescription of anxiolytics or hypnotics, P = 0.022). On the other hand, in the non-EGUIDE group, the proportion of SQI-1 significantly decreased over the 3-year period, and no changes in the proportions of SQI-2 or SQI-3 were observed. A significant increase in the proportions of two DQIs in major depressive disorder was observed over the 3-year period in the EGUIDE (+) group (DQI-2: antidepressant monotherapy without other psychotropics, P = 3.4 × 10⁻³; and DQI-3: no prescription of anxiolytics or hypnotics, P = 5.4 × 10⁻³), while no such trend was observed in the non-EGUIDE group.
In this study, we found that the proportions of antipsychotic monotherapy regardless of whether other psychotropic medications were used, antipsychotic monotherapy without other psychotropics, and no prescription of anxiolytics or hypnotics in schizophrenia were significantly higher in the patients treated by psychiatrists who participated in the EGUIDE project. We also found similar results for the corresponding proportions in major depressive disorder. These results suggested that participation in the EGUIDE project might have an effect on adherence to guidelines among psychiatrists. A systematic review of the effectiveness of guideline implementation strategies for schizophrenia spectrum disorders in six small studies demonstrated that it is not possible to arrive at definitive conclusions because of the very low-quality evidence among these studies. The EGUIDE project has demonstrated real-world effectiveness in disseminating guidelines for schizophrenia and major depressive disorder in many psychiatric institutions nationwide. Higher QIs in the EGUIDE (+) group were shown; however, this effect was found only for some QIs. The reason for the discrepancy in the effect of the EGUIDE project among QIs should be discussed. All of the higher QIs in the EGUIDE (+) group were related to drug treatments, which can be implemented without the special medical environment of hospitals or special skills on the part of psychiatrists. Therefore, it is conceivable that all psychiatrists who took the EGUIDE training course could easily implement the guidelines. On the other hand, QIs that were not affected by the EGUIDE project, such as mECT, clozapine, and cognitive behavioral therapy, can only be implemented in a limited number of medical facilities. Even psychiatrists require special training and qualifications to implement such treatments. Therefore, it is difficult to achieve a higher QI only by receiving training on the guidelines. An increase in the number of medical facilities where mECT, clozapine, and cognitive-behavioral therapy can be implemented, as well as separate training on these skills for psychiatrists and an increase in the number of psychiatrists who become qualified, might be necessary in addition to training on the guidelines. On the other hand, the EGUIDE project was found to be equally effective for antipsychotic or antidepressant monotherapy without other psychotropics and for no prescription of anxiolytics or hypnotics (SQI-2, SQI-3, DQI-2, and DQI-3) in schizophrenia and major depressive disorder. Furthermore, the overall number of psychotropics in the EGUIDE (+) group was lower than that in the EGUIDE (−) group in schizophrenia and major depressive disorder. These outcomes have in common the nonrecommendation of polypharmacy, despite differences in the recommended treatments between the two guidelines, which may have resulted in a higher learning effect of participating in the EGUIDE project. In a previous systematic review of existing guidelines for major depressive disorder or bipolar disorder, it was noted that the guidelines themselves need to provide implementation strategies for their recommendations. This study found that the improvement in the evidence-practice gap due to educational effects was replicated not only for the Guideline for Pharmacological Therapy of Schizophrenia but also for the major depressive disorder treatment guidelines. These results suggest that the EGUIDE project may be a versatile and effective strategy for guideline-based practice quality improvement.
Therefore, it seems likely that the implementation strategies of the EGUIDE project could be reflected in other guidelines to meet the need for promoting implementation. Gallego et al . reviewed the global and regional trends of antipsychotic polypharmacy across 147 studies (published from 1970 to 2009) including one million and forty‐two thousand participants (83%) diagnosed with schizophrenia. The proportion of antipsychotic polypharmacy was not different between decades; however, regarding regions, the prevalence was higher in Asia and Europe than in North America. Research on Asian Psychotropic Prescription Patterns in 2016 examined the proportion of polypharmacy in patients with schizophrenia among 15 countries and areas in Asia. These data demonstrated a continued declining trend of antipsychotic polypharmacy in Japan; however, Japan continued to demonstrate the highest proportion of polypharmacy and highest dosages of psychotropic prescription drugs in Asia. Although a polypharmacy reduction policy by the government, which reduces the reimbursement of medical costs, was introduced in 2014 and 2016, it seems to be insufficient. National mental health policies based on hospitals and financing systems might be obstacles to reducing polypharmacy in Japan. The severity of major depressive disorder and polypharmacy in 44 000 patients in Europe from 2001 to 2017 suggests a trend toward polypharmacy depending on severity; however, changes in polypharmacy over time were not reported. Changes in psychotropic polypharmacy in patients with schizophrenia or major depressive disorder have not been reported from 2016 to 2019. A more recent study reported no statistically significant effect of the new Japanese policies for appropriate hypnotic use on long‐term prescriptions of hypnotics. However, the proportion of antipsychotic or antidepressant polypharmacy might have been reduced after 2016 in Japan, and the reduction in QI values in this study could be biased. This study has several limitations. The fundamental limitation of this study is the lack of randomization. In the current analysis, it is not possible to rule out that the observed changes are due to selection bias. The psychiatrists who joined the project are expected to have strong motivation to adhere to the treatment guidelines. Thus, it is possible that differences in the QIs may exist even before the training sessions. Sensitivity analysis showed that four QIs out of five QIs before the training sessions were not different, while one QI (SQI‐3, no prescription of anxiolytics or hypnotics) in the EGUIDE (+) group was significantly higher than that in the non‐EGUIDE group ( P = 0.049). These results suggested that parts of the effect of the EGUIDE project could be due to selection bias. As the primary outcome is the comparison of QIs between EGUIDE (+) and EGUIDE (−), sensitivity analysis should be performed to compare the QIs before and after the training sessions. The proportion of QIs tended to increase over time in the five QIs in the EGUIDE (+) group; however, no increasing trend in the proportion of QIs was observed in the non‐EGUIDE group. These data supported the effectiveness of EGUIDE in increasing the proportion of QIs; however, further research using a randomized control design should be performed to confirm these results. It cannot be ruled out that the patient characteristics between the EGUIDE (+) and EGUIDE (−) groups affected the results. 
It has been reported that differences in age, sex, and institution attributes of patients can affect QIs. These confounding factors should be controlled. After controlling for these factors, significant effects of the EGUIDE project were observed for five of six QIs, while no significant effect was observed for one QI (DQI-1, antidepressant monotherapy regardless of whether other psychotropic medications were used). These results suggested that parts of the effect of the EGUIDE project could be due to confounding factors. Other possible confounders, such as the characteristics of participating and nonparticipating psychiatrists, were not controlled in this study. Basic sociodemographic data for the psychiatrists in the participation group were available; however, no sociodemographic data were available for the psychiatrists in the nonparticipation group, who did not provide consent to participate in the study. Thus, no comparison could be made. There are limitations due to the design of this project, which was not randomized. The results should be interpreted with caution and need to be confirmed in the future through randomized controlled trials. Comorbidity with schizophrenia or major depressive disorder was not evaluated. The present study focused on a single psychiatric diagnosis, while similar previous studies have examined a mixture of various disorders. As comorbid illnesses such as substance use disorders could have poorer prognoses, further studies considering comorbidities are warranted. All data in this study were collected from inpatients. Therefore, it may be difficult to fully adapt the results to outpatients. In addition, the participants were relatively young psychiatrists, and thus, the findings may not be generalizable to older psychiatrists. Implementation effects were observed; however, the study was conducted in institutions that participated in the EGUIDE project, and thus, the results may not be generalizable to nonparticipating institutions. Moreover, testing many quality indicators could produce type I errors: some results may appear significant by chance, even when the Bonferroni correction for multiple testing is used. A QI is an indicator of guideline-recommended treatment at the level of a patient group, such as a hospital, a ward, or an individual psychiatrist's caseload. There are certainly cases that do not follow the guidelines in actual clinical situations, such as clozapine treatment in schizophrenia. In such cases, the QI score to aim for may be lower than for other QIs. A low QI score does not necessarily indicate a clinical problem, and it is considered necessary to establish appropriate target values for each QI. We also recently developed an individual fitness score formula that expresses the degree to which prescribers adhere to the guidelines for schizophrenia and major depressive disorder. An individual fitness score could be useful to visualize the degree to which current prescriptions conform to the guidelines for each patient. However, there is an important limitation: when the QI or individual fitness score is low, the cause of the low score can be examined and individualized by the supervising physician in actual clinical practice. A recent study published in 2023 suggested that polypharmacy is better than monotherapy in reducing admissions due to physical or cardiovascular problems in patients with schizophrenia. The treatment guideline in this study recommends monotherapy.
This recommendation was based on the systematic review of several outcomes, including adverse effects, at the time of 2015 when the guideline was issued. The treatment guideline should be revised in the near future to reflect this kind of new evidence. The EGUIDE project started with 22 facilities and is now implemented in more than 280 facilities. It is expected that the participation of psychiatrists who have not yet taken the course will lead to more education, dissemination, and verification of its effectiveness in actual clinical practice. The EGUIDE project revised the “Guideline for Pharmacological Therapy of Schizophrenia 2022” based on dissemination, education, and validation activities. The EGUIDE project is now in a new phase of dissemination activities by redesigning training materials to match the 2022 edition and creating new knowledge questions, behavior questions, and QIs. The 2022 edition of the guideline has been developed together with the patients, their families, and diverse supporters and was designed to be a guideline to be used not only by psychiatrists but also with them. The user's point of view should be respected in developing guidelines, as in the development of standard medical treatment. However, this is only the accomplishment of one cycle of the feedback loop; further evidence is necessary. Evidence will change with time, and society will change as well. To make continuous improvements for patients, human resource development to support this system is necessary. This study might succeed in providing patients with guidelines that are based on clinical and research work done by all psychiatrists for the benefit of their patients. We, the psychiatrists of the present, will bring the thoughts of all the psychiatrists of the past to the patients and pass them on to the psychiatrists of the future. Through this study, we hope to convey these thoughts and principles to all psychiatrists.
R Hashimoto, Hasegawa, and Yasuda have full access to all study data and take responsibility for the integrity of the data and the accuracy of the data analysis. Hasegawa and Yasuda are co-first authors. R Hashimoto made substantial contributions to the conception and design of the work. Each author is expected to have performed the acquisition, analysis, or interpretation of data and critical revision of the manuscript for important intellectual content. Hasegawa, Yasuda, and R Hashimoto drafted the work. Hasegawa, Miura, Yasuda, and R Hashimoto developed the statistical analysis plan and conducted statistical analyses. Hasegawa, Watanabe, and R Hashimoto obtained funding. Hasegawa contributed to project administration and technical or material support. Watanabe and R Hashimoto helped supervise the project.
We have no conflict of interest with any commercial or other association connected with the submitted article. Relevant financial activities outside the submitted work in the past 36 months are as follows: Grants or contracts: Mochida Pharmaceutical Co., Ltd.: F.K., K.W.; Terumo Life Science Foundation and Nishikawa Medical Promotion Foundation: R.F.; Sumitomo Pharma Co., Ltd.: T.K.; the Yamaguchi University of Medicine FOCS project, Patents of JPB 7108281: Hirotaka Y.; Daiichi Sankyo Company, Eisai Co., Ltd., Meiji‐Seika Pharma, Mitsubishi Tanabe Pharma Corp., MSD K.K., and Pfizer, Sumitomo Pharma Co., Ltd.: K.W.; Otsuka Pharmaceutical Co., and Takeda Pharmaceutical Co., Ltd.: K.W., R.H.; Japan Tobacco Inc.: R.H. Royalties or licenses: Sumitomo Pharma Co., Ltd.: T.K. Consulting fees: Boehringer Ingelheim, Otsuka Pharmaceutical Co. Ltd., and Sumitomo Pharma Co., Ltd.: T.K., K.W.; Chugai Pharmaceutical Co., Ltd.: T.K.; Daiichi Sankyo Company, Eisai Co., Ltd., Eli Lilly Japan K.K., Kyowa Company, Ltd., Lundbeck Japan, Luye Life Sciences Group Japan Co., Ltd., Mitsubishi Tanabe Pharma Corp, Pfizer, and Taisho Pharma Co., Ltd.: K.W.; Janssen Pharmaceutical K.K. and Takeda Pharmaceutical Co., Ltd.: M.U., K.W.; Mochida Pharmaceutical Co., Ltd.: N.Y.F.; Santen pharmaceutical Co., Ltd.: F.K.; Shionogi & Co., Ltd.: M.U. Payment or honoraria: AstraZeneca plc: Kenta I.; Boehringer Ingelheim: Naoki H.; Daiichi Sankyo Company: J.I., S.N., Tatsuya N., M.T.; EA Pharma Co., Ltd.: H.I.; Eisai Co., Ltd.: Hisashi Y., H.H., Y.T., H.I., H.M., S.N., T.K., Toshinori N., E.K., Hirotaka Y., M.T., T.O., Ken I., K.W.; Eli Lilly Japan K.K.: H.I., K.F., Ken I., K.W.; Janssen Pharmaceutical K.K.: N.Y.F., Hisashi Y., H.H., H.I., H.M., F.K., J.I., Naoki H., K.F., S.N., T.K., Toshinori N., E.K., Tatsuya N., M.T., Ken I., K.W., R.H.; Kowa Company, Ltd.: H.I.; Kyowa Company, Ltd.: Y.T., H.I., J.I., K.F., E.K., K.W.; Lundbeck Japan; F.K., K.F., S.N., E.K., Ken I., K.W.; Meiji‐Seika Pharma: N.Y.F., Hisashi Y., H.H., Y.T., H.I., H.M., F.K., J.I., Naoki H., K.F., S.N., E.K., H.K., M.T., Ken I., K.W., R.H; Mitsubishi Tanabe Pharma Corp.: S.N., T.K., E.K., Ken I., K.W.; Mochida Pharmaceutical Co., Ltd.: N.Y.F., Hisashi Y., J.I., S.N., T.K., Hirotaka Y., Ken I.; MSD K.K.: Hisashi Y., Y.T., H.I., H.M., J.I., K.F., S.N., T.K., Toshinori N., E.K., Tatsuya N., M.T., Ken I., K.W.; Nippon Shinyaku Co., Ltd.: E.K.; NIPRO CORPORATION: Ken I.; Nobel pharma Co., Ltd.: J.I., M.U.; Novartis: T.K., Ken I.; Otsuka Pharmaceutical Co., Ltd.: N.Y.F., Hisashi Y., H.H., Y.T., H.I., H.M., F.K., J.I., Naoki H., K.F., S.N., T.K., Toshinori N., E.K., S.O., Tatsuya N., H.K., Hirotaka Y., Kenta I., M.T., T.O., Ken I., K.W., R.H.; Pfizer: T.K., Ken I., K.W.; Shionogi & Co., Ltd.: Y.T., H.I., J.I., Tatsuya N., Ken I., K.W.; Sumitomo Pharma Co., Ltd.: N.Y.F., Hisashi Y., H.H., Y.T., H.I., H.M., J.I., Naoki H., K.F., S.N., T.T., T.K., Toshinori N., E.K., S.O., Tatsuya N., H.K., Hirotaka Y., M.T., T.O., Ken I., K.W., R.H.; Takeda Pharmaceutical Co., Ltd.: Y.Y., N.Y.F., Hisashi Y., H.H., Y.T., H.I., H.M., J.I., Naoki H., K.F., S.N., T.T., M.U., Toshinori N., E.K., Tatsuya N., H.K., M.T., T.O., Ken I., K.W., R.H.; TEIJIN PHARMA LIMITED: S.N., E.K.; TOWA PHARMACEUTICAL Co., Ltd.: E.K.; Tsumura & Co.: N.Y.F.; Viatris: N.Y.F., Hisashi Y., H.I., H.M., J.I., K.F., T.T., E.K., Tatsuya N., Hirotaka Y., M.T., Ken I.; Yoshitomiyakuhin Co.: N.Y.F., Hisashi Y., Y.T., H.I., J.I., Naoki H., K.F., S.N., E.K., M.T., Ken I. 
Leadership or fiduciary role: The Japanese Society of Psychiatry and Neurology, Commissioned Secretary and Guidelines Review Committee: K.F.
Table S1. Categorization of the drugs in this study. Table S2. Definitions of quality indicators according to clinical practice guidelines. Table S3. Characteristics of the patients for the numbers of drugs being used. Table S4. Characteristics of the patients with schizophrenia in the pre‐EGUIDE (+) and non‐EGUIDE groups. Table S5. Characteristics of the patients with major depressive disorder in the pre‐EGUIDE (+) and non‐EGUIDE groups. Table S6. Longitudinal changes in QI values and characteristics in patients with schizophrenia treated by psychiatrists in the EGUIDE (+) and non‐EGUIDE groups. Table S7. Longitudinal changes in QI values and characteristics in patients with major depressive disorder treated by psychiatrists in the EGUIDE (+) and non‐EGUIDE groups.
|
Evaluation of ophthalmic surgical simulators for continuous curvilinear capsulorhexis training | bd1fbc68-7115-4bfb-a9ef-1d4a16305870 | 9018214 | Ophthalmology[mh] | The study was approved by the Institutional Review Board of the Albert Einstein College of Medicine and was conducted in association with the Office of Clinical Trials and the Henkind Eye Institute at the Montefiore Medical Center in the Bronx, New York. Funding for the project was provided by a restricted educational grant from the Manhattan Eye and Ear Foundation. Three commonly used capsulorhexis simulators were chosen and sourced based on experience and availability, namely the Kitaro DryLab model (Frontier Vision Co., Ltd.), SimulEYE SimuloRhexis model (InsEYEt, LLC), and the Bioniko Rhexis (Bioniko Consulting LLC). The Kitaro DryLab model (Figure , a) has a central pupil diameter of 14.0 mm with an open-sky configuration and prefabricated openings that simulate clear corneal incisions. The simulated capsule is composed of a 5-micron-thick, polyester bilayer that comes on a roll allowing multiple attempts. The capsule is placed slightly taut on a reusable artificial resin clay nucleus that simulates a cataract. In this study, the simulated eye was mounted within a rubber face to simulate human facial contours. As recommended by the manufacturer, ophthalmic viscosurgical device was placed on the surface of the capsular film to simulate an anterior chamber. The SimuloRhexis model (Figure , b), with a physiological central pupil diameter of 8.0 mm, features an anterior chamber that can be filled with ophthalmic viscosurgical device and an artificial cornea that requires a standard keratome incision prior to the CCC, as is performed in actual cataract extraction. This model suctions directly onto a flat surface and allows the user to simulate variable posterior pressure by mechanically adjusting the base of the simulator. The Bioniko Rhexis model (Figure , c), with a central pupil diameter of 9.0 mm, was stabilized with the recommended Mini Holder prior to use in this study. Similar to Kitaro DryLab, this model has an open-sky configuration but, by contrast, features a limbal corneal ridge that can be incised with a standard keratome blade. To maintain proper consistency of the material, the entire surface was moistened with water prior to use as per recommendations. Expert cataract surgeons (N = 7), defined as having performed greater than 1000 primary cases, were identified, and informed consent was obtained. Each surgeon was tasked to create a 5.5-mm CCC on all three simulators, which were presented in a randomized sequence for a total of three trials on each model. The study was performed under standard operating room conditions at the Hutchinson Metro Center Operating Suite in Bronx, New York. With a sample size of 7 surgeons performing a total of 63 total trials, the study had 80% power with a 2-sided type I error rate of 5% to detect a minimum effect size of 1.3 in the measured outcomes among simulators. The surgeons were instructed to position themselves as they would for an actual procedure, and foot-pedal controlled Zeiss Lumera microscopes with recording capabilities were used for each trial. The unmarked and previously prepared simulators were each placed directly in front of the surgeons on a raised metal tray table in randomized fashion. 
The standardized materials used included the following: a dual-bevel, 2.75-mm microkeratome blade to make the clear corneal incisions for SimulEYE and Bioniko, dispersive ophthalmic viscosurgical device (VISCOAT) for Kitaro and SimulEYE, a standard bent cystotome needle on a 1-mL syringe to make the initial anterior capsular rent, and a pair of standard titanium Utrata forceps to create the CCC. The primary measured outcomes included the size of the completed CCC (millimeters), the number of capsular forceps manipulations (number of grabs) required, and the task duration (seconds). Immediately after each CCC attempt, surgeons were asked to subjectively rate on a modified Likert scale (1 to 7) how closely the model simulated human tissue using the following question: “On a scale from 1 to 7, with 1 signifying the least realistic simulation experience and 7 signifying the most realistic simulation experience, how well does this kit simulate performing a CCC on real tissue?” The names of the simulators were not revealed to the surgeons until after all trials were completed. Outcome measures were summarized for each kit and trial by computing means and standard deviations. In addition, multiple linear regression models that included kit, trial, and surgeon as predictor variables were fit to the data to assess the independent effects of each factor on each of the outcomes. A 2-sided P value less than 0.05 was considered statistically significant. All analyses were performed using SAS v. 9.4 (SAS Institute Inc.).
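The regression step described above was carried out in SAS; purely as an illustration, the sketch below shows an analogous multiple linear regression in Python. The file name and column names (trials.csv, kit, trial, surgeon, ccc_size_mm) are hypothetical placeholders rather than the authors' actual dataset, and the model is a plain fixed-effects analogue of the analysis named in the text.

```python
# Minimal sketch (not the authors' SAS code): model one outcome measure as a
# function of kit, trial, and surgeon, all treated as categorical predictors.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per CCC attempt.
df = pd.read_csv("trials.csv")  # columns: kit, trial, surgeon, ccc_size_mm

# Mirrors the "kit + trial + surgeon" multiple linear regression in the text.
model = smf.ols("ccc_size_mm ~ C(kit) + C(trial) + C(surgeon)", data=df).fit()
print(model.summary())  # coefficients and p-values for each factor level
```

The same formula can be refit with the number of grabs or the task duration as the response variable to reproduce the other two outcome analyses.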
A total of 63 trials (7 surgeons completing three trials on each of the 3 simulators) were performed in a randomized fashion. The results for each primary outcome are presented. There were statistically significant differences among the simulators and across the 3 trials for all outcome measures. Regarding size (maximum diameter in millimeters), surgeons created the 5.5-mm CCC most accurately on the Bioniko and SimulEye models. Surgeons performed the largest average CCC on the Kitaro model (8.00 ± 0.84) compared with both Bioniko (5.24 ± 0.60, P < .0001) and SimulEYE (5.11 ± 0.41, P < .0001). Across all simulators, CCC size was overall larger in the third trials (6.29 ± 1.56) compared with the first trials (5.94 ± 1.39, P = .003, Figure ). Surgeons spent more time (seconds) performing the CCC on Bioniko (41.95 ± 26.70) than on both Kitaro (32.05 ± 14.99, P = .02) and SimulEYE (28.90 ± 15.18, P = .002) and more time on average on trial 1 (42.24 ± 25.23) than that on trials 2 (28.48 ± 15.87, P = .001) and 3 (32.19 ± 16.44, P = .01, Figure ). Bioniko required a greater number of grabs (6.53 ± 3.14) than both Kitaro (4.90 ± 2.47, P = .01) and SimulEYE (3.90 ± 1.34, P < .0001). Trial 1 (6.19 ± 3.57) had a greater number of grabs than both trials 2 (4.33 ± 2.01, P = .002) and 3 (4.81 ± 1.63, P = .02, Figure ). The Kitaro (4.56 ± 0.84, P < .0001) and SimulEYE models (4.19 ± 0.92, P < .0001) were rated as more realistic by the surgeons than the Bioniko model (1.38 ± 0.80) on a 7-point modified Likert scale (Figure ). The highest numbers on the modified Likert scale represent the most realistic simulation experience.
Ophthalmic surgical simulators are in popular use by residency training programs and offer novice surgeons the opportunity to practice complex maneuvers in preparation for actual surgery in a safe and controlled environment. Studies have demonstrated improved performance by students and residents after practicing either on simulator devices or in the wet lab. Specifically, Belyea et al. showed that surgeons who trained on EYESi had shorter phacoemulsification times, lower phacoemulsification power, fewer intraoperative complications, and a shorter learning curve on average than those who were not trained on EYESi. Pokroy et al. also found that ophthalmic surgical simulators shortened the learning curve for the first 50 cataract cases, with less adept residents benefiting the most from the training. It is imperative to note that both of these studies involved virtual reality surgical simulation through the EYESi module; neither used any of the 3 models that were used in this study. The Kitaro model has been studied for steps including the CCC; however, this was performed using the Da Vinci Robotic Surgical System on the Kitaro WetLab model. In our analysis, we chose the Kitaro DryLab model with manual CCC creation as this is the more commonly used training tool for this task. To the authors' knowledge, no studies have been reported on the Bioniko Rhexis or SimulEYE SimuloRhexis models. The advertised cost of materials to perform 100 CCCs, not accounting for institutional discounts, was $970 for Bioniko, $995 for Kitaro, and $715 for SimulEYE. Of note, the Kitaro kit uses a roll of replaceable capsular film that allows for multiple additional practice opportunities. From the perspective of the expert surgeons who participated in this study, the experience of creating the CCC on the SimulEYE SimuloRhexis and Kitaro DryLab simulator kits was believed to most closely approximate the experience of creating the CCC in real-life cataract surgery. Surgeons also tended to perform the CCC faster on average with both of these simulators compared with the Bioniko model. This result is reasonable given that the Bioniko model is designed to tear in a manner that allows more capsular grab attempts. Regarding size, surgeons created a 5.5-mm diameter CCC most precisely on the Bioniko and SimulEYE models compared with the Kitaro model. We surmise that this is due to the naturally larger pupil diameter on the Kitaro model, which may have led to a tendency for surgeons to create a larger CCC. In general, surgeons performed faster CCCs over the three trials on the Kitaro and Bioniko models, suggesting a learning curve on these simulators with practice. Of interest, there was no significant learning curve with the SimulEYE model across the three trials, and surgeons' overall performance was the most consistent among the three trials on this model. Beyond the formal survey, extemporaneous comments from the surgeons regarding each of the models were also recorded in real-time during each CCC trial (Table ). Regarding task difficulty, it was noted that the Kitaro DryLab model was oversimplified relative to the SimulEYE and Bioniko simulators, which incorporate the creation of a triplanar clear corneal incision. Furthermore, a distinct advantage of the SimulEYE SimuloRhexis model noted by the surgeons was the ability of the capsular tissue to remain everted between grabs. Some surgeons did find the SimulEYE capsule to be overly brittle and tear more easily than a true capsule, however.
Regarding the clear corneal incision, it was noted by some that the Bioniko Rhexis felt the most realistic as the consistency and memory of the wound felt similar to that of a true cornea. However, surgeons overwhelmingly found that the capsular tissue of the Bioniko model was overly friable and did not tear naturally. Of note, the Bioniko Rhexis model is purposefully designed to promote frequent capsular regrasping and allow for the assessment of the amount of corneal wound manipulation. This pilot study was designed to formally analyze both subjective and objective differences among the three simulators. The underlying assumption was that highly experienced surgeons can provide the most nuanced feedback comparing the simulators to human tissue. These results, however, do not necessarily validate the efficacy of these simulators in training novice surgeons. Larger case–control studies designed to formally evaluate learning curves, surgical complication rates, and possibly ergonomics are necessary to make broader conclusions and recommendations. To the authors' knowledge, this is the first study to systematically evaluate CCC training simulators from the perspective of expert cataract surgeons. Although the SimulEYE SimuloRhexis was found in our study to have an advantage when looking at the overall performance and fidelity across the studied metrics, each of the 3 capsulorhexis simulators tested has its own unique advantages and disadvantages. Each residency training program should make decisions on which simulator best suits its training needs based on an individual assessment and the resources available. Further validation studies are needed to determine the effect the simulation training has on actual surgical outcomes for novice surgeons.
WHAT WAS KNOWN
WHAT THIS PAPER ADDS
To the authors' knowledge, this is the first study to formally compare the experience of creating the CCC on a variety of ophthalmic surgical simulators from the perspective of expert cataract surgeons. This study presented objective and subjective feedback on CCC creation on surgical simulators, allowing residency programs to determine which simulators best suit their training needs.
Ophthalmic surgical simulators allow surgeons of all skill levels to practice specific steps of ophthalmic surgery in preparation for the operating room. The continuous curvilinear capsulorhexis (CCC) is a fundamental step of cataract surgery and one of the most challenging maneuvers for surgeons to master.
|
Pathologists’ first opinions on barriers and facilitators of computational pathology adoption in oncological pathology: an international study | 6cc86735-e8cb-4754-abf4-855906b25ba4 | 10504072 | Internal Medicine[mh] | Over the past decade, advances in scanning and storage hardware have resulted in widespread use of whole slide images (WSI) in pathology, often referred to as ‘digital pathology’. Digital pathology opens the door for applying machine learning techniques capable of extracting diagnostic information from scanned slides. An example can be seen supporting oncological diagnostics . The most widely used machine learning techniques for WSI are convolutional neural networks (CNN), which are a type of deep learning models that are extremely powerful for analyzing image data . Successfully developed CNN can automatically detect, segment, or classify cancer in WSI. Their capabilities approach or even exceed the accuracy of pathologists for specific tasks primarily within oncological pathology . Using deep learning for WSI (computational pathology; CPath) can increase efficiency by potentially reducing pathologists’ workload and automating repetitive task of low complexity such as screening for metastases within lymph nodes of breast cancer patients . It may also be helpful in evaluating biomarkers that are hampered by significant inter-observer variability to increase accuracy, speed and objectivity of diagnoses , thereby facilitating accurate treatment decisions. Examples are Gleason grading of prostate cancer and the detection of tumor buds within early colorectal tumors . In addition, CPath can also potentially yield new diagnostic clues which have not been recognized by pathologists before . Despite the promising results of CPath, several challenges have to be explored and addressed before it can be used in clinical practice: 1) building trust in using of CPath within medical practice (presuming deep learning models are represented as black boxes); 2) developing robust and trustworthy CPath trained with high-quality data from various sources to increase generalizability and prevent selection bias; 3) conducting large-scale (preferably prospective) peer-reviewed validation studies showing impact on patient care; 4) deciding on how to incorporate CPath into daily routine practice, including the assignment of responsibility; 5) finding solutions to ethical concerns; 6) certifying CPath to acquire a legal basis . Implementation is often only considered after an innovation is already widely available in clinical practice. However, concerning the future use and of CPath applications in clinical practice, early involvement of potential end-users is critical for gaining wider clinical usage and tailoring future implementation strategies to the needs of the potential end-users . Current literature entails many influencing factors from a CPath developer perspective, but perspectives on CPath adoption from the end-users are limited . As challenges of CPath clinical use are present at a global level, multiple countries should be involved in this explorative process. Therefore, the objective of this study is to explore international perspectives on the future role of CPath in clinical practice by focusing on opinions and first experiences regarding barriers and facilitators. These opinions and first experiences will inform the development of validation studies, implementation trajectories and communication activities for creating widespread stakeholder acceptance.
[Results subsections: Literature study; eSurvey; Semi-structured interviews; Innovation factors – CPath algorithms; Individual health professional factors – pathologists and pathology residents; Professional interactions – laboratory and multidisciplinary team; Incentives and resources – hospital or external laboratory; Capacity for organizational change – hospital or external laboratory; Social, political and legal factors – healthcare regulation]
We found 14 review studies in total describing barriers and facilitators for CPath clinical use . Strengths of CPath use were by far the most frequently mentioned factors in these studies. Other common topics extracted were barriers regarding the quality of evidence supporting CPath outcomes and a potential lack of trust in or acceptance of AI systems by pathologists. Facilitators were clarification on AI training and the need for completely digitized pathology workflows.
The eSurvey yielded 70 responses in total, including 38 pathologists working in the Netherlands and 32 working abroad. Figure shows the replies to the statements, in total and disaggregated for the two subgroups. Respondents' characteristics are shown in Table . The Dutch respondents included more non-academic pathologists than the international respondents. Overall, most respondents had a positive attitude towards CPath use in clinical practice: 61 out of 70 (87%) would currently start using CPath algorithms when available as a support tool in oncology diagnostics. A similar percentage of respondents (83%, n = 58) expected to be using CPath algorithms in clinical practice 5 years from now. Sixty-seven percent ( n = 47) of respondents perceived CPath as the future promise in clinical pathology, with Dutch pathologists having a more positive view (82%, n = 31 vs. 50%, n = 16). In line with this point, almost all international respondents demanded prospective validation studies (94%, n = 30), whereas only half of Dutch respondents (53%, n = 20) needed this before clinical adoption. Similarly, fewer Dutch pathologists required a full functional explanation of the CPath algorithm (32%, n = 12 vs. 78%, n = 25).
In total, we interviewed 15 pathologists and 1 pathology resident, of whom eight were working in the Netherlands. The average years of experience was 14 (range 1–30). Diverse areas of focus were represented in the interview study. Common areas such as breast cancer and gastroenterology were included, as well as uncommon ones such as pediatric and endocrine diagnostics. The interviewees' characteristics can be found in Table . We found opinions and first experiences regarding 65 barriers and 130 facilitators for implementing CPath algorithms in histopathology, of which 29 barriers and 72 facilitators were mentioned in at least two interviews (Tables and ). These influencing factors are illustrated with quotes (Table ). Some quotes were translated from Dutch to English.
Most barriers regarding CPath algorithms related to quality of evidence: some interviewees doubted the reliability of CPath. One of the reasons shared was the use of pathologists' expertise, which is subject to inter-observer variability, as the reference standard in supervised learning for CPath development. Concerns were also expressed regarding the actual impact of CPath use in clinical practice and its prospective and local validation. Regarding feasibility, since a large amount of data is required to train CPath algorithms, pathologists expect it will be challenging to develop CPath algorithms for rare cancer types. Pathologists who already used CPath algorithms mentioned the additional effort of manually selecting an area before applying the mitosis-counting CPath algorithm and of correcting the CPath output after tissue analysis as barriers to implementing CPath in daily practice. Effort is also needed to implement CPath in daily practice. Another barrier related to CPath's compatibility is that the quality of CPath algorithms depends on the quality of the steps in the workflow taken before slide digitization. Another barrier may be CPath being supplied by commercial parties with potential conflicts of interest in scientific publications supporting the CPath algorithms and lacking knowledge regarding the specific medical context in which CPath algorithms will be used. In addition to these barriers, many facilitators were mentioned. Many strengths of using CPath in clinical practice were recognized: clinical use of CPath was expected to ultimately result in a decreased workload, better treatment choices, the discovery of new prognostic factors and the development of more comprehensive CPath algorithms. Corresponding to quality of evidence, proper development and reliability proven by internal and external validation of CPath algorithms were mentioned as facilitators. There is disagreement regarding the methodology required to determine clinical benefit and whether retrospective or non-inferiority studies are sufficient or only prospective clinical trials will be valid. Concerning compatibility, interviewees shared different intents for using CPath within their workflow and for giving it a leading or supportive function in the diagnostic process. The preferred function was mainly argued based on the type of task and the perceived reliability of CPath. In general, CPath should be integrated within existing digital workflows while also being able to run in the background. Regarding feasibility, interviewees asked for fast-analyzing, user-friendly CPath for both "standard" and more complex diagnostics. Some interviewees additionally argued for the necessity of CPath outcome control. A few interviewees argued that the accessibility of CPath should not be limited to the CPath product range of scanner suppliers, while others asked for fully open-source CPath algorithms. Sufficient validation, safe data use and ongoing development by reliable CPath suppliers were deemed necessary.
Critical attitudes regarding CPath among potential end-users were present, illustrated by the statement that the additional value of clinical use should first be demonstrated sufficiently, especially before giving CPath a leading function within the pathology diagnostic workflow. Critically assessing their own awareness and familiarity, many interviewees found that they lacked knowledge of and experience with CPath and perceived the technique as a black box. Even so, opinions differed on whether pathologists should understand the functioning of CPath. Concerns regarding the expected outcome focused on rather minor deviations that could still have an impact on clinical outcomes. Clinical introduction may also be hindered by the fact that only a small error margin will be acceptable to users for an entirely new technology. Concerns were raised about a loss of domain knowledge and skills within the field of pathology if users become too reliant on CPath. Emotions linked to the use of CPath included fear of job loss. Despite these barriers, there was a positive attitude toward CPath algorithms in general. To become more familiar with CPath algorithms and to gain trust in them, a step-by-step approach was mostly suggested. Some were already involved in CPath research or were already using CPath in daily practice. Regarding intention and motivation, most interviewees intended to start using CPath applications for quantification tasks such as Gleason grading and lymph node screening. With CPath algorithms performing solely isolated tasks, interviewees foresaw processing and integrating a wide variety of data from different sources as a key skill when using CPath in clinical practice.
Two facilitators were mentioned with regard to the social setting, namely that clinicians may encourage pathologists if CPath is arguably the better option to use, and that usage by pathologist colleagues may lead to wider adoption. Concerning team processes, some interviewees thought clinicians should trust pathologists to consider clinical usage of CPath without consulting them. Others mentioned discussion of CPath use by pathologists with clinicians as a facilitator. Pathologists should inform clinicians by including information on CPath usage in pathology reports. A wide variety of stakeholders ( n = 33) were deemed important by the interviewees as potentially having a role in CPath usage in clinical practice (Fig. ). At both a local and national level, the most important stakeholders were information technology experts, professional pathology associations, auditing organizations, clinicians and CPath developers.
Insufficient staining quality was seen as a potential barrier to CPath clinical use. Financial barriers included setting up a digital workflow to enable CPath use within the relatively small budgets of pathology departments. Having one supplier for the entire digital workflow, including CPath, was seen as a facilitator. In some pathology laboratories with digital workflows, CPath applications were already available for quantification tasks. Educational activities regarding CPath clinical use were mentioned as a facilitator. There was no consensus on the timing of CPath introduction in the training of residents. CPath applications need to be connected to other information systems, such as the laboratory management system (LMS) and the picture archiving and communication system (PACS), e.g. for assigning CPath to cases manually. Furthermore, CPath should be able to automatically fill in templates used for pathology reporting. A quality certification was deemed necessary to guarantee quality assurance.
Regarding capacity for organizational change, facilitators concerning mandate, authority and accountability related to central guidance of CPath use in clinical practice by national pathology associations, for example by developing guidelines. In addition, CPath applications should be centrally updated to comply with updated versions of clinical guidelines. Prospective and central monitoring was viewed as a facilitator, as was sending feedback on CPath clinical performance to the supplier.
Considering legal factors, uncertainty about the liability position of pathologists, who are currently responsible for their own output, and uncertainty in case of CPath error were considered barriers. This relates to the barrier of lacking awareness regarding applicable legislation for CPath clinical use. A related facilitator is the autonomous decision of pathologists to use CPath clinically without interference from a clinician. Having global regulations in place and having U.S. Food and Drug Administration (FDA)-approved CPath applications were other facilitators.
Our study provides an extensive overview of current opinions and first experiences regarding barriers and facilitators of CPath algorithm clinical use from the international perspective of direct users. Most barriers and facilitators determined by the interviews were categorized within the domain of the innovation itself and mainly concern the quality of evidence of CPath algorithms and their compatibility with current pathology laboratory workflows. The eSurvey conducted prior to the interviews showed remarkable differences between Dutch and non-Dutch pathologists, particularly regarding their attitude and their need to understand the entire functioning of CPath algorithms. Our study shows that pathologists and pathology residents hold different opinions regarding important challenges in CPath clinical adoption, some of which are also presented by other research . Moreover, these opinions may differ between countries and regions. A recent Delphi study showed a lack of consensus about the adoption of AI algorithms even amongst pathologists experienced in developing and evaluating CPath algorithms . This, together with our results, stresses that many different aspects need to be addressed before interviews with end-users and further evaluations. In the review of Van der Laak et al. , validation of CPath algorithms in pathology is stated as a current challenge, with different levels of validation being presented. Our study shows that both internal and external validation are deemed necessary among pathologists. However, various opinions were shared on whether prospective validation should be performed before CPath algorithms can be used in clinical practice, thereby also taking into account the time and effort needed to perform these types of studies. Nagendran et al. concluded that only a few randomized controlled trials have been performed on AI in medical imaging. For radiology specifically, Van Leeuwen et al. assessed the efficacy of 100 CE-marked AI algorithms and arrived at a similar conclusion, arguing that the level of evidence should be associated with the intended use in clinical practice, distinguishing AI algorithms aimed solely at improving efficiency from those aimed at improving diagnostic accuracy or also clinical outcomes. Future research should entail appropriate validation studies regarding the effectiveness of intended CPath algorithm use in clinical practice, as these findings can be included in clinical practice guidelines to guide pathologists on appropriate CPath algorithm clinical use. Corresponding with the qualitative findings of Chen et al. among radiologists and radiographers, pathologists highlighted their ability as medical professionals to use AI algorithms to improve their diagnostic process in terms of efficiency, accuracy and quality. However, in line with another study exploring perceptions of AI application use among healthcare professionals, pathologists also experienced a lack of knowledge regarding AI, sharing a need for training . Although interviewees perceived CPath algorithms as black boxes, opinions varied on whether pathologists should gain in-depth knowledge on the functioning of CPath before using it in clinical practice. More interest was shown in a step-by-step, relationship-building approach, potentially facilitated by a real-world simulation digital environment. Several studies call for research into the interaction of humans with AI systems .
Therefore, future research should focus on incorporating CPath into digital workflows and on educational support that takes into account the differences in intended use and the evidence regarding the interaction of humans with AI systems. In line with other studies, and also part of the action plan of the FDA and the post-market requirements of the CE-IVD , interviewees requested performance monitoring, assuring the safe and reliable clinical use of CPath algorithms and contributing to prospective evaluation: by periodically assessing patient outcomes, trends based on these outcomes can be compared to previous years, and confidence intervals can be used to detect errors in a timely manner. However, such a data infrastructure requires increased collaboration between regional, national and international pathology-aligned associations and is not globally available. A strength of our study is the inclusion of a diverse, international panel of pathologists and pathology residents to gather opinions and experiences regarding barriers and facilitators of a fast-developing innovation within oncology care, specifically pathology. In addition, by using an implementation science framework, a broad range of opinions and first experiences regarding influencing factors was identified. These can be used by researchers, clinicians and policy makers to determine CPath algorithm implementation readiness within their own context. A limitation of this study is the lack of recommendations on the use of CPath algorithms in clinical guidelines. The majority of the interviewees did not even have any experience with CPath algorithms. Therefore, their shared perspectives are mostly based on expectations instead of experiences. In addition, especially compared to radiology, pathology is still in the early phases of digitalization. A digital workflow using whole slide images instead of microscopes for diagnostics should be implemented first. Afterwards, CPath algorithms can be implemented to support diagnostics. Taking into account their knowledge and experience of other digital innovations within pathology and the time needed to develop implementation strategy elements, our study provides an interesting insight into the various opinions among pathologists regarding CPath implementation. These opinions can be used in the next steps toward clinical acceptance and implementation. A limitation may be our recruitment strategy, which is susceptible to selection bias, as we only included a small percentage of the total international pathology community. One of the most important distinctions between the full pathology population and our sample is the level of adoption of a digital pathology workflow, which varies between countries and hospital types. In the Netherlands, digital workflows are common in pathology laboratories, which may explain the more positive opinions on clinical use of CPath algorithms in general. Low rates of digitalization could especially be seen in international non-academic settings, which were represented poorly in this sample. However, this study aimed to explore barriers and facilitators among a diverse group of pathologists and showed important challenges in CPath algorithm development and validation from pathologists' perspectives. These challenges should first be tackled before wide-scale implementation is considered.
To be able to prioritize the most important factors, the results of our study should first be quantified among a representative group of internationally operating pathologists within a specific field of oncological pathology, as well as among additional stakeholders. To overcome both limitations, international prospective validation studies could be conducted as a next step, using a hybrid design that tests the effectiveness of the intervention itself (CPath algorithms) while simultaneously gathering quantitative information on implementation .
The extensive overview of barriers and facilitators associated with clinical adoption of CPath reveals a variety of opinions among end users and underlines the complexity of future CPath implementation in oncological pathology. Our results provide the basis for subsequent validation studies and implementation. Quantitative studies are necessary for prioritization, as are well-defined use cases with specific CPath algorithms and their target audience, to gain widespread acceptance of these new developments. Combining validation and implementation studies in a highly engaging hybrid format will be necessary to gain widespread stakeholder acceptance and to keep up with the rapid developments within the field of computational pathology.
[Methods subsections: Study design; Study population; Data collection; Data analysis]
We carried out a narrative literature study to determine barriers and facilitators of the future clinical use of CPath algorithms. The results of this literature study were used to set up both an eSurvey and an interview guide. In the eSurvey, we explored first reactions to using CPath in histopathology practice. Subsequently, we conducted online semi-structured interviews to explore pathologists' perspectives on using CPath more extensively.
Aiming to include CPath end users in our explorative study, we recruited a wide variety of Dutch and international pathologists and pathology residents. We shared the eSurvey via two consecutive news items in the Dutch Pathology Association eNewsletter and through a directed e-mail message sent to members of the international tumor budding consortium (ITBCC) . Respondents providing their e-mail addresses were considered candidates for the subsequent interview study. Based on their eSurvey results, pathologists and pathology residents with varying attitudes regarding CPath were selected for the interview study. Using LinkedIn, we requested additional respondents with a critical attitude towards CPath. The study design and the related study population and study instruments are shown in Fig. .
We first conducted a quick literature scan in PubMed, including keywords, medical subject headings (MeSH) terms and synonyms for "pathology", "algorithms" and "practice". In addition, we excluded animal studies and included review studies and articles published after 2017. The search strategy is presented in Supplementary File . An eSurvey was developed including five statements about CPath, based on barriers and facilitators often mentioned in the literature, as well as questions regarding respondents' age, sex, occupation (pathologist/resident), type of laboratory (academic/non-academic), involvement in artificial intelligence (AI) development, and a request to optionally provide their e-mail address for participation in the subsequent interview study. The eSurvey statements are presented in Table . An interview guide based on the literature scan was simultaneously developed (Supplementary File ) regarding opinions and first experiences with CPath for identifying barriers and facilitators of CPath clinical use. The opinions on factors influencing CPath usage found in 14 review articles were mapped onto the categories of the domains of the implementation theory framework of Flottorp et al. . In addition, questions based on subdomains of Flottorp et al. not yet mentioned in the literature were added to the interview guide. The interview guide mainly consisted of questions and statements aiming to encourage participants to actively think about future challenges of CPath. The interview guide was first tested among the researchers themselves (JS and SE) and finally with a pathology resident actively conducting CPath research. The online interviews were conducted via Zoom.us V5.6.1 (560) (Zoom Video Communications, Inc., San Jose, CA, USA) or MS teams V1.4.00.8872 (Microsoft, Redmond, WA, USA), based on the preference of the interviewee. Participants provided written informed consent for participation and audio recording prior to the interviews. For participants preferring MS teams, additional verbal consent was given for visual recording. Each interview started with an introduction including a short demo video with audio shared via the "share screen" option in Zoom or MS teams. The video demonstrated the use of one CPath algorithm for mitosis detection and one for prostate biopsy Gleason grading (Fig. ). Additional information was provided regarding the aim of the study and the interview specifically. After the introduction, the interviewee shared their occupation, experience in pathology, area of focus and experience with both digital pathology and CPath. Participants were then asked about barriers and facilitators regarding CPath on topics related to six of the seven domains of the implementation framework of Flottorp et al. : Innovation factors; Individual professional factors; Professional interactions; Incentives and resources; Capacity for organizational change; and Social, political and legal factors. We did not use the Patient factors domain since pathologists are not in direct contact with patients. Toward the end of the interviews, participants had the opportunity to share relevant thoughts on topics not covered in the interview. The first six interviews were conducted by a PhD student with previous experience in conducting both interviews and focus group discussions, while the remaining interviews were conducted by an MSc student in Biomedical Sciences under the supervision of the PhD student and after receiving brief interview training as part of the MSc program.
The language of the interviews with Dutch pathologists and pathology residents was Dutch. The interviews with pathologists working abroad were only conducted in Dutch when the pathologist was a native Dutch speaker. Otherwise, the interviews were conducted in English. Data were collected until no new information on influencing factors was provided in the interviews. The list of participant characteristics included age, years of experience, gender, area of focus within pathology, and type of laboratory. The interviews lasted between 32 and 44 minutes. We used the COREQ checklist to describe the study's qualitative characteristics .
We analyzed the eSurvey output using descriptive statistics. The interviews were either audiotaped or, when MS teams was used, videotaped, and transcribed verbatim for qualitative analysis in ATLAS.ti (version 8.4.20, ATLAS.ti Scientific Software Development GmbH; Berlin, Germany). The transcripts were returned to the respective interviewees for final approval, checking for completeness and accuracy. From the accepted transcripts, barriers and facilitators were extracted and coded by two researchers (SE and JS) independently. These codes were then allocated to the domains of the implementation framework of Flottorp et al. . Coding and categorization were discussed until consensus was achieved. A third researcher (RH) was consulted for advice in the event of discrepancy. As a last step, we redefined codes and reorganized coding when needed (i.e. axial coding), resulting in an accurate and concise overview.
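As an illustration of the descriptive eSurvey analysis, the sketch below summarizes agreement with each statement by subgroup in Python. The file and column names (esurvey_responses.csv, works_in_netherlands, statement_1 to statement_5) are hypothetical placeholders; the actual survey export and coding scheme are not described in the text.

```python
# Minimal sketch of the descriptive statistics used for the eSurvey output.
# Column names are hypothetical; statements are assumed coded 1 = agree, 0 = disagree.
import pandas as pd

survey = pd.read_csv("esurvey_responses.csv")  # one row per respondent

statements = [f"statement_{i}" for i in range(1, 6)]

# Percentage of respondents agreeing with each statement, overall and by subgroup.
overall = survey[statements].mean().mul(100).round(1)
by_group = survey.groupby("works_in_netherlands")[statements].mean().mul(100).round(1)

print("Overall agreement (%):\n", overall)
print("Agreement by subgroup (%):\n", by_group)
```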
Supplementary files
|
RNA-binding protein HuR regulates the transition of septic AKI to CKD by modulating CD147 | 19c27bb6-1309-4f2b-bf1d-13bad0e1fc86 | 11948685 | Pathologic Processes[mh] | Sepsis and septic shock are the leading causes of acute kidney injury (AKI), accounting for more than 50% of AKI cases in critically ill patients . The pathogenesis of AKI in sepsis is multifactorial, involving renal hypoperfusion, parenchymal responses to circulating cytokine storms, microvascular injury, inflammation, and microthrombi . These factors contribute to AKI, resulting in a high hospital mortality rate, especially among senior patients. Additionally, many AKI survivors progress to irreversible kidney injury and even end-stage kidney disease, characterized by ongoing interstitial inflammation and renal fibrogenesis, in the absence of cause-specific treatment . The RNA-binding protein Hu antigen R (HuR), known as embryonic lethal abnormal vision-like protein (ELAVL1), is a ubiquitously expressed post-transcriptional regulator . It binds to adenine- and uridine-rich elements (AREs) located in 3′-untranslated region (3′-UTR) of mRNA in response to various stimuli, facilitating mRNA transport from the nucleus to the cytoplasm and preventing rapid degradation . Notably, most pro-inflammatory transcripts contain conserved or semi-conserved AREs in their 3′-UTR . Both the nuclear transcription of HuR and HuR nucleocytoplasmic transporting can be stimulated by inflammatory signals to stabilize inflammatory mediators [ - ]. This HuR/pro-inflammatory circuit likely initiates and maintains the inflammatory phenotype seen in tissue inflammation. In fact, abnormal elevation of HuR has been observed in kidney diseases, including diabetic nephropathy , hypertension-related nephropathy , glomerulonephritis , and ischemic kidney injury . Recent studies, including our own, have shown that HuR plays a key role in the progression of chronic kidney disease (CKD) and cardiovascular disease (CVD) by upregulating inflammation and mediating tissue fibrosis [ - ]. We have also discovered potent and specific HuR inhibitors, such as KH3 and KH39 (code: compound 1 c), which disrupt the HuR-ARE interaction [ - ]. Testing these inhibitors for the treatment of CKD is ongoing. Thus, we hypothesized that HuR plays a prominent role in the sepsis-associated kidney injury by promoting persistent inflammation and fibrosis, and that inhibition of HuR could rescue septic kidney injury. While new specific HuR inhibitors are undergoing continuous preclinical development, several FDA-approved anthelminthic drugs, such as pyrvinium pamoate (PP), have been reported to inhibit HuR by preventing its nucleocytoplasmic accumulation (DOI: 10.18632/oncotarget.9932 ). We recently identified that niclosamide (NCS), another FDA-approved anthelminthic drug, also inhibits HuR . Importantly, NCS is more tolerable and safer than PP in vivo . Therefore, we included NCS in this study. Validating the effect of NCS on HuR inhibition and its impact on inflammation and fibrogenesis could enable a rapid transformative approach to treat and reverse septic kidney injury. Lipopolysaccharide (LPS) injection-induced AKI model has been extensively used to mimic septic AKI in patients . Especially, repeated administration of LPS to mice has led to persistent renal interstitial inflammation and fibrosis . 
We used a modified version of this repeated LPS injection-induced septic kidney injury model in mice to investigate whether the HuR/pro-inflammatory circuit contributes to the transition from septic AKI to CKD, while also exploring the underlying molecular mechanisms and the therapeutic potential of HuR inhibition for septic kidney injury.
[Methods subsections: Study 1; Study 2; Statistical analysis]
Study 1. In vivo studies of changes and inhibition of RNA-binding protein HuR in sustained administration of LPS-induced injury and fibrosis in a mouse model (subsections: Animals and treatment; Euthanasia; Determination of renal function and albuminuria; Histological examination; Western blot measurement)
Male C57BL/6 mice, aged 10–12 weeks, were obtained from the Jackson Laboratory (Bar Harbor, ME, USA) and used for the induction of kidney injury. Fifteen male mice received an initial intraperitoneal (i.p.) injection of LPS ( Escherichia coli, serotype O55:B5 , Sigma) dissolved in sterile saline at the dose of 5 mg/kg body weight (BW) on day 1. This dose was determined in our pilot study, where male mice received different single doses (5.0, 7.5, and 10 mg/kg BW) of LPS ( E. coli, serotype O55:B5 ). A low dose of LPS (5 mg/kg BW) for 48 hours induced elevated circulating cytokine release, plasma blood urea nitrogen (BUN) levels, and locally increased renal inflammatory factor expression without causing mouse death. LPS-injected male mice were then randomly assigned to three groups ( n = 5 per group): one group received no additional treatment, while the other two groups were treated with either KH39 (50 mg/kg BW) or niclosamide (NCS, 10 mg/kg BW) i.p. daily for seven days. KH39 was dissolved in 0.9% NaCl solution with 5% DMSO and 5% Tween-80, and the effective dose of KH39 had been previously determined . NCS was dissolved in PBS with 5% ethanol and 5% Tween-80 as described previously . All LPS-injected mice continued to receive LPS injections at the same dose (5 mg/kg BW) i.p. every other day, for a total of four doses. Normal male mice injected with saline served as controls ( n = 5). All mice were housed in standard cages with a 12-hour light/dark cycle and given water and a normal diet ad libitum. On day 6, all mice were placed in metabolic cages individually, and 24-hour urine samples were collected from day 6 to day 7. This study was initially conducted in male mice, as the optimal LPS dose for inducing kidney injury and the therapeutic doses of NCS and KH39 were first determined in males . While LPS affects both male and female mice, the doses of LPS, NCS, or KH39 required for females may differ. Future studies will include testing in female mice.
All mice were euthanized under isoflurane anesthesia on day 7. Blood and kidney samples were harvested on day 7 as described previously .
Plasma BUN concentrations were measured by using the QuantiChromTM urea assay kit (BioAssay System, Hayward, CA, USA). Urinary creatinine (Cr) levels were measured by using a creatinine liquicolor kit (no. 0420250, Stanbio Laboratory). Urinary albumin levels were determined by a murine microalbuminuria ELISA kit (No. 1011, Exocell), and the urinary albumin/creatinine (A/C) ratio was then calculated.
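As a small worked example of the last step, the A/C ratio is simply the measured urinary albumin divided by the measured urinary creatinine after putting both readouts on the same volume basis. The numbers below are hypothetical placeholders, not study data, and the unit conventions should be taken from the kits named above.

```python
# Minimal sketch of the urinary albumin/creatinine (A/C) ratio calculation.
# All values are hypothetical placeholders for illustration only.
urine_albumin_ug_per_ml = 85.0       # e.g. from the microalbuminuria ELISA
urine_creatinine_mg_per_dl = 40.0    # e.g. from the creatinine assay

# Convert creatinine from mg/dL to mg/mL so both readouts share a per-mL basis.
urine_creatinine_mg_per_ml = urine_creatinine_mg_per_dl / 100.0

acr = urine_albumin_ug_per_ml / urine_creatinine_mg_per_ml
print(f"A/C ratio: {acr:.1f} ug albumin per mg creatinine")
```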
Four-micrometer sections of paraffin-embedded kidney tissues were stained with periodic acid-Schiff (PAS) and Masson's Trichrome (TRI) by the histology core facility at the University of Utah. Ten random fields from each kidney section were analyzed under ×200 magnification. The deposition of collagen, stained blue, was quantified using ImageJ and presented as a percentage of the total analyzed area in a blinded fashion. The average blue staining score for 5 mice in each group was calculated and graphed. This method differs from the one described previously . Immunofluorescent staining (IF) for HuR and CD147 was performed on paraffin-embedded kidney tissues as described previously . The monoclonal mouse anti-HuR IgG and mouse anti-CD147 (EMMPRIN) IgG (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) served as the primary antibodies, and Alexa Fluor Plus 594-conjugated goat anti-mouse IgG (H + L) (Invitrogen, Carlsbad, CA, USA) served as the secondary antibody. At the same time, fluorescein isothiocyanate (FITC)-conjugated wheat germ agglutinin (WGA) (ThermoFisher Scientific, USA) was used to counterstain the glomeruli and tubules to define the location of HuR and CD147 in the kidney. DAPI-Fluoromount-G (SouthernBiotech, Birmingham, AL, USA) was used to stain nuclear DNA. Control slides treated with antibody diluent instead of primary antibodies showed no staining. Immunofluorescent staining for α-smooth muscle actin (α-SMA), fibronectin (FN), type III collagen (Col-III) or F4/80 + cells was performed and quantified on paraffin-embedded kidney sections as described previously [ - ]. Either rabbit anti-human FN IgG, goat anti-human type III collagen (Southern Biotechnology Associates, Birmingham, AL) or rat anti-mouse F4/80 IgG (Bio-Rad Laboratories, Inc., Hercules, CA, USA) served as the primary antibody, respectively. FITC-conjugated goat anti-rabbit IgG, Rhodamine Red-X-conjugated donkey anti-goat IgG or CyTM3-conjugated goat anti-rat IgG (Jackson ImmunoResearch Laboratories Inc., West Grove, PA, USA) was used as the secondary antibody. Control slides treated with antibody diluent instead of primary antibodies showed no staining. For immunostaining of α-SMA, FITC-conjugated mouse anti-α-SMA antibody was used directly. Ten random fields from each kidney section were analyzed under ×200 magnification. Digital morphometric measurement of α-SMA, FN, or Col-III positive staining, or of F4/80 + cells, was quantified as the percentage of positively stained area within the total analyzed area using ImageJ (National Institutes of Health, Bethesda, MD, USA). The average positive staining score for five mice in each group was calculated and graphed.
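ImageJ was used for the morphometric scoring; purely to illustrate the percent-positive-area idea, the sketch below shows a comparable calculation in Python. The file name and the intensity cutoff are hypothetical assumptions that would need calibration against the actual staining, so this is not a substitute for the ImageJ workflow used in the study.

```python
# Minimal sketch of percent-positive-area quantification for one ×200 field.
# The threshold below is an assumed placeholder, not a validated cutoff.
from skimage import io, color

img = io.imread("kidney_field_x200.png")      # hypothetical RGB image of one field
gray = color.rgb2gray(img[..., :3])           # drop any alpha channel; values in 0-1

positive = gray < 0.55                        # darker pixels counted as stained
percent_positive = 100.0 * positive.mean()    # fraction of stained pixels, as a percentage
print(f"Positive staining area: {percent_positive:.1f}% of the field")
```

Averaging this percentage over the ten fields per section, and then over the five mice per group, gives the group scores that are graphed.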
Kidney protein from each animal of each group was isolated and then immunoblotted on Immobilon-P transfer membranes (ThermoFisher Scientific) as described previously . Proteins for HuR, CD147, α-SMA, fibronectin (FN), and GAPDH were assessed on the blots. The antibody information and the analysis of the immunostaining bands were described previously [ , , - ]. All blots were run at least twice.
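For readers unfamiliar with the densitometric readout behind such blots, the sketch below shows the usual normalization arithmetic, assuming GAPDH serves as the loading control. The band intensities are hypothetical placeholders and the calculation is illustrative only; it is not stated to be the authors' exact quantification procedure.

```python
# Minimal sketch of band-densitometry normalization (hypothetical intensities).
lps_lane     = {"HuR": 1450.0, "FN": 980.0, "GAPDH": 2100.0}   # LPS-injured sample
control_lane = {"HuR": 520.0,  "FN": 310.0, "GAPDH": 2000.0}   # normal control sample

# Normalize each target band to the loading control, then express the injured
# sample as a fold change relative to the normal control.
for target in ("HuR", "FN"):
    lps_ratio = lps_lane[target] / lps_lane["GAPDH"]
    ctrl_ratio = control_lane[target] / control_lane["GAPDH"]
    print(f"{target}: {lps_ratio / ctrl_ratio:.2f}-fold of control")
```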
Study 2. In vitro studies on the effects of KH39 and NCS on cellular HuR and CD147 expression following LPS stimulation (subsections: Cell culture and reagents; Effect of LPS on cellular HuR expression and nucleocytoplasmic translocation; Effect of KH39 and NCS on LPS-induced CD147 expression in RAW264.7 cells and TCMK-1 cells)
The mouse macrophage cell line RAW264.7 and mouse proximal kidney tubular epithelium cells (TCMK-1) were cultured in DMEM supplemented with fetal bovine serum (FBS), 100 μg/ml streptomycin and 100 U/ml penicillin (all from Gibco, Thermo-Fisher Scientific, Waltham, MA, USA) at 37°C in a 5% CO 2 incubator. Sub-confluent cells seeded on six-well plates were made quiescent in serum-free DMEM medium for 24 hours before experimental studies. KH39 and NCS were dissolved in DMSO at 20 mM as stock solutions for in vitro assays. LPS ( E. coli, serotype O55:B5 , Sigma) was dissolved in sterile saline. All cellular treatments were carried out in duplicate in separate wells and repeated three times.
The quiescent cells were first treated with LPS at different doses for 24 hours, and cell viability was then measured to determine the optimal culture dose of LPS. Second, the quiescent cells were treated with an optimal dose of LPS (5 μg/ml for RAW264.7 cells and 10 μg/ml for TCMK-1 cells) and then collected at different time points after treatment for HuR immunocytofluorescent staining as described above. Incubation for 24 hours or 48 hours was chosen as optimal for LPS to induce nucleocytoplasmic translocation of HuR with or without HuR inhibitor in cultured RAW264.7 (24 hours) and TCMK-1 (48 hours) cells, respectively. Third, the dose of HuR inhibitor, KH39 or NCS, was optimized in cultured cells before intervention. Finally, the quiescent cells were incubated in serum-free medium alone or serum-free medium with LPS, LPS plus KH39 (2 μM), or LPS plus NCS (1 μM). HuR inhibitor-treated cells were preincubated with KH39 or NCS for 30 minutes before adding LPS. Cells were harvested at the indicated times for HuR staining (described above) and measurement of cytoplasmic/nuclear/total cellular HuR protein production by Western blotting as described above. Cytoplasmic and nuclear proteins were isolated separately by using the Thermo-Scientific ™ NE-PER ™ Nuclear and Cytoplasmic Extraction Reagent kit (ThermoFisher Scientific) as described previously . The band intensities were measured using ImageJ and normalized to ß -actin (cytoplasmic) or histone 3 (nuclear).
The quiescent RAW264.7 cells, or TCMK-1 cells were treated with LPS with or without KH39 or NCS at the indicated concentration and incubation time. Cells were then harvested for measurement of total protein production of CD147 by Western blotting.
All data are expressed as mean ± SD. Sample size calculation software ( www.clincalc.com ) was used for the in vivo study, based on the results of the renal tubular injury score in a pilot study. Each group contained five mice, and the study had at least 95% power to detect differences larger than 2.2 standard deviations between treated and untreated groups. Statistical analyses of differences among the groups were performed by one-way ANOVA, followed by Student-Newman-Keuls or Dunnett's testing for multiple comparisons. Comparisons with P < 0.05 were considered significantly different.
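To make the comparison workflow concrete, the sketch below runs a one-way ANOVA followed by Dunnett's test against the untreated control in Python. The group values are hypothetical placeholders (n = 5 per group), not the study data, scipy.stats.dunnett requires SciPy 1.11 or newer, and the Student-Newman-Keuls post-hoc option is not shown.

```python
# Minimal sketch of the group comparisons described above (hypothetical data).
import numpy as np
from scipy import stats

control  = np.array([20.1, 22.4, 19.8, 21.0, 20.6])   # e.g. BUN in normal mice
lps      = np.array([55.3, 61.2, 58.7, 63.0, 57.4])   # LPS only
lps_kh39 = np.array([30.2, 28.9, 33.4, 31.8, 29.5])   # LPS + KH39
lps_ncs  = np.array([34.1, 36.5, 32.8, 35.2, 33.9])   # LPS + NCS

# One-way ANOVA across the four groups.
f_stat, p_anova = stats.f_oneway(control, lps, lps_kh39, lps_ncs)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Dunnett's test: each group compared against the normal control (SciPy >= 1.11).
res = stats.dunnett(lps, lps_kh39, lps_ncs, control=control)
print("Dunnett p-values vs control:", res.pvalue)
```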
[Results subsections: Elevated HuR is observed in LPS-injured kidneys in a mouse model; Inhibition of HuR improves renal function and reduces albuminuria and renal inflammation and fibrosis in a mouse model of LPS-induced kidney disease; Inhibition of HuR modulates renal CD147 expression in LPS-injured mice; LPS directly induces cellular expression and nucleocytoplasmic translocation of HuR in macrophages and renal tubular cells; Both KH39 and NCS decrease CD147 expression in macrophages and renal tubular cells]
As shown in and , macrophages mainly expressed LG-CD147, not HG-CD147. LPS stimulation further enhanced LG-CD147 expression in macrophages, which was significantly inhibited by KH39 or NCS treatment. Interestingly, renal tubular cells expressed both HG-CD147 and LG-CD147 after LPS stimulation ( and ). LPS-stimulated cells treated with HuR inhibitors (either KH39 or NCS) showed reduced HG-CD147 and LG-CD147 expression levels. Our data further demonstrate that CD147 is a target of HuR, and that HuR inhibition downregulates CD147 expression in both LPS-activated macrophages and injured renal tubular cells.
As shown in and , we observed that HuR protein was weakly expressed in normal kidney tissue in mice (NC). In contrast, significantly increased HuR protein expression was observed in diseased kidneys induced by repeated LPS injections, which was inhibited by HuR inhibitors KH39 and NCS. Immunofluorescent (IF) staining for HuR confirmed increased staining density for HuR and possible nucleocytoplasmic translocation of HuR (stained red) in tubule and tubulointerstitial cells, as well as some glomerular cells at the site of injury ( -LPS, arrows pointed). This indicates increased HuR expression. Consistently, the enhanced HuR staining was not seen in normal mice and was barely detectable in KH39- or NCS-treated LPS-injured mice. These results suggest that renal HuR production was increased in LPS-injured kidneys. Additionally, these findings confirm the inhibitory ability of NCS on LPS-induced elevated HuR production in the kidneys.
All mice survived the experiment period. As shown in and , repeated LPS injection only for seven days induced continued kidney damage in mice, including increased plasma BUN levels and urinary albumin/creatinine (A/C) ratio. Histologically, LPS-injured kidneys showed accumulative inflammatory cell infiltration and tubulointerstitial collagen deposition, as determined by PAS staining and Masson’s trichrome (TRI) staining ( and ). The quantitative analysis of renal collagen deposition is shown in . These results suggest that repeated LPS injections for seven days can induce significant kidney injury in mice, which may initiate the transition from AKI to CKD. In contrast, mice treated with either KH39 or NCS showed a significant reduction in BUN levels and albuminuria compared to untreated LPS-injured mice ( and ), as well as significantly less tubular injury, inflammation, and tubulointerstitial fibrosis ( - and ). The immunofluorescent-stained kidneys for α -smooth muscle actin ( α -SMA), fibronectin (FN), and type III collagen (Col-III) ( - ) and their semi-quantitative analyses in - , confirmed that overexpression of α -SMA was observed not only in vascular smooth muscle cells (VSMCs) but also in LPS-injured kidney tubular and/or tubulointerstitial or peritubular cells. The positive IF staining for FN and Col-III on tubular basement membrane and tubulointerstitial area was dramatically increased in LPS-injured kidneys. However, the IF staining for these fibrotic markers was markedly reduced when LPS-injured mice were treated with either KH39 or NCS. This observation was further supported by Western blot analyses ( ), which showed a striking elevation in renal protein levels of α -SMA and FN in LPS-injured kidneys compared to the normal controls. This elevation was abrogated in LPS-injured mice treated with KH39 or NCS. These data together indicate that treatment with a HuR inhibitor protects the kidney from LPS-induced tubular fibrosis. The F4/80 antibody is known to label macrophages. LPS-injured mice had a substantial increase in the absolute number of F4/80 + cells, mainly in the tubulointerstitial area, while F4/80 + cells were sparse in renal vessels in normal control kidneys, indicating an accumulation of macrophages and inflammation in damaged kidneys ( ). However, the number of F4/80 + cells was largely reduced after treatment with KH39 or NCS, nearing the normal levels observed in uninjured kidneys. These results indicate that sepsis-induced kidney inflammation was substantially reduced in mice by KH39 or NCS treatment.
CD147, also called extracellular matrix metalloproteinase inducer (EMMPRIN) or basigin, is a glycosylated transmembrane protein . It includes two forms, highly glycosylated CD147 (HG-CD147, ~50 kDa) and lowly glycosylated CD147 (LG-CD147, ~30 kDa). Interestingly, both forms of CD147 in the kidney increased by 2.28-fold after LPS injury, and this increase was inhibited by either KH39 or NCS ( , ). IF staining for CD147, using the same primary mouse anti-CD147 antibody used in the Western blot analysis, confirmed the enhanced staining of CD147 (stained red) in the tubular basement membrane and tubulointerstitial cells at the site of injury ( -LPS, arrows pointed), compared to normal mice. KH39- or NCS-treated LPS-injured mice had much less CD147 staining in the kidneys. In both measurements of CD147, NCS was less effective than KH39 in reducing CD147, which may be related to the different doses of the drugs used.
As expected, administration of LPS for 24 hours directly induced total cellular ( and ) and cytoplasmic ( and ) HuR production levels in cultured macrophages, and these effects were inhibited by KH39 or NCS. Surprisingly, administration of LPS for 48 hours induced cellular HuR density (shown in red staining) and nucleocytoplasmic translocation of HuR (red staining was shown in cytoplasm) in cultured renal tubular cells, and these effects were similarly abrogated by KH39 or NCS treatment ( ). Western blot assays further confirmed the observation of IF staining for HuR in renal tubular cells. LPS similarly induced total cellular ( and ) and cytoplasmic ( and ) HuR levels in renal tubular cells, and these effects were inhibited by KH39 or NCS. These results indicate that HuR is increased in both LPS-stimulated macrophages and kidney cells, but macrophages respond to LPS in less time and at lower doses than tubular cells do. In addition, NCS effectively blocks HuR nucleocytoplasmic translocation, acting as a HuR inhibitor.
The present study utilized a mouse model induced by repeated low-dose LPS administration over one week, which results in impaired renal function, albuminuria, and persistent renal interstitial inflammation and fibrosis. This model effectively mimics the transition of AKI to CKD in humans, particularly in the context of sepsis, and aligns with observations in endotoxemia-associated kidney injury and fibrogenesis. Notably, we observed a significant increase in HuR expression following LPS stimulation in both inflammatory and kidney cells, in both in vivo and in vitro settings. Inhibition of HuR alleviated renal interstitial inflammation and fibrosis, improving renal function and reducing albuminuria. These findings suggest that HuR plays a crucial role in LPS-induced kidney disease, potentially through its downstream target CD147, underscoring the therapeutic potential of HuR inhibitors for septic kidney injury. The inflammatory effects of LPS are well-documented and involve both circulating immune cells and local organ cells. Upon entering the bloodstream, LPS activates immune cells such as macrophages and monocytes, promoting the release of pro-inflammatory cytokines and chemokines. These mediators amplify the inflammatory response by recruiting additional immune cells to the site of infection or injury. Simultaneously, LPS directly affects local organ cells, including renal epithelial cells, stimulating them to produce inflammatory cytokines and chemokines, exacerbating organ inflammation and tissue damage. LPS-induced vascular endothelial dysfunction further increases vascular permeability, enhancing leukocyte infiltration into tissues. These processes create a complex interplay between circulating immune cells and local organ cells, driving tissue-specific inflammation and injury. Emerging evidence suggests that LPS upregulates a variety of HuR-bound transcripts in circulating immune cells, which are involved in innate immunity, cytokine activity, and chemotaxis. These transcripts likely contribute to the overproduction of proinflammatory cytokines during critical infections. However, there is limited research on LPS-induced HuR dysfunction in local organ cells, particularly in the kidney. In this study, we found that LPS stimulates multiple kidney cell types, including glomerular, tubular, and tubulointerstitial cells, to overexpress HuR. Elevated HuR expression in glomeruli, particularly in podocytes, has been linked to podocyte injury and proteinuria. Similarly, increased HuR levels in kidney tubular cells, observed here and in our previous ischemia/reperfusion injury model, may sensitize these cells to TGF-β-induced proinflammatory and profibrotic signaling, promoting tubulointerstitial fibrosis. We also identified that LPS upregulates CD147 expression in renal tubular cells via a HuR-mediated mechanism. Previous studies suggest that elevated tubular CD147 expression promotes profibrotic tubular epithelial differentiation by inducing MMP generation and enhancing proinflammatory responses via STAT3 signaling. These mechanisms may underlie septic kidney injury. Although we did not directly confirm increased HuR expression in macrophages within LPS-injured kidneys, we observed that LPS stimulates HuR expression in cultured murine macrophages, similar to its effects on renal tubular cells. This suggests that injected LPS may upregulate HuR expression in both circulating and resident macrophages.
HuR has been shown to mediate CD147 expression in macrophages, enhancing cytokine and chemokine production. Additionally, elevated CD147 levels may promote macrophage infiltration into LPS-injured kidneys through interactions with E-selectin ligands on the renal endothelium. These findings propose a novel mechanism in which endotoxemia activates and recruits macrophages to kidney injury sites via the HuR-CD147 axis. Together, our findings demonstrate that LPS-induced HuR expression in both immune and kidney cells drives tubulointerstitial inflammation and fibrosis. This study expands on our previous work, emphasizing the critical role of HuR-mediated post-transcriptional regulation in initiating and sustaining renal inflammation and fibrosis. We previously identified KH3 as a potent HuR inhibitor through high-throughput screening. KH3 effectively inhibits HuR targets, ameliorating renal glomerulosclerosis, tubular interstitial fibrosis, and cardiac fibrosis. However, KH3's low solubility in buffer raised concerns regarding clinical applicability. To address this, we developed KH39, a more potent derivative with enhanced inhibitory activity, as demonstrated by a lower Ki value in fluorescence polarization assays. While KH39's solubility was not significantly improved, it has demonstrated efficacy in multiple in vitro and in vivo models, including tumor suppression via disruption of HuR-mRNA interactions. In this study, KH39 effectively blocked LPS-stimulated HuR expression in cultured macrophages, renal tubular epithelial cells, and a septic kidney injury model, highlighting the therapeutic potential of HuR inhibitors in inflammatory kidney diseases. We also identified niclosamide (NCS), an FDA-approved anthelminthic drug, as a novel inhibitor of HuR cytoplasmic accumulation. NCS has a well-established safety profile and multifunctional effects in drug repurposing screens, targeting pathways such as Wnt/β-catenin, mTORC1, STAT3, and NF-κB. Clinical trials are investigating NCS for cancer treatment, and one trial demonstrated its ability to reduce albuminuria in patients with diabetic nephropathy when combined with angiotensin-converting enzyme inhibitors. However, the precise mechanism through which NCS exerts its effects remains unclear. In this study, NCS inhibited HuR expression and nucleocytoplasmic translocation in macrophages and kidney cells, suppressing HuR-targeted transcripts such as CD147 in septic kidney disease. Although NCS does not directly bind to HuR like KH3 or KH39, it may modulate HuR activity indirectly, potentially through effects on HuR phosphorylation or dimerization, which are crucial for HuR nucleocytoplasmic translocation and function. These HuR-dependent and HuR-independent mechanisms likely contribute to NCS's renoprotective effects in LPS-induced AKI. This study has limitations that inform future research directions, including the need for longer-term observations of septic kidney injury in both male and female mice to assess the sustained effects and safety of NCS. Monitoring potential off-target effects will also be critical before clinical translation. Large cohort studies in patients will be necessary to validate these findings and assess clinical relevance. Nonetheless, our results highlight the therapeutic potential of NCS as a repurposed drug targeting HuR in progressive septic kidney disease.
In summary, our study reveals a novel mechanism by which HuR-mediated CD147 expression contributes to septic kidney injury and fibrosis. Targeting the HuR-CD147 axis represents a promising therapeutic strategy. By inhibiting HuR and CD147, NCS emerges as a potential repurposed drug with significant clinical promise for the treatment of progressive septic kidney disease or other kidney disorders.
Anatomical societies find new ways to come together in a post-Covid world

The world is still whirling under the effects of a global health emergency caused by the uncontrollable spread of the highly contagious SARS-CoV-2, which was initially identified in Wuhan city, Hubei province, China, and reported in late December 2019 (WHO, ; Zhu et al., ). As announced by the World Health Organization (WHO) on February 11, 2020, the disease associated with the virus is "Covid-19," which stands for COronaVIrus Disease identified in 2019 (WHO, ). Covid-19 is often characterized by pneumonia, which may progress to severe acute respiratory syndrome, kidney failure, and even death (Argenziano et al., ; Evans et al., ). On March 11, 2020, the WHO declared COVID-19 a pandemic (WHO, ), which has been defined as "an epidemic occurring worldwide, or over a very wide area, crossing international boundaries and usually affecting a large number of people" (Last, ). Alongside its dramatic effects on health and health systems, the Covid-19 pandemic has also had salient economic consequences affecting practically all human activities worldwide, including scientific events (Ayittey et al., ; Russell et al., ). It is well established that the coronavirus disease is transmitted through contact with droplets from an infected person's cough or sneeze, and/or by touching objects or surfaces that have been contaminated (Pascarella et al., ). As crowded communities are particularly exposed to viral contagion, social distancing rules, self-isolation policies, and complete lockdowns have been imposed in many regions and countries to reduce exposure, prevent infections, and limit the spread (Velavan & Meyer ; Cucinotta et al., ; Russell et al., ). In the wake of the first pandemic waves, air travel has gradually resumed as it became clear that stringent travel restrictions can help to prevent the spread of the virus only in the event that there are no or few cases in the destination country (Russell et al., ). At the same time, some studies have demonstrated that individual and community surveillance measures can be used to gradually reopen following stay-at-home and business closure protocols (Pernice et al., ; Bendavid et al., ). It has nevertheless also become clear that it is impossible to return to life as it was in the pre-Covid world, especially with regard to large gatherings (Korbel & Stegle, ; Pung et al., ). As second and third pandemic waves of new variants arise and spread throughout Europe and other continents, government and healthcare officials are striving to investigate, plan, and implement strategies to control the pandemic's spread (Cacciapaglia et al., ; Callaway, ; WHO, ). As the pandemic's effects reverberate, the internet and technology have helped the world to communicate in previously unthinkable ways. In fact, an enormous amount of information, including videos, PowerPoint presentations, and large documents, can be shared almost instantly, even from portable devices (Lecueder & Manyari, ). In the field of medicine, partially or entirely online events have become commonplace. At first, even before the Covid-19 emergency, they were used for small specialized continuing medical education courses, but they have become ever more important parts of large, complex national and international medical conferences (Zucconi, ; Cucinotta et al., ; Ng et al., ).
Just as dedicated infrastructures and specialized organizing agencies have been created to assist the organizing committees of medical societies in addressing the many technical, logistic, legislative, financial, and social details that the organization of mega medical congresses entails, numerous online features such as web pages, video conferencing, chat sessions, and mailing lists have become available. The Covid-19 pandemic has abruptly challenged the traditional modality of organizing medical congresses and forced scientific societies to rethink how they can accomplish the goals of their meetings in virtual formats. During 2020, when only a few in-person medical conferences were held, two new types of remote congress modalities were trialed: the fully virtual congress and the hybrid congress (in the former case, all aspects are carried out online; in the latter, some of the registrants participate in person while others are connected via video conferencing software). These new directions reflect not only technological advancements but also the ever-greater needs for education and scientific dissemination (Fawcett et al., ) and have opened the way to more accessible conferences (Viglione, ). Needless to say, there are positive and negative implications linked to the many aspects of the traditional and novel virtual congress modalities. This article intends to evaluate the technical issues, social aspects, costs and sustainability, logistics and management, feasibility, and future of these congress modalities both in a general sense and specifically in connection to congresses of anatomical societies. Some tables were prepared to evaluate the actual situation of the most significant anatomical societies of the world. Data on national (Table ) and international (Table ) congresses of anatomical societies planned before the Covid-19 emergency arose were collected using the following criteria. Only national anatomical societies belonging to the International Federation of Associations of Anatomists (IFAA) or the European Federation for Experimental Morphology (EFEM) were considered, provided the congress of interest was not planned for a time preceding the Covid-19 emergency outbreak. For example, the congress of the Dutch Anatomical Society, which was held in January 2020, was not considered here. Only each society's main annual congress, and not minor events such as symposia and masterclasses, was considered. Data regarding societies for which it was impossible to find online information about their congress were not included. Data were retrieved on July 10, 2021 (last update) via the official website of each anatomical society or the link to the conference website contained therein. In those cases in which the information reported on the website was not up to date about the event, an inquiry message was sent to the contact e-mail provided. The English translation of the name of each society as registered with the IFAA or EFEM, or as found on the society's official website, appears here. The congress of a national anatomical society hosted within an international congress was not considered a stand-alone event but part of the international congress, unless it was held in the host country. In that case, the event was registered as both an international and a national event. Local conferences of anatomical societies were not taken into consideration. The authors classified the congresses as follows.
Online: the congress maintained its complex organization with minimal adjustments but changed its nature from traditional to online. Hybrid: the congress was held in a mixed form, with the faculty participating partly onsite and partly online (e.g., connected via video conferencing software). The classification "canceled" refers to a planned face-to-face conference that has been moved to a later date or has been abandoned. Those cases in which the anatomical society had not planned any event for 2020 are marked as "not planned." Those cases in which the information on the website was not up to date and the inquiry message to the contact address was unanswered were listed as "supposedly."
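Purely as an illustration of how these labels could be applied when tabulating the survey data (and not as part of the authors' actual workflow), the following sketch encodes the classification rules above; the record fields and the example entry are hypothetical and are not taken from any society's website.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CongressRecord:
    society: str                  # English name of the anatomical society
    planned_for_2020: bool        # whether a main annual congress was planned
    website_up_to_date: bool      # whether the event page was current
    inquiry_answered: bool        # whether the contact e-mail inquiry was answered
    held_format: Optional[str]    # "online", "hybrid", or None if not held

def classify(rec: CongressRecord) -> str:
    """Assign one of the categories used in the survey tables."""
    if not rec.planned_for_2020:
        return "not planned"
    if not rec.website_up_to_date and not rec.inquiry_answered:
        return "supposedly"          # status could not be confirmed
    if rec.held_format == "online":
        return "online"
    if rec.held_format == "hybrid":
        return "hybrid"
    return "canceled"                # postponed or abandoned face-to-face event

# Hypothetical example record
example = CongressRecord("Example Anatomical Society", True, True, True, "hybrid")
print(classify(example))  # -> "hybrid"
```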
One of the three Covid-19 clusters detected early in Singapore in February 2020 referred to a company congress attended by 111 participants working at the branches of a worldwide corporation located in 19 countries (Pung et al., ). That cluster of disease probably arose from circumscribed local transmission during that convention or, in other words, from the handshaking, physical contact, meal sharing, participation in presentations and discussions, workshops, and social events typical of that sort of conference (Pung et al., ). That particular cluster referred to a business meeting, but the same dynamics occur at a medical congress. Prior to 2020 and the Covid-19 pandemic, scientific conferences and meetings played an important role in scientific exchange and progress, as they were the medium whereby researchers presented their latest results to the rest of the scientific community. They also represented an opportunity for training sessions, debates, discussions, round tables, and workshops. At the same time, while knowledge was shared, collaborative networks were formed and friendships were cultivated (Achakulvisut et al., ; Weissgerber et al., ). When the emergency first began, many national and international anatomical societies postponed and then canceled their in-person congresses scheduled for 2020 (Tables and ). Complying with the directives of the Centers for Disease Control and Prevention (CDC) and the WHO (CDC, ; WHO, , ), many scientific societies did likewise (Hermieu et al., ; Porpiglia et al., ; Soriano Sánchez et al., ; Weissgerber et al., ). But after the first uncertain response to the emergency, the scientific community set out to find new avenues and formats for scientific exchange that could minimize or prevent the consequences of in-person congresses. The arrangement of any conference, convention, seminar, meeting, symposium, forum, or consultation involving even minimal participation must face several critical issues that risk jeopardizing its scientific success if carried out as a physical meeting, that is, in the classical way such events were planned up to now (Table ). These issues fall into three groups: factors related to the epidemiology of the pandemic, factors related to the educational event itself, and factors related to the speakers/attendees. Is there a future for in-person congresses of anatomical societies in the post-Covid world? It is well established that Covid-19 spreads mainly through close contact (typically less than a meter) with an infected person. An individual becomes infected when aerosol particles or fine droplets are inhaled or come into contact with the mouth, nose, or eyes, particularly in crowded, indoor, and poorly ventilated settings (WHO, ). Events that bring together large groups of individuals from multiple countries, including countries with less stringent viral containment policies, can thus become important hubs for the transmission of infectious diseases (Alirol et al., ; Desai & Patel, ). Just as it is difficult to predict the pandemic's evolution (WHO, ), it is expensive, time consuming, and risk-laden to organize a global event in which hundreds or thousands will be participating: consider the arrangement of venues and rooms, dedicated personnel, materials, and facilities, which is challenging given the differences in epidemic curves between and within continents as well as between and within countries. This is particularly critical for events involving faculty and attendees from all over the world (Porpiglia et al., ).
Moreover, the time and expense involved in organizing an in‐person scientific congress could be jeopardized by rapid changes in local conditions and precipitous lockdowns of a region or country or new peaks linked to seasonal changes (Margolis et al., ; Fawcett et al., ; WHO, ). The epidemiological picture continues to change rapidly, and from one moment to the next countries with no access limits can and have introduced restrictions (EU, ; UNECE, ). There are other complications to international travel such as the European Union (EU) Digital Covid Certificate, which went into effect on June 1, 2021 providing digital proof that a traveler has either been vaccinated, has received a negative test result or recovered from the virus, thus allowing Europeans to move freely within the union (EC, ). At the moment, it is difficult to predict the future evolution of the pandemic, nor its duration (Cucinotta et al., ). It is unknown if the waves that may come will be similar to those of the past, nor if environmental factors could worsen the viral effects in some regions or cities (Coccia, ; WHO, ). Evidence has been emerging that strict travel restrictions are probably unjustified in countries that have good international travel connections and a very low local COVID‐19 incidence (Russell et al., ). This is true for the viral forms that are currently widespread, but it may not be valid for the emerging, worrisome variants (Callaway, ; Chung et al., ; Gobeil et al., ; WHO, ). Other variables that are unpredictable are the immunization coverage that each country will achieve over time, the effectiveness of the different vaccines to the virus and the contagiousness and lethality of new variants (WHO, ; Soleimanpour and Yaghoubi, ; WHO, ).
There are also important critical factors concerning the educational events themselves. Crowded, confined, or enclosed settings where people are in close proximity for prolonged periods, in particular if there is poor ventilation, facilitate the circulation of Covid‐19 (WHO, ). In order to be able to ensure safety and guarantee social distancing, the organizers need to arrange for larger spaces, increased hygiene practices, sanitization and cleaning protocols, as well as safe food service and personal hygiene products (WHO, ), all representing extra expenses to the congress budget. Moreover, sponsors and exhibitors that generally cover part of the cost of large‐scale events have the following variables to take into consideration. First of all, management costs, as explained, will be higher. Second, the global economic recession may lead to lower profits for companies investing in conference events. On the other hand, it is also true that some companies specializing in educational technologies are experiencing phenomenal economic growth due to restrictions imposed by the pandemic on travel and in‐person attendance. Third, it has been hypothesized that Covid‐19‐related topics will probably absorb a significant portion of funds allocated to scientific research (Martelletti, ), meaning that there will probably be less funding for research in other fields as well as less availability for educational events. Finally, few sponsors or scientific committees will be willing to take the risk connected to finding the funding necessary to organize an in‐presence event knowing that an infected participant could unknowingly infect others leading to unforeseeable consequences. Probably no medium or large congress can be classified as an event whose risk is acceptable and consciously accepted. Not every conference organizer and scientific society would be willing to take the hazard.
There are also considerations regarding the speakers/attendees, who play a central role in the success of an educational event. Laboratory data suggest that infected persons appear to be most infectious 2 days before they develop symptoms and early in their illness (WHO, ). An attendee who becomes symptomatic during a congress, tests positive on a swab test, or has had contact with a positive person would need to quarantine as required by the host country's protocol (McDowell et al., ). Moreover, it remains to be seen how the local health and political authorities would react to an outbreak within the context of a medical conference. Quarantining the participants, canceling the rest of the event, and even blocking attendees at the airport are all possible measures that public officials could implement (Russell et al., ). The probable consequence of such a scenario would be that the infected person(s) would be required to remain in the host country longer than planned, with all the personal, professional, and financial inconvenience that such a necessity would imply. Apart from these extraordinary and unpredictable circumstances, there may be increased costs due to the complicated logistics necessary for the event. This can be due to distancing rules and delays in moving between rooms, the need for cleaning between scientific sessions or presentations, or the management of staggered meals that avoid the self-service model. Increased accommodation costs will probably add to more expensive airline tickets and transfers, as transport companies have to reduce the number of passengers carried in order to respect distancing. Everything would weigh on the institutional funds assigned to researchers, which will predictably diminish due to the economic crisis, or worse, take resources away from research. According to a recent study, countries can expect Covid-19 infected travelers to arrive in the absence of travel restrictions (Russell et al., ). However, such restrictions should be implemented only after analysis of local incidence, epidemic growth, and travel volumes (Russell et al., ). In any case, undeniably, large international medical congresses could be a vehicle for virus transmission and need to be reevaluated not only for the sake of international travelers but also for that of the local community hosting an in-person conference.
Anatomical societies organize congresses of and for their members to promote and facilitate research, collaboration, and scientific exchange and discussion. At the same time, congresses are an opportunity for policy‐making decisions and for organizing workshops and specialized training sessions. No less important, professional and personal networks are formed and reinforced. Congresses and conferences can also be a vehicle for further funding and academic recognition. Scientists at all stages of their careers consider meetings as opportunities to gain recognition and receive a feedback regarding their research studies as well as to establish networks with other colleagues and collaborators (Weissgerber et al., ). All of the above considerations continue to be valid despite the ongoing pandemic. As far as mitigating the impact of conference and travel cancelations of planned meetings on researchers is concerned, several recommendations have been formulated including that of transforming conferences into virtual events and hosting poster sessions online (Weissgerber et al., ). Others have proposed acknowledging the contribution of speakers' accepted presentations and making abstracts, posters, and other conference materials freely available online (SOT ; Weissgerber et al., ). While these proposals can be applied to planned congresses/meetings, new alternatives need to be formulated to address the challenges anatomical societies are facing as far as future events are concerned. Fortunately, online possibilities seem to be able to fill the gap between old and new solutions, and the digital revolution that is already underway as far as online education and virtual events are concerned seems to be ready for the challenge (Fawns et al., ). Now, it would seem, is the time to further develop the possibility of virtual educational events.
Every systemic crisis can represent an opportunity to rethink positions and strategies and to implement new solutions. Indeed, a crisis or emergency can be an unrepeatable opportunity to kick-start the reevaluation process, and this is true for the traditional congresses of national and international societies of anatomy during the coronavirus pandemic. Many anatomical societies have been working to find new avenues for congresses by organizing online seminars commonly known as webinars, that is, seminars on the web. Independently of health considerations, webinars can accommodate more participants than a normal conference room can. The chance to manage a virtual congress using ready-to-go online technology seems to be at once attractive and safe, as experienced with webinars (Fadlelmola et al., ). Two types of congress alternatives can be used instead of the traditional one: the fully virtual and the hybrid congress, which are held entirely or partially online, respectively. In the former case, all of the scientific content of the congress is transmitted to all of the participants via real-time or delayed streaming. In the latter case, some of the participants attend in person, with all the travel, risk, and inconvenience that it entails, while others participate entirely online via video conference. The onsite part of the event has to comply with the local health authority's regulatory framework, taking the WHO recommendations into account. These encompass the cardinal principles of the fight against transmission: reinforcing frequent handwashing and sanitation, procuring needed supplies, regularly cleaning and disinfecting buildings, rooms, and surfaces, implementing social distancing practices, and using personal protective equipment (WHO, ). Both the fully virtual and hybrid alternatives can use live and delayed streaming or a combination of the two and can offer differing degrees of interaction between the attendees and the speakers/moderators. These approaches have already been tested at some conferences that have streamed plenary lectures or selected presentations (Porpiglia et al., ). It is only a matter of extending this option across the full conference content. There are in any case positive and negative implications linked to the transition to (at least partially) online congresses (Table ). Obviously, they are more accentuated for the fully virtual mode than for the hybrid one. The positive and negative implications of the transition to online events are discussed below. Study limitations. This study has some limitations. First, this review of educational events organized by anatomical societies was restricted to member societies of the IFAA or the EFEM, although the authors believe it provides the information they set out to disclose. Second, despite the efforts, some data regarding planned congresses are missing, as many of the websites were not up to date and some of the authors' inquiries went unanswered. Third, the global situation is continuing to evolve even now, making it difficult to draw generalizations.
A traditional congress can last as long as a week or more, and travel time and jet lag also need to be taken into consideration. Indeed, participating in a traditional offline congress is typically time and money consuming due to the costs of flights, airport transfers, hotel accommodations, conference registration fees, and may cause a disruption in professional or family obligations at home (Longhurst et al., ; Viglione, ). Travel can also prevent scientists from participating, as some countries might introduce movement restrictions because of the pandemic or religious and political issues (Weissgerber et al., ). In many cases these costs and the others linked to congress participation may be absorbed by the scientific system, the university administration, or the scientist him/herself (Achakulvisut et al., ). Researchers from countries with limited funding, graduate students, and post‐doctoral researchers, or individuals with disabilities are the ones who would benefit the most from a fully virtual approach (Weissgerber et al., ), and they are also the ones who are less likely to participate in a traditional offline congress (Viglione, ; Cucinotta et al., ). The costs of a virtual event are generally lower than those of a traditional congress, and its carbon footprint is reduced since no flights are involved (Li et al., ). At the same time, the commitment of organizing an online congress is in many ways less demanding (Lecueder & Manyari, ) as no venue needs to be found months or years in advance, no onsite secretariat or hotel booking, catering or sanitation services or incoming/outgoing transportation or specialized onsite technical assistance, or social programs for accompanying people need to be arranged (Achakulvisut et al., ). Most importantly, a virtual format would open the congress to a wider number of participants who could participate with greater ease, less cost, and no risk with respect to a traditional in‐person congress. A virtual congress modality would allow participants to attend at reduced or even canceled congress fees. In fact, some societies have chosen to abolish registration fees to virtual events in order to give more members the possibility to benefit from the educational opportunity a congress represents (Lecueder & Manyari, ). Graduate students or post‐doc fellows could in particular benefit from the stimulating atmosphere of the congress at minimal or no expense (Weissgerber et al., ), likewise for individuals with disabilities or communication limitations (Li et al., ). At the same time, outstanding scientists could accept invitations to online congresses with greater ease because the time commitment and impact on professional/personal commitments would be significantly reduced (Achakulvisut et al., ). Likewise, researchers from institutions or countries with fewer opportunities could benefit from online events and the possibility to interact with senior scientists (Cucinotta et al., ). There was always a limit to the number of rooms that were available for traditional onsite conferences (Lecueder & Manyari, ), which meant that only a limited number of speakers could be invited, and whatever space that was left over was dedicated to poster presentations. More scientists could present their findings at an online congress and posters, could be converted into short presentations, and it is possible to manage a high number of parallel sessions (Achakulvisut et al., ; Price, ). 
Participants' time management is improved at virtual meetings, and more time can be dedicated to question/answer sessions and discussions as well as to keynote addresses or plenary sessions (Price, ). What is more, all the oral presentations can be recorded for the attendees' convenience. Participants who live in different time zones or those with professional or personal obligations can view recordings of presentations of interest at their convenience, without missing out on simultaneously presented lectures (Longhurst et al., ; CROI, ; Cucinotta et al., ). Recordings make it possible to view a lecture as many times as desired, in particular those sections that are more complicated or difficult to understand. In addition, in a virtual modality all the participants can see and hear the speaker clearly (Achakulvisut et al., ) and, in some cases, can ask the speaker questions with greater freedom while avoiding all risk of contagion (McBrien et al., ; Martin-Gorgojo et al., ). Virtual congresses can usually offer better and wider translation facilities to overcome language barriers (Lecueder & Manyari, ; Margolis et al., ). Those who have language difficulties or need to listen to a presentation several times to grasp its full meaning would benefit. As many online solutions allow participants to pose questions or comments, this would certainly enhance their interest and participation (O'Flaherty and Laws, ; Achakulvisut et al., ; McDowell et al., ; Price, ). The only technical requirement to participate in a virtual congress is a computer and an internet connection; more sophisticated, up-to-date software and/or hardware may nevertheless be necessary for some sessions. Another appealing aspect relates to sponsors and exhibitors, who could be encouraged to participate because of the lower cost. Online congresses would eliminate the need for physical stands to be set up, dedicated staff, transport of exhibition material, and the cost of gadgets. At the same time, sponsors could use information material prepared for other media campaigns at no additional cost. Furthermore, on online platforms there are no problems of space to be granted to sponsors and exhibitors, nor any need to rent larger and more expensive venues to satisfy all requests (Lecueder & Manyari, ).
Despite the appeal of virtual events, they do have some important limitations (Table ). Human contact and the possibility to socialize and discuss common interests with members of the scientific committee, other attendees, and representatives of pharmaceutical companies cannot be reproduced on an online platform. Many formal and informal interactions between the participants of a congress would no longer be possible, and there would be fewer opportunities for networking (Achakulvisut et al., ). Understandably, some will miss the in-person get-togethers and the fortuitous collaborations that can arise over coffee (Price, ). This is a clear limit to the development of human relations in all contexts, especially those in which even non-verbal communication takes on importance. In the scientific field, one can discuss specific issues by giving the conversation a different tone and approach depending on the interlocutor one is facing. Sometimes, personal feelings about the speaker or the listener's feedback can make the difference between collaborating and avoiding each other. This can only be understood in person (Lecueder & Manyari, ). It is nevertheless possible to provide some level of interaction between attendees, moderators, and speakers (Table ). Chat rooms, forums, and breakout rooms can be created, and algorithms can be used to match participants' interests, skills, and attitudes and thus foster contacts that paradoxically could be more focused and effective than trying to meet an expert in the crowd of a large congress (AHE and APH, ; Achakulvisut et al., ; Mackenzie and Gulati, ; Martin-Gorgojo et al., ; Sayre et al., ). While attendees who live in different time zones or who have family obligations may not be able to participate in live-streaming sessions (Longhurst et al., ), lectures can be made available in a recorded mode (Fawns et al., ). From a technical point of view, the uneven distribution in access to communications technologies, that is, the so-called digital divide, is something to consider, as a large percentage of the world's population does not have access to the internet (Fawns et al., ). Those without access will be unable to benefit from a virtual format (but they were also unable to travel to a traditional congress) (Wimpenny and Savin-Baden , Ilgaz and Gülbahar ). The problem could be minimized by recording scientific sessions and keeping them active online well beyond the event so that interested individuals could seek access to the internet at a later date (Fawns et al., ). It would seem that many individuals in this type of situation manage to access the internet, if not immediately or in their homes, then via the institutions they attend (Lecueder & Manyari, ). The roles sponsors and exhibitors play in reducing the costs of medical congresses have always been relevant. How will their function change? Hypothetically, they could put pressure on the organizers to place distracting advertising material in live-streamed sessions. The problem could be partially solved by agreeing to sponsored sessions with only static advertising (VSLRS, ; EAU, ). In this regard, it must be remembered, of course, that as stated by the International Academy for Continuing Professional Development Accreditation (IACPDA, ), commercial support must be based on fairness, transparency, and the precise separation of promotion/advertising from education. Another aspect that needs to be considered is the companies offering technical assistance to develop events (Evans et al., ).
Alongside long‐term professionals, new firms will probably appear. While the former may be expensive, the latter may not produce satisfying results. This will probably be a short‐lived problem, as there will be a natural selection among competitors, and in the end, the most reliable and qualified professionals will prevail. In addition, event management platform software could also be used to simplify the planning process (Mackenzie & Gulati, ; McDowell et al., ). Finally, and most importantly, there are the regulatory and ethical constraints that need to be considered with regard to online events of the anatomical sciences. Digital images of human cadaveric dissections could hypothetically be disclosed and shared to sources beyond congress participants. Tailored agreements and shared ethics code involving both the speakers and attendants need to be formulated. A system of access control to an online platform would make it possible to verify the participants' identities and prevent others from viewing anatomical material and thwart hacker attacks (Artibani et al., ; IFAC, ; Pather et al., ). During this time of crisis some schools of anatomy have been streaming anatomical dissections in collaboration with nearby universities, generally as training sessions for medical students (Fawns et al., ). The Institute of Human Anatomy at the University of Padova has long collaborated with some Institutes of Surgery streaming cadaveric dissection laboratories during which professors of anatomy interact with professors of surgery for teaching purposes. Addressing a few hundred online attendees is, however, quite different from managing an international event. As it was said, some anatomical societies have attempted to produce alternative hybrid or fully virtual online events (Tables and ), and the international anatomical community has been examining these experiments with great attention. Virtual congresses are generally appreciated by most participants, even hypothesizing that they must remain in the current format in the immediate future due to the effects of the pandemic (Martin‐Gorgojo et al., ; McDowell et al., ; Cucinotta et al., ). Innovative communication solutions and ever greater academic cooperation could be unexpected but very positive outcomes of the current crisis. Reinforcing these collaborations by developing an online anatomical network could create an important resource for the entire anatomical community and provide an opportunity for integrating new technologies and social media into congressional events (Mackenzie & Gulati, ; McDowell et al., ; Price, ). In an ambient of collaboration and exchange, the digital competencies that some institutions develop would benefit those centers with limited resources. As the Anatomical Science Education journal has reported, the anatomical community is energetically striving to find solutions to Covid‐19‐related issues such as online and near‐peer teaching formats. As many of the world's people are vaccinated, there could be a light at the end of the tunnel, although new viral variants have cast some shadows. In any case, the anatomical community will need to meet the challenges that continue to arise.
The measures that have been implemented to protect public health and reduce the impact of the Covid-19 pandemic have had a critical effect on the daily lives of most of the world's people. As far as anatomical congresses are concerned, it is improbable that it will be possible to go back to the traditional format when the pandemic is over. The adjustments that anatomical societies have been forced to make in order to continue to provide a vehicle for scientific development and exchange have accelerated a digital revolution that was already in progress. While there are some limitations to the two online congress modalities treated here, they undeniably have many advantages and benefits. At the same time, the hybrid format does give the participant the choice about how he/she wants to participate. Perhaps other congress forms will be developed and evolve, but in any case virtual scientific meetings are here to stay. All medical societies, and in particular anatomical ones, are searching for innovative ways to continue to promote the education and advancement of their members through scientific exchange and to provide guidelines for educational, technological, and scientific purposes.
Reproductive Justice in the U.S. Immigration Detention System

Immigration law was first conceived within a racialized and gendered framework with the goal of excluding specific groups from the national body. These laws were racialized because immigrants, many of whom were not White, were deemed inferior and therefore unworthy of political influence. These laws were gendered because immigrant women carried an immense power in their ability to bear a future generation of U.S. citizens. From the very inception of federal immigration law, with the passage of the Page Act in 1875, Congress banned the entry of Chinese women who had a history of sex work or polygamy. Although Chinese immigrants were unable to naturalize during this period of time, the newly ratified Fourteenth Amendment established birthright citizenship regardless of parental origin. The reproductive power of Chinese women was perceived as the ultimate threat to the cultural and ethnic purity of the nation-state. By banning the entry of Chinese women, the first federal immigration laws were formed to reflect the nation's interest in controlling the entry of people with reproductive capacity in an attempt to maintain its cultural and political hegemony. As eugenic ideas became popularized in the early 1920s, reproductive control was increasingly intertwined with conceptions of White nationhood. Fears about the dwindling White population were then codified into restrictive immigration policies with the passage of the Federal Immigration Restriction Act in 1924. This act contained quota provisions to limit the entry of non-White immigrants, whom Congress viewed as tainting American morality. This act, along with the quota system based on national origin, was in effect for 41 years until the creation of the Immigration and Nationality Act in 1965. Although explicit racialized exclusion was formally repealed when the Immigration and Nationality Act was enacted, the foundation of U.S. immigration law remains firmly cemented in the design for national identity of the quota system. The most egregious instances of reproductive abuse have occurred in settings where racial and economic inequities are prominent and where nativist impulses are empowered. People of color have historically been coerced into sterilization through targeted campaigns and a lack of informed consent. In an era in which White women in the United States had trouble convincing doctors to sterilize them, the government was directly and indirectly funding forced sterilizations of Latina, Black, and Indigenous women. Ties between imperial power and reproductive control are particularly evident in the coerced sterilization of Puerto Rican women. Despite being U.S. citizens, Puerto Ricans do not always experience full citizenship because of Puerto Rico's history as a colony rather than a state. After the U.S. colonization of the island in 1898, the government labeled Puerto Rican poverty as an issue of overpopulation. Over the next century, the government effectively replaced prior community birth control practices with a state-run agenda, resulting in a loss of agency over reproductive decisions. For many women, sterilization was the only option to engage in an economy designed to benefit U.S. corporations; many factories would not hire women unless they received reassurance that a pregnancy would not affect production.
The pattern of reproductive oppression as intertwined with nativist motivations is also manifest in the case of Madrigal v Quilligan (1978), in which ob-gyns coercively sterilized Mexican American women in response to an increase in Mexican immigration. Most of the plaintiffs were monolingual Spanish speakers and testified that they had not understood that the tubal ligation surgery would permanently affect their ability to have children. Others were threatened with deportation if they did not consent to the sterilization. A medical student who had observed the coercive behavior described the widespread belief within the hospital that women from Mexico were a threat to society because of their high rates of fertility. The physicians held the stigmatizing belief that the women did not qualify as American, and thus did not require the same consent process, because they came to the United States specifically to give birth and obtain birthright citizenship for their children, which created a burden on society. These physicians had, consciously or unconsciously, taken on the eugenicist language that permeates the legal, cultural, and political fabric of the United States. These examples illustrate how perpetrators of forced sterilization in the United States historically targeted people whom they perceived as noncitizens because of their racial or ethnic identity, incarceration, or institutionalization. Within this history, we can begin to confront a system built around the exclusion of people of color and identify the ways in which these narratives persist within the medical field. The underlying nativism at the heart of immigration policy continues to play out in both covert and explicit attempts to maintain control of the racial demographics of the country. After the September 11, 2001, attacks, the immigration landscape shifted drastically, with a transfer of oversight to three different agencies within the Department of Homeland Security. Thinly veiled by an ideology of national security, today's immigration enforcement is largely responsive to reinvigorated xenophobic anxieties that framed immigrants themselves as a threat to the nation-state. In addition, the legal conception of national sovereignty in this context has resulted in judicial exemption from due process and equal protection standards that would have applied within any other domestic context. This legislative and judicial shift that took place after September 11 created clandestine spaces within detention centers where immigration enforcement authorities could act with impunity and continue practices that infringe on the bodily autonomy of immigrants. Violations of reproductive justice continue to occur throughout the immigration detention system as the current legal landscape allows our history of racialized reproductive control to persist. In addition to the recent reports of forced hysterectomies in Immigration and Customs Enforcement detention, , there are reports of abortion bans for unaccompanied minors in detention, forced separations of families at the border, mistreatment of pregnant immigrants in detention, , and medical neglect during pregnancy and childbirth. 
Other reports demonstrate that undocumented pregnant women have been increasingly targeted by punitive immigration policies, and the frequency of deportations among pregnant individuals has led to speculation that pregnancy itself has become a “red flag for removal by [Immigration and Customs Enforcement] officials.” In addition, toxic stress from Immigration and Customs Enforcement raids has been associated with increased rates of preterm births among Latines and limited reproductive autonomy. The legally recognized purpose of detention is to ensure compliance with immigration processes, yet rates of compliance are similar among immigrants in community settings and immigration detention. Immigrants in community settings avoid the negative health harms experienced by those detained, especially for those who are pregnant. Although detention may seem far removed from many immigrant communities, the number of detained immigrants in recent years is higher than ever before. In 2021 alone, 1.6 million migrants were detained at the U.S.–Mexico border. Detained individuals are often held longer and in worse conditions in immigration detention facilities operated by for-profit private prison companies as a result of cost-saving and profit-maximizing measures. Ending contracts with private prison firms must be a priority on the path to decarceration and reproductive justice. Reproductive justice concepts are being integrated into medical education, and at least one published curriculum has included immigration barriers in the context of reproductive rights. Yet, gaps remain in applying the reproductive justice framework to noncitizens and understanding the history of immigration law as a structural determinant of reproductive health for immigrants. Some of the most egregious examples of reproductive control arise within the U.S. immigration detention system, which restricts reproductive autonomy of immigrants on multiple fronts. Detained immigrants have been deprived the opportunity to have children in cases of coerced sterilization and forced to have children through significant barriers to abortion and contraception access. Detaining immigrant children strips parents of the right to nurture their family in a safe and healthy environment. When it comes to the reproductive health of noncitizens at the U.S.–Mexico border, the immigration context is ripe for abuse, considering the almost complete absence of judicial oversight in this area. Obstetrician–gynecologists are critical agents of change in promoting reproductive justice for immigrants both individually and collectively. On an individual level, learning about the historical examples of reproductive injustice that shape contemporary realities can lead to reflection, interruption of harmful biases, and improvement in physician–patient interactions. Obstetrician–gynecologists must create professional norms and systems for accountability regarding those who perpetuate abuses of reproductive rights and denounce both individuals and the systems that permit those practices to occur. Building relationships with legal professionals and community organizations to provide the best comprehensive care for immigrants is critical to advance reproductive justice at the individual level. 
Obstetrician–gynecologists should familiarize themselves with state policies regarding eligibility for insurance coverage and other social programs for immigrants based on legal status because undocumented immigrants, even if released from detention, are excluded from many public services, including insurance coverage under the Affordable Care Act. , On a collective level, ob-gyns and their professional organizations should advocate for federal and state policies that dismantle the structural harms of racism and xenophobia and advance reproductive justice and immigrant justice. For example, the American College of Obstetricians and Gynecologists successfully advocated to abolish shackling of pregnant people during childbirth in the carceral system. Future policy changes relating to immigration detention should ultimately focus on pathways to decarceration and the long-term goal to eliminate detention entirely. Although immigration detention continues to exist, policy changes should prioritize alternatives to detention programs (release on bond), particularly for pregnant people, such that immigrants are integrated into the community rather than imprisoned. In the interim, harm reduction strategies for pregnant individuals in detention include the following: 1) providing comprehensive preventive care, including high-quality prenatal care in accordance with standards supported by medical associations, in a system currently designed to manage acute care ; 2) improving surveillance and transparency of quality measures, particularly around reproductive health and pregnancy; 3) ensuring accountability when quality measures are not upheld, such as terminating contracts with facilities that violate standards; and 4) improving existing mechanisms to report grievances regarding care received in detention that do not lead to retaliation. All immigrants, whether they live in communities or in detention, should have universal access to health care. Expanding Medicaid coverage during the first year after childbirth to ensure that all people, including those in detention, have access to comprehensive care is a policy priority. However, undocumented immigrants are eligible for state Medicaid coverage only in certain states, limiting the effects of Medicaid expansion for those residing in states where they are eligible for coverage. Ultimately, a multilevel approach that addresses the roles of individuals and systems that perpetuate inequities will be critical to ensuring that immigrants are treated with dignity and respect when seeking reproductive health care. Reproductive justice is immigrant justice, and the long histories of both immigration and reproductive rights policies are intertwined, producing specific intersectional vulnerabilities for immigrant communities. Obstetrician–gynecologists hold a unique position in advocating for policies that ameliorate the harms conferred in immigration detention and partnering in solidarity with community organizers to promote reproductive justice. |
Disease-induced changes in bacterial and fungal communities from plant below- and aboveground compartments

Potato virus Y (PVY), the type member of the genus Potyvirus, is a major plant viral pathogen that causes significant economic losses worldwide (Faurez et al. ). Researchers have found that PVY infection can lead to foliar and/or tuber disease, with symptoms varying based on the virus strain, host growth stage and susceptibility, and environmental conditions (Fox et al. ). Non-persistent transmission of PVY by aphids (e.g., Myzus persicae) through infected sap has been found to be a major route of spread between plants (Deja-Sikora et al. ). Recently, research has extensively focused on the evolutionary history, functional dissection, infection sources (Coutts and Jones ), transmission modes (da Silva et al. ), and plant resistance (Petrov et al. ). However, infected residues in seeds and plant tissue in soil are the primary infection sources for virus diseases, and the above approaches are ineffective in preventing the spread of PVY infection. Therefore, it is now necessary to develop potentially successful strategies to manage the virus from its primary infection sources (e.g., soil) and prevent its spread in plants. Research on the interactions between crop plants and diverse microorganisms provides a potential approach through microbiological agents. Plant growth and physiology are influenced by the microbiome, including viruses, bacteria, fungi, insects, and other invertebrates. These diverse biotic interactions (involving antagonistic, protective, exclusive, or symbiont effects) among tripartite members are of particular interest for crop plants due to their impacts on crop production (Deja-Sikora et al. ). It is documented that the extensive microflora present in the phyllosphere, endosphere, and rhizosphere influences plant growth, soil fertility, and disease (Etesami and Adl ). Bacteria and fungi are two major groups that exert many biological effects through antibiotics, signaling molecules, physiochemical environment modulation, chemotaxis, cooperative metabolism, protein secretion, and even gene transfer (Frey-Klett et al. ). Various biological strategies based on biotic interactions, such as the application of microbial agents, are carried out to control plant diseases caused by pathogenic microorganisms. For decades, numerous studies have described the diversity of microbiomes associated with agricultural crop plants, paving the way for the use of microbes as biofertilizers and biopesticides (Lemanceau et al. ). For example, Bacillus amyloliquefaciens FZB42 is a commercially available bacterial strain that produces bacillomycin D to antagonize the fungus Fusarium graminearum, which infects wheat and barley (Gu et al. ). The wheat microbiota has been shown to reduce the virulence of the plant pathogenic fungus Fusarium graminearum by altering histone acetylation (Chen et al. ). Although many microbial agents have been applied in agricultural ecosystems to control microbial diseases, the colonization of functional microorganisms is the key limiting factor. Understanding the complex biotic effects of the plant microbiome provides new insights for the development of more efficient and stable microbial agents in agricultural production. Microbial communities in the plant rhizosphere and endosphere habitats are important for controlling PVY infection and transmission, respectively.
Pathogens of PVY primarily exist in the soil, and their infections can be modulated by rhizosphere microbes. Some plant growth–promoting rhizosphere microorganisms have the ability to synthesize various organic compounds, phytohormones, siderophores, and lytic and antioxidant enzymes. Additionally, they can enhance the stress-responsive ability of plants to remove plant pathogens from the soil (Hashem et al. ). Plant species, distinct rhizosphere environments (e.g., soil properties), and exudate blends were reported to be the main factors shaping the microbial assembly around plant roots (Berg et al. ). Endophytes are microbes that inhabit plant tissues and play crucial roles in plant growth, health, and resistance (Hardoim et al. ). They are classified into three types according to ecological functions: (i) The first group is called commensal endophytes, which show no apparent effects on plant performance. (ii) The second group provides beneficial effects for plants, such as plant growth promotion and protection against invading pathogens. (iii) The third group consists of latent pathogens. Endophytes are usually transmitted vertically through seeds and proliferate under local conditions inside the plant. Interestingly, interactions between endophytes and host plants can also affect rhizosphere microorganisms. Previous studies have shown that soil microbial activity can also be inhibited by endophyte infection of aboveground plants (Buyer et al. ; Tong et al. ). Researchers have come to understand the importance of the effects of pathogen infections on rhizosphere and endophyte microbial communities. However, the ascending migration pattern of microbial communities under disease infection remains unclear. We hypothesized that bacterial and fungal communities would exhibit different responses to PVY infection, based on variations in body size, diversity, dispersal potential, ecological function, and correlation with the host and other microorganisms. In addition, bacterial communities may be more susceptible to infection than fungal communities. To test the hypotheses, we investigated the ascending migration pattern of microbes from below- to aboveground (both bacterial and fungal), explored taxonomic differences between healthy and diseased plants using amplicon sequencing, and compared microbial networks of healthy and infected plants.
Sampling

All samples were collected from the main tobacco production fields in Changde (29°13′30″–29°59′19″N, 110°28′40″–110°58′30″E), Hunan province, China. The two sites in this study were flooded fields (F) and upland paddy fields (P), respectively. The tobacco cultivars were the same at both sites and were planted using the same agronomic practices. Bulk soil, rhizosphere soil, and plant tissues (including root, stem, and leaf) were sampled in August 2022 from both fields. In each field, tobacco plants that showed significant height reduction and venous necrosis symptoms were classified as PVY-infected plants, while plants showing no significant symptoms were classified as healthy plants (Latorre et al.). Six replicates of healthy and infected plants were collected from each field. Bulk soil was collected 20 cm away from the root, and rhizosphere soil was collected from soil adhering to the root by hand shaking. All samples were transported to the laboratory on dry ice. Then, soil samples were divided into two parts: one part was stored at − 80 °C for microbial experiments, and the other part was sent to the School of Resources and Environment at Southwest University for soil property measurements. The pH value, water content (WC), organic matter (OM), total nitrogen (TN), alkali-hydrolysable nitrogen (AHN), and the other soil nutrient properties reported below (including total phosphorus (TP), available phosphorus (AP), total potassium (TK), and available potassium (AK)) were tested. Plant tissue samples were washed with plenty of water to remove soil and dust from the tissue surface. Then, plant tissues (about 5 g) were successively immersed in 70% ethanol for 10 min, 5.25% sodium hypochlorite solution for 5 min, and 70% ethanol for 1 min, and finally washed with sterile water (Gao et al.; Liu et al.). Treated tissues were ground with liquid nitrogen in a sterile mortar and then stored at − 80 °C for further microbial experiments. Overall, there were 18 treatments in this study: nine treatments in flooded fields, including bulk soil (FC), rhizosphere soil (FZ: FHZ for healthy plants and FIZ for infected plants), plant root (FR: FHR and FIR), stem (FS: FHS and FIS), and leaf (FL: FHL and FIL), and nine treatments in upland paddy fields, including PC, PZ (PHZ for healthy plants and PIZ for infected plants), PR (PHR and PIR), PS (PHS and PIS), and PL (PHL and PIL).
Total DNA was extracted from the soil and plant samples using the FastDNA SPIN Kit for Soil, following the manufacturer’s instructions. The primer 799F (5′-AACMGGATTAGATACCCKG-3′)/1115R (5′-AGGGTTGCGCTCGTTG-3′) was used to amplify the V5-V7 region of the bacterial 16S rRNA gene (Kembel et al. ; Deng et al. ), and primer fITS7/ITS4 was used to amplify the fungal ITS2 region (de Vries et al. ). Sequencing was performed using the Illumina Hiseq2500 platform at MEGIGENE Biotechnology Co., Ltd. (Guangzhou, China). The raw data for bacteria and fungi were all uploaded to the NCBI database in the projects of PRJNA946037 and PRJNA946055. The 16S rRNA and ITS sequences were processed using QIIME2 platform (2020.6) with default parameters (Zhang et al. ). First, primers from reads were removed to obtain clean sequences. Then, DADA2 was used to generate feature tables based on the clean sequences (Callahan et al. ). Finally, taxonomic assignment was conducted according to the SILVA reference database and the UNITE database for bacteria and fungi (Kõljalg et al. ; Quast et al. ), respectively. Singlet reads; bacterial amplicon sequence variants (ASVs) classified as chloroplast, mitochondrion, or Viridiplantae; and fungal ASVs classified as plant or protist were removed. The feature table and taxonomic table for bacteria and fungi were used for downstream analysis.
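For readers who want to reproduce this type of ASV workflow, the following R sketch illustrates the main denoising and taxonomy-assignment steps using the dada2 package directly rather than the QIIME 2 wrapper used in the study; all file names (including the SILVA training set) are placeholders and not the exact inputs of this work.

```r
# Illustrative ASV workflow with the dada2 R package (not the study's exact QIIME 2 pipeline)
library(dada2)

path   <- "trimmed_reads"  # primer-trimmed, paired-end FASTQ files (hypothetical folder)
fnFs   <- sort(list.files(path, pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs   <- sort(list.files(path, pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path(path, "filtered", basename(fnFs))
filtRs <- file.path(path, "filtered", basename(fnRs))

# Quality filtering
out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs,
                     maxEE = c(2, 2), truncQ = 2, multithread = TRUE)

# Error learning and ASV inference
errF   <- learnErrors(filtFs, multithread = TRUE)
errR   <- learnErrors(filtRs, multithread = TRUE)
dadaFs <- dada(filtFs, err = errF, multithread = TRUE)
dadaRs <- dada(filtRs, err = errR, multithread = TRUE)

# Merge read pairs, build the ASV table, and remove chimeras
merged <- mergePairs(dadaFs, filtFs, dadaRs, filtRs)
seqtab <- removeBimeraDenovo(makeSequenceTable(merged),
                             method = "consensus", multithread = TRUE)

# Taxonomic assignment (SILVA for 16S; a UNITE reference would be used for ITS)
taxa <- assignTaxonomy(seqtab, "silva_nr99_v138_train_set.fa.gz", multithread = TRUE)
```

Unwanted ASVs (e.g., chloroplast, mitochondrion, or plant assignments) would then be filtered from seqtab and taxa before downstream analysis.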
In order to understand how microbial communities changed under PVY infection, four co-occurrence networks (HB: bacterial network for healthy plants, IB: bacterial network for infected plants, HF: fungal network for healthy plants, IF: fungal network for infected plants) were constructed using SparCC (Friedman and Alm) implemented in the SpiecEasi package in R (version 4.0.0) and visualized in Gephi (version 0.9.2). ASVs present in at least 1/6 of bacterial samples and 1/8 of fungal samples were used to construct networks with default parameters. SparCC was used to calculate correlations between ASVs with 100 permutations, and correlations with |coefficient| > 0.3 and p < 0.05 were incorporated into the network. Each node indicates a specific ASV, and each edge represents a significant correlation between ASVs. Topological characteristics of the network, such as the number of nodes and edges, proportion of positive edges, average degree, average path length, density, clustering coefficient, betweenness centralization, degree centralization, and modularity, were calculated to characterize network complexity and stability (Hernandez et al.) using the igraph package. Based on the within-module connectivity (Zi) and among-module connectivity (Pi), the network nodes were identified as peripherals (Zi < 2.5 and Pi < 0.62), network hubs (Zi ≥ 2.5 and Pi ≥ 0.62), module hubs (Zi ≥ 2.5 and Pi < 0.62), and connectors (Zi < 2.5 and Pi ≥ 0.62) (Strogatz). Network hubs, module hubs, and connectors were considered keystone species of the networks (Shi et al.).
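A minimal R sketch of this network workflow is given below, assuming a samples-by-ASVs count matrix named asv_counts for one plant group; it computes SparCC correlations with SpiecEasi, applies the |coefficient| > 0.3 filter (the permutation-based significance test is omitted for brevity), and derives several of the listed topological properties with igraph.

```r
# Sketch of SparCC network construction and topology metrics (assumed inputs)
library(SpiecEasi)
library(igraph)

sp      <- sparcc(asv_counts)              # asv_counts: samples x ASVs counts
cor_mat <- sp$Cor
rownames(cor_mat) <- colnames(cor_mat) <- colnames(asv_counts)

# Keep strong correlations only; p-values would additionally be obtained by
# bootstrapping (e.g., sparccboot/pval.sparccboot) before this filtering step
adj <- cor_mat
adj[abs(adj) <= 0.3] <- 0
diag(adj) <- 0

g <- graph_from_adjacency_matrix(adj, mode = "undirected", weighted = TRUE, diag = FALSE)
g <- delete_vertices(g, which(degree(g) == 0))

g_abs <- g; E(g_abs)$weight <- abs(E(g_abs)$weight)  # non-negative weights for module detection
g_uw  <- delete_edge_attr(g, "weight")               # unweighted copy for path-based metrics

topology <- list(
  nodes        = vcount(g),
  edges        = ecount(g),
  pos_edges    = mean(E(g)$weight > 0),
  avg_degree   = mean(degree(g)),
  avg_path_len = mean_distance(g_uw),
  density      = edge_density(g),
  clustering   = transitivity(g, type = "global"),
  modularity   = modularity(cluster_louvain(g_abs))
)
```

Node roles (peripherals, connectors, module hubs, network hubs) would then be derived from the module assignments by computing Zi and Pi for each node and applying the thresholds above.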
Alpha diversity indexes, including the Shannon index (H), species richness (S), and Pielou’s evenness (J), were calculated using the vegan package (Oksanen et al. ) in R (version 4.0.0). Differences between treatments (different compartments, healthy/infected plants, different field types) were tested using analysis of variance (ANOVA). Principal coordinates analysis (PCoA) based on Bray–Curtis distance was calculated and visualized using the ggplot2 package. Permutational multivariate analysis of variance (PERMANOVA) statistical tests were performed to determine the effects of different factors on microbial community diversity by “ adonis ” in vegan R package, with 999 permutations. PERMANOVA was also used to test the effects of different factors on community structures. Three different complementary non-parametric analyses were used to test microbial community dissimilarity (Zhou et al. ), including an analysis of similarity (ANOSIM) (Clarke ), a multiresponse permutation procedure (MRPP), and a permutational multivariate analysis of variance (Adonis) (Anderson ). The Venn diagram was plotted to exhibit the ASV distribution using the VennDiagram R package. Spearman’s correlation analysis between soil properties and microbial genera and orders was conducted and visualized in a heatmap using the vegan and pheatmap R packages. Partial least squares path model (PLS-PM) was performed to decouple the effects of soil properties and PVY on microbial communities using the plspm R package (Latan et al. ; Jiang et al. ). Comparison of bacterial composition at the genus level and fungal composition at the order level was conducted using Student’s t -test in STAMP (Parks et al. ; Gu et al. ).
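As an illustration of how these diversity and dissimilarity analyses fit together in R, the sketch below uses the vegan package; the object names (asv_tab, meta) and the exact PERMANOVA formula are assumptions for demonstration rather than the study's actual scripts.

```r
# Alpha diversity, Bray-Curtis ordination, and PERMANOVA with vegan (illustrative)
library(vegan)

H <- diversity(asv_tab, index = "shannon")   # Shannon index (asv_tab: samples x ASVs)
S <- specnumber(asv_tab)                     # species (ASV) richness
J <- H / log(S)                              # Pielou's evenness

bray <- vegdist(asv_tab, method = "bray")    # Bray-Curtis dissimilarities
pcoa <- cmdscale(bray, k = 2, eig = TRUE)    # principal coordinates analysis

# PERMANOVA partitioning variation by compartment, PVY status, and field type
permanova <- adonis2(bray ~ compartment + pvy + field, data = meta, permutations = 999)

# Complementary non-parametric tests of community dissimilarity
anosim_res <- anosim(bray, grouping = meta$pvy, permutations = 999)
mrpp_res   <- mrpp(bray, grouping = meta$pvy, permutations = 999)
```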
Soil physical and chemical properties in flooded and upland paddy fields
PVY affected the plant microbial community diversities
PVY shifted the plant microbial community compositions
PVY affected plant bacterial and fungal networks

To investigate how PVY affected the plant microbiome, bacterial and fungal networks of both healthy and infected plants were constructed (Fig. a). The results showed that bacterial networks had a higher number of nodes and edges, modularity, and positive edge proportions (nodes/edges/modularity/proportion, 389/3647/0.452/84.45% in healthy plants, and 496/6133/0.402/80.14% in infected plants) than fungal networks (nodes/edges/modularity/proportion, 83/142/0.192/67.61% in healthy plants, and 100/195/0.275/70.26% in infected plants) (Fig. b). Further, there were 24, 35, 6, and 4 keystones (including module hubs, network hubs, and connectors) identified in the HB, IB, HF, and IF networks, respectively (Fig. c). However, all the keystones in fungal networks were connectors. Compared to the healthy plant networks, the number of keystones increased in the infected plants' bacterial network and decreased in the infected plants' fungal network. Keystones were primarily positively correlated with other nodes but showed more negative correlations in fungal networks (Fig. d). Furthermore, several bacterial taxa, such as Flavobacterium, Pseudomonas, and Sphingobacterium, enriched in the diseased plants, were also identified as keystones in the IB network (Table ). For fungal communities, the keystones were classified into the order Cantharellales in the HF network, while they were classified into the orders Sordariales and Hypocreales in the IF network (Table ). In addition, bacterial and fungal networks in the infected plants were more complex (in terms of the number of nodes, edges, average degree, and average path length) than those in the healthy plants.
Field and rhizosphere soil physical and chemical properties showed significant differences between flooded and upland paddy fields (Table ). WC, pH, AHN, and TK were significantly higher in flooded fields than in upland paddy fields (ANOVA, p < 0.05). Additionally, soil properties also changed significantly in bulk soil, healthy plant rhizosphere soil, and infected plant rhizosphere soil. In flooded fields, soil physical and chemical properties were higher in rhizosphere soil than in bulk soil, but there was no obvious difference between healthy and infected plant rhizosphere soil. In upland paddy fields, the content of AK was significantly higher in rhizosphere soil than in bulk soil, and TP and AP were obviously higher in infected plant rhizosphere soil than in bulk soil. These results suggested that field type and plant were major factors in changing soil properties, and plant health state also played a regulatory role in soil properties.
In total, 32,339 bacterial ASVs and 3611 fungal ASVs were observed from 108 samples. To determine how different factors shape the plant microbiome, the relative contributions of compartment (bulk soil, rhizosphere soil, root, stem, and leaf), PVY (healthy or infected), and field type (flooded field or paddy field) to bacterial and fungal community variation were assessed. Microbial community diversity indexes, including the Shannon index (H) and Pielou's evenness (J), were calculated (Table , ). ANOVA suggested that bacterial community diversity (H and J) in rhizosphere soil was significantly higher than that of the endophytic bacterial communities, while endophytic bacterial community diversity showed no significant difference between healthy and infected plants. For fungal communities, there was no significant difference between treatments. According to PERMANOVA, PVY and compartment exerted significant effects on bacterial and fungal community diversity indices (including H, S, and J values) (p < 0.05) (Table ). Moreover, the effects of PVY and compartment were higher on bacterial communities than on fungal communities, as indicated by higher R2 values. In addition, the species richness of bacterial and fungal communities was influenced by the combination of field type and PVY. Significance tests based on MRPP, ANOSIM, and ADONIS were performed to analyze changes in microbial community structures using Bray–Curtis distance. The results suggested that disease infection caused significant changes in the endophytic bacterial community structure of plant stems and leaves in flooded fields. PERMANOVA analysis further revealed that the greatest effect on total microbial communities was exerted by habitat (R2 = 0.244 for the bacterial community and R2 = 0.272 for the fungal community, p < 0.001), followed by PVY (R2 = 0.049 for the bacterial community and R2 = 0.095 for the fungal community, p < 0.001) and field type (R2 = 0.035 for the bacterial community and R2 = 0.032 for the fungal community, p < 0.001) (Table ). Habitat and PVY explained a higher proportion of the variation in fungal community structure than of bacterial community structure.
The distributions of ASVs among bulk soil, rhizosphere soil, root, stem, and leaf were described by a Venn diagram (Fig. a). The results indicated that the total number of ASVs in rhizosphere and bulk soils was much higher than that in plant roots, stems, and leaves. For bacterial communities, the numbers of ASVs showed an obvious decreasing trend with the ascending migration from underground to aboveground. Specifically, rhizosphere bacteria had the highest numbers (10,144 for FZ and 109,751 for PZ), followed by endophytic bacterial communities in roots (2412 for FR and 1675 for PR), stems (1768 for FS and 1830 for PS), and leaves (771 for FL and 906 for PL). Moreover, the number and abundance of ASVs shared by field and rhizosphere soils also decreased with the ascending migration in flooded fields (FC 2867/73.55%, FZ 2867/68.45%, FR 214/34.31%, FS 80/6.60%, FL 17/1.60%) and paddy fields (PC 3526/80.24%, PZ 3526/73.84%, PR 151/34.18%, PS 87/37.92%, PL 29/2.40%). For fungal communities, the numbers of ASVs shared by field and rhizosphere continued to decrease with the ascending migration, while their abundance in endophytic communities (88.66–93.54% in flooded fields and 84.81–86.14% in paddy fields) negligibly differed from that in soil fungal communities. Community composition analysis found that Proteobacteria, Actinobacteria, Bacteroidetes, Firmicutes, and Chloroflexi were the dominant phyla for rhizosphere and endophytic bacterial communities (Fig. b, c). Student's t-test analysis between healthy and infected plant bacterial communities showed that PVY infection significantly altered the bacterial communities of rhizosphere soil, plant roots, plant stems, and plant leaves at the genus level. Specifically, 15 genera were significantly changed in the rhizosphere bacterial community, and the relative abundance of Flavitalea, Myxococcaceae, Constrictibacter, Lentimicrobium, Neochlamydia, Marinibaculum, and Arenimicrobium was significantly higher in healthy plants. In the root endophytic bacterial community, five genera significantly decreased under infection. In the stem endophytic bacterial community, seven genera were significantly changed, with Enterobacter enriched in the healthy plants. In the leaf endophytic bacterial community, three genera showed significant changes, with Sulfophobococcus enriched in healthy plants. In summary, PVY infection changed the bacterial communities of rhizosphere soil, plant roots, plant stems, and plant leaves at the genus level, and the number of affected genera decreased with ascending migration. The dominant fungi were Ascomycota, Basidiomycota, Chytridiomycota, Glomeromycota, and Zygomycota in the different treatments. In contrast to the bacterial communities, t-test analysis revealed a different pattern for the effect of infection on fungal community composition. In the rhizosphere fungal community, the relative abundance of two orders significantly changed, with Paraglomerales being enriched in the healthy plants. Similarly, in the root endophytic fungal community, two orders also showed significant changes, with Mortierellales being enriched in healthy plants.

Contributions of biotic and abiotic factors to the microbial community under PVY infection

In order to reveal the mechanism by which environmental factors regulate the core taxa of the microbial community, a Spearman correlation analysis was conducted (Fig. ). The results suggested that water content and pH played significant roles in the bacterial community.
In addition, the core genera of the rhizosphere and endophytic bacterial communities showed significant relationships with rhizosphere soil water content (Fig. a), while soil pH showed significant correlations with the endophytic bacterial community, especially in root tissue. Interestingly, nutrient contents including AP, TP, and TK were negatively correlated with Arenimicrobium, which was a core genus in the rhizosphere bacterial community. Spearman correlation analysis was also performed between the seven core orders in fungal communities and environmental factors (Fig. b). Similar to the results for bacterial communities, water content and pH were also critical environmental factors for fungal communities. However, the water content was positively correlated with Mortierellales and negatively correlated with Glomeromycota and Rhizophydiales. Moreover, they were all core orders of the rhizosphere fungal community. The value of pH showed a significant correlation with the core orders of the rhizosphere and endophytic fungal communities. As discussed above, multiple parameters (i.e., environmental factors and PVY) affected the variation of the microbial community. To better integrate the complex interrelationships between PVY, soil properties, and specific bacterial and fungal groups, a PLS-PM model was constructed (Fig. c). After model optimization, the GoF value was 0.520. The results suggested that factors including PVY, pH, and WC explained a higher proportion of variation in the bacterial community (R2 = 0.657) than in the fungal community (R2 = 0.623). PVY showed positive and negative direct effects on the bacterial (0.604) and fungal (− 0.757) communities, respectively. In addition, pH positively regulated the bacterial community (0.418), and WC had no direct effect on either the bacterial or the fungal community. However, WC positively affected pH (0.649), which meant that WC would indirectly affect the bacterial community. Overall, PVY showed a larger effect on bacterial communities than on fungal communities, and soil properties also affected bacterial and fungal communities.
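As an illustration of how such a model can be specified with the plspm package, the sketch below encodes the inner paths described here (WC → pH; WC, pH, and PVY → bacterial and fungal communities); the indicator variables assigned to each latent block are hypothetical stand-ins for the measured soil properties and community summary variables, not the study's actual inputs.

```r
# Hedged PLS-PM sketch with the plspm package (blocks and column names are assumptions)
library(plspm)

# Inner model: lower-triangular matrix of hypothesized directed effects
path <- rbind(
  WC       = c(0, 0, 0, 0, 0),
  pH       = c(1, 0, 0, 0, 0),   # WC -> pH
  PVY      = c(0, 0, 0, 0, 0),
  Bacteria = c(1, 1, 1, 0, 0),   # WC, pH, PVY -> bacterial community
  Fungi    = c(1, 1, 1, 0, 0)    # WC, pH, PVY -> fungal community
)
colnames(path) <- rownames(path)

# Outer model: columns of 'dat' assigned to each latent variable (hypothetical names)
blocks <- list(WC = "wc", pH = "ph", PVY = "pvy_status",
               Bacteria = c("bact_pcoa1", "bact_shannon"),
               Fungi    = c("fung_pcoa1", "fung_shannon"))
modes <- rep("A", 5)              # reflective measurement mode

fit <- plspm(dat, path, blocks, modes = modes, boot.val = TRUE, br = 999)
fit$gof            # goodness-of-fit
fit$path_coefs     # direct effects
fit$effects        # direct, indirect, and total effects
```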
In this study, we sought to investigate the effects of PVY on plant microbiomes using amplicon sequencing approaches. By profiling both bacterial and fungal communities in below- and aboveground compartments of healthy and PVY-infected plants, we revealed that bacterial networks were more complex and their communities were more sensitive to PVY than fungal communities. pH and WC, as key soil properties, played important roles in shaping microbial community composition. Moreover, our work found that bacterial and fungal communities in plant organs had different recruitment strategies. Through this work, we have provided evidence that PVY infection not only changed the diversity and composition of microbial communities but also influenced their networks. Below, we discuss how these findings advance our understanding of disease-induced changes in plant microbial communities and ecological networks.

Bacterial and fungal communities showed different recruitment strategies for plant organs
Bacterial communities were more sensitive to PVY than fungal communities
pH and WC drove microbial community assembly under PVY infection

Uncovering how microbes are recruited to plant endophytic communities is of great importance for understanding plant microbial assembly during disease infection. Accumulating studies on wheat, sugar beet, and Arabidopsis thaliana have found that plant roots can attract beneficial species to resist pathogen infection, which is known as the “cry for help” strategy (Berendsen et al.; Carrión et al.; Yin et al.). Plants are hosts of complex endophytic microbial communities, which can colonize both below- and aboveground tissues. Endophytes can be recruited from the surrounding environment horizontally or transmitted through seeds vertically. Colonization of endophytes in roots from soil is the most important transmission route (Frank et al.). Rhizosphere microbial communities are recruited by plants based on carbon sources, phytochemicals, pH, oxygen, and root exudates, which act as a selective barrier around plant roots. However, microbial communities had higher ASV numbers in rhizosphere soil than in bulk soil in our study. We speculate that the complex substances (e.g., amino acids, sugars, organic acids, mucilage, and proteins) secreted from plant roots were not only used to select specific taxa but also increased soil nutrition in the rhizoplane, such as carbon sources, which would help to increase rhizosphere microbial community diversity (Zhang et al.). With the selection pressure from plants, the microbial community drastically decreased in plant tissues compared to that in soils. Interestingly, the ASV numbers of endophytic bacterial communities in root, stem, and leaf were gradually reduced, as were the number and abundance of ASVs shared with rhizosphere and bulk soils; however, the number and abundance of ASVs showed no obvious changes in endophytic fungal communities. The different transmission strategies of bacterial and fungal endophytes may be explained by their different sources. Endophytic fungi are usually derived from aerial fungal spores, while most endophytic bacteria are derived from rhizosphere soil (Wang et al.). In addition, changes in field type had no influence on the transmission strategies of bacterial and fungal communities. Understanding the keystone species through the analysis of network hubs and their associations with other species is crucial for leveraging the plant microbiome to improve plant growth and health (Gao et al.).
This study identified Flavobacterium , Pseudomonas , and Sphingobacterium as potential beneficial bacteria and network keystones in plant microbiomes, which were enriched in diseased plants. Prior research has demonstrated the presence of numerous members from the Flavobacterium , Pseudomonas , and Sphingobacterium genera in various plant compartments, highlighting their significant role in influencing host performance, especially in plant pathogens (Etesami and Adl ). For instance, Pseudomonas emerges as the prevailing taxon among plant-beneficial bacteria, playing a crucial role in safeguarding plants against pathogens (Yu et al. ). Moreover, network keystones assume pivotal topological positions and may be deployed to organize favorable plant microbiomes. The convergence observed between biomarker taxa and network keystones also implies that certain bacterial taxa recruited by diseased plants may function as keystone species within plant microbiomes, thereby ensuring the perpetuation of future generations.
Cooperative and competitive interactions between microbial species and network topological properties played important roles in community stability. In this study, bacterial networks, including healthy and infected plants, were characterized by a higher proportion of positive correlations than those in fungal networks. Positive correlations in networks mean ecological cooperation between species, which creates dependency and potential for mutual downfall (Coyte et al. ). Thus, a higher proportion of positive correlations in bacterial networks indicated lower microbiome stability compared to fungal networks. Higher competition in fungal networks would provide more resistance to external stress, such as disease infection (Wagg et al. ). In contrast to fungal communities, bacterial communities were more affected by PVY, due to increased negative correlations between bacterial species in PVY-infected networks compared to healthy networks. In addition, higher modularity and complexity (as indicated by the number of nodes, edges, average degree, and average path length) were observed in healthy bacterial networks, further exacerbating the destabilizing effect. Higher modularity means higher prevalence of cross-module correlations among taxa (Grilli et al. ). Network complexity is strongly correlated with network stability, which supports the central ecological theory that complexity leads to stability (Yuan et al. ). These findings suggested that bacterial communities were more sensitive to PVY than fungal communities, and infection increased community stability. Studies have found that the bacterial community was less stable than the fungal community under environmental disturbances, such as drought stress (de Vries et al. 2018b) and manure application (Wang et al. ). However, a previous study reported that the fungal community was more sensitive to Fusarium wilt disease (FWD) than the bacterial community (Gao et al. ). The possible reason for this discrepancy was that the networks were based on plant roots, stems, and fruits, where the disease affected the microbial community of reproductive organs less than vegetative organs. In our study, however, the network included soil, root, stem, and leaf samples, which caused the contrasting results. Our study indicated that PVY increased bacterial network stability and decreased fungal network stability. The contrasting pattern between bacterial and fungal networks was observed based on network complexity properties such as average degree and average path length. Previous studies have reported that network complexity and keystones are both important for community stability (Toju et al. ; Wagg et al. ). PVY decreased the number of connectors in fungal networks, but increased the number of module hubs in bacterial networks. Moreover, PVY increased competition between keystones and other species. Bacterial networks showed higher modularity, and more competition was established between species within the module responding to disease infection. Conversely, the correlation between modules of fungal networks was decreased, and competition between connectors and other species was increased. Indeed, the negative effect of V. dahliae (a kind of soilborne pathogen) has been demonstrated to be alleviated by decreasing the correlations between modules (Rybakova et al. ). In this study, samples included variations in field type, PVY, and compartment. 
Analysis of bacterial and fungal community diversity found that PVY and compartment had a higher impact on the fungal community, while field type showed higher effects on the bacterial community. Different field types implied different soil characteristics (e.g., WC, pH, AHN, and TK), which affected microbial community structure and diversity. Microbial communities in different compartments of a plant showed a high degree of organ specificity, with different selective pressures (Guevara-Araya et al. ). PVY has also been found to affect the assembly of microbial community, as a typical plant disease (Chowdhury et al. ; Gao et al. ). In addition, bacteria and fungi had different body sizes (Gu et al. ), diversity, dispersal potential, ecological function (Gao et al. ), and correlation with host and other microorganisms, ultimately affecting the species sorting and community assembly. Higher effects of field type, PVY, and compartment on bacterial community diversity and structure meant that the bacterial community was more sensitive to environmental factors compared to fungal community.
Our results showed that bacterial and fungal community composition was significantly different between healthy and infected plants, with plant growth–promoting microbes enriched in rhizosphere soil, plant root, stem, and leaf. Previous studies have shown that plants lacking genetic resistance to pathogens would enrich particular microbes to obtain pathogen suppression (Santhanam et al. ). For bacterial communities, Flavitalea , Myxococcaceae , Constrictibacter , Lentimicrobium , Neochlamydia , Marinibaculum , Arenimicrobium , Enterobacter , and Sulfophobococcus were enriched in healthy plants. It has been reported that Myxococcaceae could produce antagonistic enzymes and secondary metabolites that maintain plant health (Dror et al. ), and Enterobacter was an endophytic plant growth–promoting bacterium (Taghavi et al. ). For fungal communities, Paraglomerales , Mortierellales , Glomerales , and Rhizophydiales were also enriched in healthy plants. Paraglomerales belonged to arbuscular mycorrhizal fungi, which provided essential nutrients to plants and improved plant health and production (Bano and Uzair ). Soil properties, especially for pH and water content, have been shown to play an important role in changes in microbial composition. Previous studies have found that the infection of plant pathogens could be significantly and directly regulated by soil properties, including pH (Li et al. ), temperature, water content (Jiang et al. ), and resource availability (Yang et al. ). Soil pH has been reported to determine the colonization of plant pathogens by impacting the specific microbial groups (Liu et al. ). Water content of soil could affect the nutrient availability, determining the microbial community composition by increasing Gram-negative bacteria but decreasing Gram-positive bacteria and fungi (Chen et al. ). In our study, the water content showed negative and direct effects on bacterial and fungal communities, and pH had a positive influence on fungal communities directly. Considering the correlations between bacteria and fungi, the bacterial community was also indirectly affected by pH. In this study, we found that bacterial and fungal communities showed different recruitment strategies in plant organs. The number and abundance of shared bacterial ASVs in bulk and rhizosphere soils decreased with ascending migration from below- to aboveground compartments, while the number and abundance of fungal ASVs showed no obvious changes. Field type, plant compartments, and PVY infection all affected microbial community diversity and structures, except for field type on microbial community diversity. In addition, rhizosphere soil pH and WC drove bacterial and fungal community assembly processes directly under PVY infection. Analysis of microbial networks indicated that bacterial communities were more sensitive to PVY than fungal communities, as evidenced by lower network stability of bacterial communities due to a higher proportion of positive edges. PVY infection further increased bacterial network stability, and decreased fungal network stability |
The State of the Art of eHealth Self-Management Interventions for People With Chronic Obstructive Pulmonary Disease: Scoping Review | 778571cb-0240-4fb9-995d-b024a1fb3435 | 11933764 | Medicine[mh] | Background Objectives To summarize, little is known about the actual content and design of self-management eHealth interventions for people with COPD. Therefore, this scoping review aimed to investigate the current state-of-the-art eHealth interventions for COPD self-management and identify potential gaps in the literature, which may provide insight into or serve as inspiration for the development of future eHealth self-management interventions. “State of the art” within the context of this study can be defined as follows: “The collection of all underlying components that form the basis for the eHealth self-management interventions for people with COPD.” shows how the different parts of this review contribute to an overall picture of the current literature and highlights the specific aspects explored in this review. More specifically, we aimed to unravel the state of the art of eHealth self-management interventions by using the following subquestions: What is the “e” in eHealth self-management? What is the “health” in eHealth self-management? Who is the “self” in self-management? What is the “management” in eHealth self-management?
Chronic obstructive pulmonary disease (COPD) is a common disabling lung condition characterized by chronic respiratory symptoms that cause persistent, mostly progressive airflow limitations . It is one of the major issues of public health, and its prevalence, mortality, and morbidity are increasing [ - ]. COPD was listed as the third leading cause of death worldwide in 2019 , and it is estimated that by 2040, deaths from COPD will rise to 4.4 million per year . In addition, people with a lower socioeconomic status are at increased risk of developing COPD . Although COPD may, in some cases, be the result of a genetic risk factor, it is, in most cases, caused by exposure to tobacco smoking and the inhalation of toxic particles and gases from indoor and outdoor air pollution . People with COPD often experience symptoms such as dyspnea, fatigue, chest tightness, activity limitation, and cough that may be accompanied by sputum production . Furthermore, they may experience acute events, known as exacerbations, which can lead to hospitalization. Although COPD is chronic and thus not curable, it is, however, treatable, and disease progression is preventable . Therefore, the treatment of COPD often focuses on reducing symptoms and future risks of exacerbation with the use of pharmacological and nonpharmacological therapies (eg, inhaler use, vaccinations, smoking cessation, and self-management) . Given its chronic nature and the impact of the disease on all facets of one’s life, an important aspect of treating COPD and secondary prevention is chronic disease management. An essential component of chronic disease management is self-management . Owing to the variation in the literature regarding the definition of self-management in COPD, this paper defines self-management as follows: “The ability of an individual to manage one’s symptoms, treatment, physical, social, and emotional consequences, and lifestyle changes. It includes means of empowerment, educating oneself, being autonomous, learning and adapting to new behaviors, acceptance, and adapting to a new balance in life.” It requires patients to take an involved and responsible role in their health, with the aim of becoming active participants . Self-management interventions or programs are developed to help patients engage in self-management, and their effectiveness is investigated in research. Self-management interventions or programs are shown to have positive effects, for example, in supporting patients to develop and improve their self-management skills and disease knowledge [ - ]. Camus-García et al found that self-management interventions may improve clinical outcomes in COPD (eg, improvements in health-related quality of life) and lower the probability of hospital admissions. The actual content of such self-management intervention programs for COPD is diverse , and it remains unclear which specific elements are essential for designing a successful program. In the following sections, some elements that can be considered when designing an intervention program will be briefly described: content for COPD self-management, processes of self-management, and behavior change techniques (BCTs). The diversity of content may be explained by the numerous objectives and end points of self-management intervention programs . Interventions focus on acute exacerbation management and admission avoidance by incorporating exacerbation action plans and often also include education, exercise training, and breathing strategies . 
However, research suggests that intervention programs that only include education or action plans alone may not result in behavioral change, increased patient confidence, or the acquisition of new skills that patients learn or practice . Besides the content of self-management intervention programs, the design of the intervention program should also reflect that self-management consists of different processes. Schulman-Green et al identified different self-management processes for chronic illnesses, such as “learning,’” “taking ownership of health needs,” and “performing health promotion activities.” All processes are divided into specific self-management tasks (eg, “learning about condition and health needs” and “changing behavior to minimize health impact”) and skills (eg, “acquiring information” and “reducing stress”) . Schulman-Green et al concluded that the identification of such processes may help support and guide future self-management intervention programs. They also demonstrated that these various processes should be viewed within the broader context, as their significance to patients may vary depending on where they are in their patient journey . Therefore, more knowledge about such self-management processes within self-management eHealth intervention programs is needed to support the development of such interventions. Self-management interventions may also aim to change a certain behavior of the patient, so the incorporation of BCTs can be beneficial to designing a successful intervention program. A BCT is “a specific observable, replicable, and irreducible component of an intervention program designed to alter or redirect causal processes that regulate behavior” and can be included in the design of any type of self-management intervention program. By adding these “active ingredients” (eg, “feedback” and “self-monitoring”), chances for achieving behavioral change may be increased . Thus, combining self-management processes and BCTs in intervention programs may lead to positive results for one’s self-management. However, to the best of our knowledge, no research is dedicated to investigating the presence of BCTs and self-management processes in current self-management interventions for COPD. One way to support people with COPD in engaging in self-management is through the use of eHealth interventions. eHealth interventions can be defined as “An eHealth technology specifically focused on intervening in an existing context by changing behaviors and/or cognitions” . eHealth interventions to support self-management may help people with chronic diseases become more independent and empowered by, for example, gaining knowledge about their disease, monitoring and reporting daily symptoms, and learning specific self-management skills [ - ]. Therefore, the use of eHealth interventions in COPD care represents a promising way of delivering health services, such as support in self-management . In the current literature, a diverse range of eHealth interventions aim to support patients in their self-management, and these are increasingly provided to support patients in health communication, self-monitoring, and their medical treatment . Available literature revealed that current eHealth interventions for COPD mainly focused on COPD care, education, smoking cessation, medication adherence, exercise, diet, and symptom management . This indicates a tendency toward managing the physical aspect of COPD in self-management eHealth interventions. 
However, the physical aspect of one’s disease is only 1 dimension of the positive health paradigm. As conceptualized by Huber et al , “Health includes the ability to adapt, and self manage in the face of social, physical, and emotional challenges,” also referred to as “positive health.” Huber et al stated that positive health as a concept has several important health indicators, categorized into 6 dimensions: “bodily functions,” “mental well-being,” “meaningfulness,” “quality of life,” “social participation,” and “daily functioning” . They stressed the fact that paying attention to these indicators could support shared decision-making and bridge the gap between health care and the social context. Therefore, these dimensions are all important to consider when self-managing one’s disease. However, no research is available regarding the extent of positive health dimensions addressed in current self-management eHealth interventions for COPD. Furthermore, using eHealth to support people with COPD might entail some challenges, as low health literacy is prevalent among people with COPD . In addition, moderate levels of self-reported eHealth literacy are common among people with COPD . Some studies revealed that people with COPD experienced technical barriers when using eHealth interventions for self-management . However, Williams et al indicated a few technical issues experienced by people with COPD when using eHealth to support self-management, leading to uncertainty about whether such eHealth technologies are suitable for the whole COPD population. Although some information about eHealth use for this population is available [ - , , - ], little research is dedicated to investigating whether current eHealth interventions account for the wider population of people with COPD (such as those with eHealth literacy). Therefore, it should be investigated whether there is a difference between the intended population that eHealth interventions aim to target and the eventual included population in those studies. As knowledge and new insights derived from those studies often serve as a starting point for future work, it can be very valuable to look into the representation of the COPD population within studies.
Overview
Search Strategy
Study Selection Procedure

The screening was performed on June 3, 2022, supported by the web-based software Rayyan.ai (Rayyan Systems Inc). To screen articles for title and abstract, both reviewers (EtB and RV) adhered to the eligibility criteria that were discussed before the start of the screening. One reviewer (EtB) screened all articles for title and abstract. The second reviewer (RV) screened 20% (118/588) of the titles and abstracts of those studies. After this first screening, a discussion took place to compare discrepancies and come to a consensus between reviewers (EtB and RV). Both reviewers had previous experience with performing a systematic review. After this first screening, it was necessary to revise and clarify some of the inclusion criteria to arrive at a satisfactory level of agreement between reviewers. This means that with the use of the revised inclusion criteria, clear and substantiated decisions could be made on whether to include a certain article in this review. For the full-text screening, the same process was applied. In this screening, the level of agreement between reviewers was satisfactory. Reasons for excluding articles during the full-text screening were recorded. Before extracting the data, a data extraction form was developed, discussed, and agreed upon with 3 authors (EtB, CG, and MT). This form was piloted after the full-text screening to reduce errors during data extraction. Data extraction was performed by the first author (EtB) using Atlas.ti (version 9.1.7.0; Lumivero), based on the data extraction form. Some of the data to answer the subquestions were directly extracted (eg, type of study and year of study), some data were the result of an assessment or categorization by the reviewers (eg, positive health dimensions), and some data were a combination of both (eg, BCTs). After the extraction, data were clustered and charted in various ways (eg, bar charts, tables, and descriptive presentations). Finally, charted data were scrutinized and synthesized by 1 reviewer (EtB) before discussing the results with 2 authors (CG and MT). Thereafter, results were written down to answer the proposed research questions. An overview of how the articles were extracted and charted is provided in the accompanying materials.
A scoping review was performed to investigate the currently available literature. According to Munn et al , a scoping review is an ideal tool for providing, for example, an overview of existing literature, identifying key characteristics, and highlighting knowledge gaps, among others. Hence, this was deemed the most suitable method for answering the proposed research questions. Parts of the PRISMA-ScR (Preferred Reporting Items For Systematic Reviews and Meta-Analyses extension for Scoping Reviews) protocol (items 1-7, 9-11, and 13-21), as proposed by Tricco et al , were followed and tailored to this study to ensure a systematic approach for answering the research questions. The PRISMA-ScR checklist is provided in . The protocol for this review was not published.
The first reviewer (EtB) was responsible for identifying relevant articles in the databases PubMed, Scopus, PsycINFO, and Wiley. Combinations of the search terms “self-management,” “COPD,” and “eHealth” were used to generate the search string. The search string that was used for this review is provided in .
Studies were considered eligible if they were original research and portrayed an eHealth intervention supporting the self-management of COPD. The eHealth self-management intervention should actively involve and engage individuals with COPD, ensuring they experience personal benefits from their self-management efforts, supported and encouraged by the intervention. Per definition, self-management aims for patients to become active participants in their care, which means that patient involvement within self-management eHealth interventions is essential to be able to fulfill an active role. Therefore, we did not consider that eHealth interventions actually support self-management when patients themselves are not involved (eg, if the interaction with the eHealth service is limited to collecting data). Furthermore, articles needed to be published between January 1, 2012, and June 1, 2022. As of 2012, eHealth technologies were emerging, and their relevance for future health care appeared to be promising. For example, national efforts to implement eHealth in current care were presented in 2012 in the Netherlands. The complete list of assessment and eligibility criteria is presented below.

Assessment and eligibility criteria for studies:

Concept: Studies describing an eHealth intervention supporting the self-management of chronic obstructive pulmonary disease (COPD) were included. Studies not fulfilling the inclusion criteria were excluded.

Population: Studies involving adults aged ≥18 years diagnosed with COPD (and other chronic conditions provided that the eHealth technology has a dedicated part toward COPD) were included. Studies using general terms such as "older adults," "rural patients," or "communities" or referencing unspecified multimorbidity or chronic conditions without any clarification about the population were excluded.

eHealth technology: Studies where eHealth technologies were used to support people with COPD in engaging in self-management, involving patients in their intervention, were included if they used at least 1 self-management process, as defined by Schulman-Green et al, and, in the case of sole monitoring, where patients were able to see their data. Studies collecting data solely for research purposes to train machine learning or artificial intelligence algorithms without any further patient engagement were excluded.

Study design: Original research studies were included. Reviews, protocols, abstracts, letters, conference proceedings, commentaries, notes, short surveys, and errata were excluded.

Language: Studies in English were included. Studies not fulfilling the inclusion criteria were excluded.

Year of publication: Studies published between January 1, 2012, and June 1, 2022, were included. Studies not fulfilling the inclusion criteria were excluded.
Search Results
Study Characteristics
The e in eHealth
The Health in eHealth Technologies for Self-Management
The Self in Self-Management
The Management in Self-Management

This section describes which self-management processes and BCTs were found within the different eHealth technologies. Details about this section are provided in the accompanying overviews of self-management processes and BCTs.

Self-Management Processes
BCTs Used

The inclusion of studies proceeded as follows. A total of 893 articles were identified during the initial search, of which 305 (35.1%) duplicates were removed, and the remaining 588 (65.8%) articles were screened on title and abstract; this screening phase resulted in 189 (32.1%) articles that could be assessed for full text. After full-text screening of 189 articles, 88 articles (46.6%) were excluded, resulting in 101 articles (53.4%) being included in this scoping review.
The included papers represented 101 unique studies. Most articles (18/101, 17.8%) were published in 2021, followed by 2020 (15/101, 14.8%) and 2017 (13/101, 12.9%). The most common study types were either randomized controlled trials (18/101, 17.8%) or (prospective) pilot studies (16/101, 15.8%).
This section focuses on the functionality, modality, technology readiness level (TRL), and eHealth development details of the used technologies. Details about the functionality and modality of the eHealth interventions are provided in the accompanying materials. Of the 101 included studies, 76 (75.2%) mentioned the name of their eHealth technologies. Some articles reported on studies using the same eHealth technologies (eg, "EDGE," "It's Life!," "MasterYourBreath," and "COMET"). In some cases, studies using the same eHealth technology together portrayed the process of developing, testing, and evaluating an eHealth intervention. A total of 50 unique eHealth technologies were found in this review. Most articles (91/101, 90.1%) included self-monitoring (eg, monitoring of symptoms) as a function of their technology. Of the 101 articles, 69 (68.3%) included the function of educating or informing (eg, education on COPD) and 27 (26.7%) supported communication (eg, eConsults with HCPs and peer-to-peer support chats). Most articles (68/101, 67.3%) included >1 function within their technology. A (smart) measurement device (eg, wearable or monitoring system) was the most common (39/101, 38.6%) modality used in the studies, followed by a smartphone (27/101, 26.7%) and tablet (25/101, 24.7%). If studies used >1 device, the most common combination was a (smart) measurement device with a tablet (19/101, 18.8%) or smartphone (8/101, 7.9%). This review found no article that explicitly stated its TRL. According to our assessment and categorization, 47 eHealth technologies in the articles were assessed to be in the development phase (TRL 4 to TRL 6), 53 in the deployment phase (TRL 7 to TRL 9), and 0 in the research phase (TRL 1 to TRL 3). Details about the eHealth development process showed that only 14 (13.9%) out of 101 studies explicitly mentioned using either a user-centered design, participatory design, scenario-based methods, reflective lifeworld research, or action research approach. Furthermore, of the 101 studies, 18 (17.8%) reported details about the theories on which their self-management intervention was based. Some of these were targeted toward BCTs independent of technology use, while others were technology related and more targeted toward technological adoption or persuasive design. The various theories identified as underlying the eHealth interventions could be divided into the following categories: behavioral change, technological adoption or persuasive design, and unspecified. This review found 11 different behavior change theories, 3 different technological adoption or persuasive design theories, and 3 unspecified theories. Of all the different theories within the different categories, the social cognitive theory was most often used (5/11, 45%).
The eHealth technologies used in the studies addressed the different positive health dimensions to varying extents. All the included articles (N=101, 100%) addressed (at least) the dimension of bodily functioning, 45 (44.6%) addressed daily functioning, 13 (12.9%) addressed participation, and 12 (11.9%) addressed mental well-being. We were not able to identify any indications that the dimensions of meaningfulness and quality of life were explicitly addressed in any of the eHealth technologies supporting self-management. Details about the positive health dimensions are provided in the accompanying materials. Most studies (48/101, 47.5%) focused on 1 specific dimension, namely, bodily functioning. Other articles (42/101, 41.6%) focused on 2 dimensions, 11 (10.9%) on 3 dimensions, and only 3 (3%) on 4 dimensions within their eHealth technology. The combination of the dimensions of bodily functioning and daily functioning was the most common (33/101, 32.7%), followed by the combinations of bodily functioning, daily functioning, and mental well-being (5/101, 5%); bodily functioning and participation (4/101, 4%); bodily functioning, daily functioning, and participation (3/101, 3%); bodily functioning, mental well-being, and participation (3/101, 3%); bodily functioning, mental well-being, participation, and daily functioning (3/101, 3%); and bodily functioning and mental well-being (1/101, 1%). To investigate whether there may be an increase or decrease in certain positive health dimensions over time, we compared the presence of certain dimensions with the years of the studies. Such information can be useful to see whether the target dimensions of eHealth interventions are changing with time. When comparing the presence of the dimensions with the years of the studies, we found that in the years 2013 to 2015 and 2017 to 2018, the dimension of bodily functioning was dominantly present, followed by daily functioning. From 2017 to 2021, a small increase in the presence of the dimension of mental well-being could be seen over the years. In the years 2020 and 2021, the presence of the dimensions of daily functioning and participation was almost equal to that of bodily functioning.
Overview
Intended Population
Included Population
Actual Population

The actual population included in the studies can be characterized as follows. In the 24.7% (25/101) of articles that mentioned the severity of their participants' COPD, most participants had moderate or severe COPD. Out of the 101 articles, only 21 (20.8%) shared a clear description of the education level of their participants, which was then categorized for this study. The educational level of participants could be categorized as low, medium, and high, which were almost equally distributed. For the 70.3% (71/101) of the articles that shared the mean age of their participants, we calculated the combined mean age, which resulted in 64.85 years. The gender of participants was clearly mentioned in 88 (87.1%) out of 101 articles and was almost equally distributed. In the 29.7% (30/101) of articles that shared the smoking history of their participants, about half of the participants (51%) were reported as current or former smokers. In the 10.9% (11/101) of articles that described technology-related experience, 89% of the participants had experience with technology.
All 101 included papers (partly) described the intended population for the intervention (as stated in the intervention description), the included population (as stated in the inclusion and exclusion criteria), and the final actual study population (as stated in the demographics of study participants). In some studies, participants had to meet certain inclusion criteria to participate, thereby restricting the group of eligible participants (ie, the actual population). This scoping review extracted the following inclusion criteria: disease-specific (needing to have a certain severity of COPD), capability-related (needing to be cognitively capable, able to read and write, understand a certain language, and willing or able to provide consent), age-related (needing to have a minimum or maximum age), smoking history–related (being a current or former smoker), and technology-related (needing to have digital skills, internet access, or own a certain device). More details about the concept of the self in self-management are provided in the accompanying materials.
There was some variation in the specific intended populations targeted in the articles. As shown in , most studies (59/101, 58.4%) were targeted at persons with COPD in general, with 23 (22.8%) focusing on ≥1 specific COPD severities and 19 (18.8%) focusing on COPD in combination with other chronic conditions. Some articles included >1 comorbidity.
presents an overview of the identified included population. More studies (50/101, 49.5%) than outlined in the Intended Population section (23/101, 22.8%) had disease-specific inclusion criteria (focusing on ≥1 COPD severities). Of the 101 articles, 50 (49.5%) had capability-related inclusion criteria, requiring participants to, for example, be cognitively capable or able to write and read to be eligible for participation. Furthermore, in 37.6% (38/101) of the articles, participants needed to have a certain minimum age, with 40 years being the most common. In 7.9% (8/101) of the articles, the age needed to be below a certain maximum. The maximum age of 70 years was the most commonly mentioned inclusion criterion, cited 4 times. A total of 12 (11.9%) out of 101 articles had inclusion criteria regarding smoking (history) in which participants needed to be, for example, a former smoker. Finally, 38.6% (39/101) of the articles had technology-related inclusion criteria. Participants needed, for example, to own a smartphone or tablet and have digital skills to participate. Only 1 study explicitly mentioned having no exclusion criteria based on age, comorbidities, and previous participation in pulmonary rehabilitation. Furthermore, in the same study, participants did not need to have previous experience using digital technology.
self-management processes found in the articles. No article explicitly described which self-management processes were reflected in the intervention design. When analyzing how self-management processes were supported within the different studies, we identified that most studies (94/101, 93.1%) addressed the process of taking ownership toward health needs (eg, by including self-monitoring of symptoms or setting goals). Of the 101 included studies, 71 (70.3%) focused on the process of learning (eg, by including education within their technology), 27 (26.7%) on health care resources (eg, by enabling communication with health care professionals within the technology), 23 (22.8%) on performing health promotion activities (eg, by performing exercise or skill training), 17 (16.8%) on social resources (eg, by involving caregiver and family or peer-to-peer support), 1 (1%) on adjusting (eg, ways to cope), and 1 (1%) on integrating illness into daily life (eg, alternating daily lives to conserve energy). We found no eHealth technologies specifically focusing on the self-management processes: meaning making, spiritual resources, psychological resources, processing emotions, or community resources.
shows the BCTs extracted in this study. Only 2 (2%) out of 101 studies explicitly stated which BCTs were used. When analyzing the descriptions in the studies, we identified that feedback and monitoring were mostly used in the different articles (88/101, 87.1%; eg, monitoring activity status). This was followed by shaping knowledge (66/101, 65.3%; eg, receiving education), goals and planning (38/101, 37.6%; eg, action planning), associations (23/101, 22.8%; eg, receiving status updates), social support (14/101, 13.7%; eg, communication with other people with COPD), regulation (11/101, 10.9%; eg, addressing medication adherence), repetition and substitution (10/101, 9.9%; eg, habit formation), rewards and threat (6/101, 5.9%; eg, receiving visual rewards), natural consequences (5/101, 4.9%; eg, information about health consequences), self-belief (5/101, 4.9%; eg, increasing self-efficacy), comparison of behavior (5/101, 4.9%; eg, follow along exercise video), comparison of outcomes (2/101, 2%; eg, information about the effect of physical activity), antecedents (1/101, 1%; eg, adding objects to the environment), and identity (1/101, 1%; eg, prompt identification as a role model). The BCTs of covert learning and scheduled consequences were not observed in the studies.
This scoping review provides an overview of the state-of-the-art eHealth technologies for COPD self-management interventions. We showed that current eHealth technologies tend to address the physical aspect of COPD self-management. These findings reveal a gap in the available literature, as many dimensions of the positive health paradigm and many self-management processes are not addressed in current eHealth interventions for COPD self-management. However, as COPD is a chronic disease that affects all aspects of one's life, the underrepresented dimensions and processes might be very important to include. Addressing them might give people with COPD the tools they need to adapt toward a new balance in life and would consider the person as a whole instead of only the body in the context of a disease. Our review also reveals a second gap: inclusion criteria restrict eHealth technology studies to a subgroup of people with COPD. Therefore, one should be cautious when interpreting results, as this may give a distorted view of the COPD population within these studies. These gaps demonstrate the need for more inclusive research and design of eHealth self-management interventions for people with COPD, focusing on multiple dimensions of the health paradigm. Future work should, therefore, go beyond the physical dimension and focus on including individuals in research who could benefit most from eHealth self-management interventions.

Principal Findings
This scoping review outlines the state of the art of eHealth self-management interventions for COPD. In the current literature, most eHealth technologies for COPD self-management focus on the physical aspect of self-management. eHealth technologies that include other aspects (eg, the social or mental aspects) are currently underrepresented in the literature. Moreover, it appeared that although eHealth interventions often aimed to target the whole COPD population, mostly only a subgroup of the COPD population was represented within the eHealth technology studies.
Underlying Theories, Techniques, and Processes

Only a few studies (32/101, 31.7%) reported on using underlying theories and specific BCTs to support their self-management eHealth interventions. No article explicitly mentioned focusing on certain self-management processes. This is surprising, given that all studies aim to improve self-management and thus aim to achieve some sort of behavioral change. As the concept of self-management varies in the literature, reporting on the use of such processes, techniques, and theories may be beneficial for understanding the underlying structures and processes that initiate behavior change to improve self-management. Building on these processes, techniques, and theories and providing more detailed reports on what was perceived as useful, beneficial, and desirable for this target population can help advance the field and contribute to the existing body of work. This may simultaneously be valuable for informing future eHealth self-management initiatives, as they can take these theories, techniques, and processes into account during development. A similar lack of reporting on underlying theories was observed in the review of Heimer et al, in which only 3 of the included studies reported specific BCTs. Furthermore, other studies encountered the problem of low reporting on BCTs; Hardeman et al and Lorencatto et al concluded that fewer than half of the planned BCTs were specified in the final published articles. In addition, a review by de Bruin et al revealed that reporting about the active content of behavioral interventions varies considerably between studies. This limits the readers' ability to compare, interpret, and generalize the effects of these studies. Thus, including such theories in eHealth interventions and reporting them transparently may lead to opportunities for achieving sustainable behavioral change.

The Physical Aspects of Self-Management
As we found in this review, there is a tendency toward managing the physical aspect of one’s disease in current eHealth technologies for COPD self-management. This is reflected throughout the different findings of this review. First, the functionality “self-monitoring” and the BCT “feedback and monitoring” were most often addressed. Although self-monitoring is very valuable and exacerbations may be detected at an early stage, this, nonetheless, demonstrates that the main focus lies on what happens with or inside the body. Second, the dominant physical aspect also manifests itself when looking at the self-management processes that are supported by the different technologies: “psychological resources,” “spiritual resources,” and “community resources” were not found to be included. The self-management process of “taking ownership of health needs” was mostly present, followed by “learning.” However, other processes, such as “integrating illness into daily life” and “adjusting,” were only observed once, although the target group had to deal with these aspects every single day . We believe that not addressing these processes is a missed opportunity, given that supporting people with COPD during their day-to-day activities might lead to even better improved outcomes of self-management. Finally, the dimensions of bodily functioning and daily functioning were most frequently used. This illustrates the current underrepresentation of other dimensions within current eHealth technologies for COPD self-management. Other dimensions (eg, participation and mental well-being) were not as dominantly represented or not observed at all (eg, meaningfulness and quality of life). As we observed a small increase in dimensions over the past few years, we might notice a small change of focus. However, this is not as fundamental and still leaves a lot of room for improvement on this matter. When examining other chronic diseases (eg, rheumatoid arthritis), a review by Seppen et al identified 4 different types of eHealth interventions used in the included articles. Although not explicitly stated, interventions were all related to the physical aspect (ie, medication adherence, activity plan, information, disease monitoring, and activity monitoring) . Thus, the tendency of the physical domain may not only be limited to COPD. Therefore, future studies should investigate whether this view is also present in other chronic diseases.
As all eHealth technologies target people with COPD as end users, only 14 (13.9%) out of 101 articles reported involving the patient perspective in their design or development process. This raised the question of whether and how the needs and perspectives of patients were taken into account. As including the perspective of end users leads to a better fit and increases the chances of successful adoption and sustained use , researchers should consider using such design principles when developing future eHealth technologies. This may provide many opportunities for improvements in self-management eHealth technologies for COPD. Furthermore, it appeared that although articles outlined the target group to be the general population of people with COPD, they often recruited a specific subset of people with COPD. Certain inclusion criteria are made within the studies (eg, needing to own a smartphone and needing to have a certain disease severity). The consequence of such inclusion criteria leads to a restriction, in that only a selected group of individuals are included in the studies. While this may be due to practicalities (eg, the complexity of COPD as a progressive lung disease), the question remains whether the intervention is generalizable or applicable to the wider population of people with COPD, especially if they were not part of the studies in the first place. This is particularly relevant as there is no golden standard to determine, for example, when someone is considered too old to participate, and opinions on such matters are likely to vary widely among researchers. Even when the restricting the patient group may be justified for the study purposes, it still affects the generalizability of study results. Therefore, awareness and transparency should be provided regarding these potential restrictions related to the patient group within such studies. Given that the most often used device is a (smart) measurement device (in combination with), a smartphone, or tablet, the group of people eligible to use these technologies in daily life is further limited. Effectively, this means that certain groups of people (eg, those who lack resources to buy these devices) are not included in the research and, therefore, the intervention might not be tested on people who are unfamiliar with smart devices or have low digital literacy. Previous studies showed that moderate levels of eHealth literacy and low levels of health literacy are prevalent among the COPD population . Thus, we cannot assume that this population has access to eHealth technologies and has mastered the skills to manage such interventions without any support. Therefore, guidance should be made available to help those who need support in using these eHealth technologies. Furthermore, although there might be some practical reasoning behind the inclusion criteria (eg, lack of budget to provide devices for all participants), it may simultaneously widen the gap between people included in eHealth technology studies and the group of people who might need the support the most. This, in turn, could make health care accessible only to those with high (digital) literacy and those already equipped with the necessary resources to improve their health. This is most likely an unintentional and undesirable direction to head, but without proper awareness, it may easily become the blind spot in current and future eHealth intervention studies. 
Future studies must, therefore, be aware of possible subgroups, make efforts to include the underprivileged population, and be transparent in their research regarding the population reached. While we acknowledge the challenge of successfully recruiting participants who are representative of the population as a whole and recognize that this is extremely difficult to achieve, we nevertheless recommend that future studies strive to reach those people who are underrepresented and difficult to reach.
This review provides a very first overview and a diverse insight into different underlying components of self-management eHealth interventions for COPD. It highlights the existing gaps in the literature and uncovers opportunities for the development of eHealth self-management interventions. To the best of our knowledge, no research has yet examined all these various aspects. However, this review also has its limitations. First, data extraction and categorization were challenging due to the style of writing in articles (which comes with certain formats and word count limitations), the lack of explicit reporting on certain aspects, and the overlap between some processes and dimensions. It might be the case that through incomplete reporting in articles, certain self-management processes, BCTs, health dimensions, aspects of the technology, or details about the “self” could not be extracted. However, albeit being a scoping review, we followed a very systematic approach to mitigate this limitation as much as possible. Therefore, we believe that we were able to give a complete picture of the current literature about eHealth technologies for COPD self-management. Second, some dimensions of positive health and self-management processes are closely related, intertwined with, or support each other. For example, the dimension of “quality of life” was not observed to be explicitly addressed within the eHealth interventions. However, studies may have an overarching goal of increasing the quality of life of people through the use of their intervention. One should be aware of this when interpreting these results, as the quality of life of people with COPD might still be affected by the use of eHealth intervention. Third, most articles (470/588, 80%) were screened by 1 reviewer, leaving room for potential subjective judgment. However, by implementing several measures to ensure alignment (eg, extensive discussion of eligibility criteria, 20% of the screening conducted by 2 reviewers, and discussions to resolve discrepancies), we minimized possible implicit biases. Finally, this scoping review investigated a broad range of aspects to grasp the state of the art, but not all. For example, this paper did not assess the effectiveness or impact of the interventions, and it also did not evaluate the quality or strength of the evidence. As this is a relatively new area of research, we should map the existing literature first. Furthermore, this review included only English articles, used COPD as a search term while it included a broad spectrum of lung diseases (such as emphysema and chronic bronchitis), and had a search date limitation. Consequently, some studies might have been missed, or some aspects may not have been investigated. However, to the best of our ability, we tried to provide a first overview while leaving opportunities for future research to focus on the aspects that were not covered within this scoping review. As such, this state-of-the-art overview could serve as a starting point for future systematic reviews and original research that will dive into more specific research areas.
A Transmission-Based Dielectric Property Probe for Clinical Applications

The reflection-based, open-ended coaxial dielectric probe has found wide use in industry and medical applications over nearly four decades and essentially provides ground truth for many studies. Numerous calibration models have been developed over time which provide accurate results over a large range of frequencies. Their use is particularly beneficial when there is only access to one side of an object. In the case of tissue measurements, their use can dramatically reduce the tissue preparation time, which can be critical depending on study needs—in particular because of the substantial change in properties immediately after excision. The most widely used commercial system is that developed by Keysight Technologies (Santa Clara, CA, USA), but more recent products are now marketed by Delfin Technologies (Kuopio, Finland) and SPEAG (Zurich, Switzerland). While the Keysight technology has proved useful in various settings, recent studies have shown that the penetration depth is directly related to the probe diameter, which has proven detrimental in various tissue characterization studies since their effective probe diameter is only on the order of 2 mm. Typical penetration depths are on the order of 1/6th that of the probe diameter. The SPEAG technology has solved some concerns regarding cable bending issues by attaching the probe directly to the microwave electronics device and by introducing larger diameter probes for deeper penetration. The former now allows for a more handheld operation which provides further benefits. This larger probe is very useful in its intended application of placing the probe on the skin surface and determining the level of oedema down to diagnostically useful depths. However, there are critical in vivo applications where having a small probe is useful while also requiring the ability to assess properties over a space of roughly 1 cm across or larger—as will be discussed below. The penetration depth is a particular concern for this technology. In the case of measuring heterogeneous tissue, the Keysight probes only sense roughly the top 280 μm into the surface. It is easily conceivable that the top tissue layer might not be representative of the entire excised tissue. This is also precisely where the steepest temperature gradients are for newly excised tissue. Additionally, very often newly excised tissue is covered with a thin layer of blood that can either be fresh or dried depending on the technician's tissue preparation processes. With the sensing depth being so shallow, the blood properties could easily confound the recovered composite properties. These factors and others can easily impact the desired measurements. Reflection-based horn antenna approaches have also been employed, typically for the characterization of thinly layered materials. These have typically utilized thin layers of material positioned at a known distance in front of the horn and a reflection plate at varying distances behind the sample. Most notably, these have found use in characterizing polymer layers.
Finally, reflection-based interferometry and microwave microscopy techniques have recently been introduced by Bakli and Haddadi which were used to very accurately assess the changes in saline properties as a function of sodium chloride concentration at two single frequencies and general determination of the dielectric constants and loss tangents of liquids at 2.45 GHz. Alternatively, a range of transmission techniques have been used to measure the dielectric properties of materials. One of the more common approaches is to insert the sample into either a waveguide or coaxial chamber and assess the properties based on perturbations to the S-parameters. Harris et al. performed a useful comparison of the waveguide and coaxial approaches which concluded that the waveguide techniques were easier to manipulate because of the required size and shape of the sample but that the coaxial technique had bandwidth advantages because it was not constricted by operating cut-off frequencies. Numerous studies by Nicolson and Ross , Weir and Larsson et al. have thoroughly analysed the problem in terms of deriving the complex permittivity and permeability values based on the corresponding measured S-parameters. Correspondingly, researchers have also configured free space set ups for similar electrical property measurements. These typically involve two directive horn antennas oriented towards each other and a known thickness sample placed between them [ , , ]. The changes in the measured transmission phase and amplitude are used to infer what the properties of the sample must have been. One interesting variant of this is the technique developed by Garret and Fear , where they use two opposing aperture antennas directed at each other and slightly compress the breast of a patient between them. Utilizing ultra-wideband transmission, they are able to recover rough estimates of the overall dielectric property distribution which includes both the adipose and fibroglandular tissues. A more recent version of this approach has been described in Garrett et al. utilizing both transmission and reflection S-parameter measurements to recover in vivo properties of the compressed breast. These values are subsequently used as initial estimates for inverse scattering imaging of the breast. While useful for certain structures, it is impractical in many settings. In general, these approaches are not suitable for in vivo surgical settings. Transmission measurement techniques have also been a mainstay of tomographic and inverse scattering imaging approaches. The technique presented by Jacobi and Larsen is noteworthy in that they recovered useful images; however, they introduced chirp measurement strategies which allowed them to directly decrease the impact of multipath signals. We have developed a technique that utilizes some aspects of the transmission concepts but exploits the fact that the open-ended coaxial cables are so small that our measurements are effectively in the far field which has important ramifications for certain in vivo and ex vivo settings. This transmission-based dielectric measurement scheme utilizing two opposing open-ended coaxial cables. Similarly, to the two, large antenna approach, this method measures the amplitudes and phases for signals traversing the full intervening volume to provide a more representative measure of the properties throughout the volume. 
However, because the receiving antenna is in the far field, simplified equations can be used to extract the associated dielectric properties without a complete S-parameter analysis. The primary challenge is that the signal reception can be weak depending on the probe separation, frequency and the lossiness of the sample. Generally, only fringing fields escape from the ends of the open-ended coaxial cables. The probes are essentially low efficiency antennas. However, beyond the previous benefit just mentioned, there are other critical advantages. First it is significantly less sensitive to probe contact with the sample. Additionally, owing to the fact that the reflection-based probe effectively operates at one of the two more unstable points on the Smith chart (i.e., an open circuit), the transmission based approach is quite stable even with respect to cable bending. Two important drawbacks include the fact that a sample needs to be contacted on opposite sides by the probes and that the thickness cannot be that great. In this case we are generally considering thicknesses on the order of 1–2 cm. While these can be important limitations, we will discuss a specific application where this approach can be a real advantage, even for in vivo settings. Along with these issues is the need for a high dynamic range measurement system. The Rohde and Schwarz ZNBT8 system can readily measure down to -140dBm from a transmitted 1mW signal to easily meet the needs of this approach. Additionally, more compact concepts utilizing evolving software defined radios (SDR’s) are being developed to meet this type of need in a more compact and economical way . One important scenario where we plan to apply this new concept is in the area of bone density measurements of vertebrae during spinal fusion surgeries. Back pain and injuries are significant health problems in North America and northern European countries . In these cases, the spinal column needs to be held immobile for a lengthy period of time until adjoining vertebrae become substantially fused. For these situations, instrumentation is installed over a length of multiple vertebrae to hold the column in place. A very common type of instrumentation involves posterior pedicle screws connected by rods in several continuous vertebrae to supply sufficient support. It is critical that the vertebrae are strong enough to withstand the forces exerted through the screws . While systemic bone density measures such as dual-energy X-ray absorptiometry (DXA) provide an indicator of strength, the overall surgical success rate is decreased in bone with low density . Loosening instrumentation failure can lead to significant patient pain and inconvenience, trauma and high costs of repeated surgeries [ , , , ]. While exact prevalence of and health care costs associated with osteoporotic-caused surgery failures is not known, it is certainly acknowledged as a significant medical problem . Direct measures of the bone tissue during surgery could assist in the decision making and thereby minimize the number of repeated surgeries, reducing health care costs while improving overall patient quality of life. In cases where the dielectric probe determines that the vertebrae are weak, alternative surgical strategies can be used to improve the probability of success. While there are methods to improve the strength of the instrumentation such as by either incorporating more vertebrae or utilizing specialized screws and/or anatomical cements with the screws, each presents their own risks. 
Adding more bones simply increases the invasiveness of the surgical procedure. In addition, there are risks that small pieces of the dried cement could get into the blood stream leading to embolisms. While DXA scans are generally regarded as a gold standard for bone density determination, there are numerous factors which can confound these results and produce misleading measurement values and lead to inappropriate treatments . A typical DXA reading of the spine is often taken as an average of values from three to four continuous lumbar vertebrae. This immediately implies a level of local variation and, especially if the measurement is performed in the anterior-posterior view, osteoarthritis of the facet joints and local osteophytes may highly influence the values. In addition, there are reports of significant variation between technicians who perform the exams. Equally importantly, because DXA readings are based on single projection X-rays, a variety of factors can influence the readings. These can include artefacts from different calcifications in some of the overlying tissue such as the aorta and pancreas and, for the thoracic spine, the rib cage makes measurement of the vertebra almost impossible. As mentioned above, the most common approach for constructing stabilizing instrumentation of part of the spine is to fixate the vertebra by inserting screws through canals in the pedicles and continuing on into the central mass of the vertebrae . In cases where the bones are insufficiently strong, there are alternatives. While bio-compatible cements can be used to strengthen the bone:screw interface as mentioned above, other approaches include using specially designed screws or involving more vertebrae to reduce the physical load on each screw . This obviously requires a more invasive surgery with concomitant health risks. A more definitive measure of vertebrae suitability on a bone by bone basis directly during the surgical session would be ideal for creating an individually designed surgical plan and performance. The new transmission probe concept may be ideal for this situation. Historically, dielectric probe measurements of bone have been challenging because of the different issues discussed above. These are further complicated by the fact that the measurements would need to be performed in vivo. This setting could prove particularly intriguing for the new probes because the clinician has access to the vertebrae from two sides through the opposing pedicle canals. While the orientation of a pair of standard open-ended coaxial cables would not be aligned perfectly, in a more refined implementation, simple, custom bends in the coax can be added to provide the desired configuration. The remainder of this paper describes the theoretical underpinnings of the approach followed by phantom experiments to demonstrate overall feasibility. discusses the theory and measurement configurations and presents representative results. While this study discusses the theory with respect to the slopes of the signal amplitudes and phases requiring multiple measurements, a final product will ultimately only have access to a single measurement. Strategies for accommodating this challenge are left for further development.
2.1. Solving for Dielectric Properties from Amplitude and Phase Measurements

For this implementation, in which a signal radiated from the open end of a 2.2 mm diameter coaxial cable is received by a similar cable, the threshold for the far field can be approximated as $2D^2/\lambda$, where $D$ is the diameter of the aperture and $\lambda$ is the wavelength. Using a diameter of 1.68 mm (the diameter of the coaxial cable insulator) and a maximum frequency of 8 GHz, the largest far-field threshold would be 1.35 mm in the worst case when the medium is water. Given that the intended range is approximately between 10 and 20 mm, the far-field approximation is valid. This substantially simplifies the computations compared with other transmission-mode dielectric measurement techniques, which require full S-parameter analysis. In general, the far-field signal measured at one antenna due to a signal propagating from another is proportional to $1/r^2$ and $e^{-jkr}$, where $r$ is the separation distance and $k$ is the complex wavenumber. This relationship can be written as:

$$E_{\mathrm{received}} = C\,\frac{e^{-jkr}}{r^{2}} = C\,\frac{e^{-j\beta r-\alpha r}}{r^{2}} = \left[c_{1}\,\frac{e^{-\alpha r}}{r^{2}}\right]\left[c_{2}\,e^{-j\beta r}\right] \tag{1}$$

where $C$, $c_{1}$ and $c_{2}$ are constants, $\beta$ is the phase constant and $\alpha$ is the attenuation coefficient. The constants are included to simplify the equations and for completeness but are not used in the latter steps of the derivation. In this representation, the equation has been separated into its real and imaginary parts, respectively. From the first of the last two terms, the amplitude (dB) can be written as:

$$\text{Magnitude} = 20\log_{10}(c_{1}) - 8.68589\,\alpha r - 20\log_{10}(r^{2}) = c_{1a} - 8.68589\,\alpha r - 20\log_{10}(r^{2}) \tag{2}$$

By adding a factor of $20\log_{10}(r^{2})$ to this equation, which is possible since the distance $r$ is known, the resulting equation is:

$$\text{Magnitude} = c_{1a} - 8.68589\,\alpha r \tag{3}$$

where $c_{1a}$ is a constant and the slope with respect to $r$ is just $-8.68589\,\alpha$. Therefore, $\alpha$ can be computed directly from the measurement data as a function of the magnitude slope once the $20\log_{10}(r^{2})$ term has been added. This is performed by measuring the data at multiple spacings, plotting the magnitude as a function of separation distance, adding the $20\log_{10}(r^{2})$ factor and using a least squares parameter estimation technique to fit the data to a straight line. Spacings less than 1.5 mm were not included in the fitting process to minimize corruption from data points not within the far field. In addition, spacings greater than 20 mm were not used because of corruption from noise. The resulting slope yields $\alpha$. The constant $c_{1a}$ is not used but is included in the equation for completeness. Similarly, by taking the natural logarithm of the last term in Equation (1), we get:

$$\text{Phase} = c_{2a} - \beta r \tag{4}$$

where $c_{2a}$ is a constant and $\beta$ is the phase constant in radians/m. This is performed by measuring the data at multiple spacings, plotting the phase as a function of separation distance and using a least squares parameter estimation technique to fit the data to a straight line. The resulting slope yields $\beta$. The constant $c_{2a}$ is not used but is included in the equation for completeness.
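The slope-extraction step can be illustrated with a short numerical sketch. The example below generates synthetic magnitude and phase data from a known α and β, applies the 20·log10(r²) correction of Equation (3) and recovers the slopes with least squares fits; the specific α, β and offset values are arbitrary (chosen only to be water-like) and are not taken from the measurements.

```python
import numpy as np

# Synthetic check of the slope extraction: build |S21| (dB) and phase for a
# known alpha/beta, then recover them with straight-line fits.
alpha_true = 120.0                                   # attenuation constant (Np/m), illustrative
beta_true = 900.0                                    # phase constant (rad/m), illustrative

r = np.arange(1.5e-3, 15.0e-3, 0.5e-3)               # spacings 1.5-15 mm, in metres
mag_db = -8.68589 * alpha_true * r - 20 * np.log10(r**2) + 3.0   # arbitrary +3 dB offset
phase_rad = -beta_true * r + 0.4                                 # arbitrary phase offset

mag_adj = mag_db + 20 * np.log10(r**2)               # remove the 1/r^2 spreading term, Eq. (3)
mag_slope = np.polyfit(r, mag_adj, 1)[0]             # least-squares line, keep the slope
phase_slope = np.polyfit(r, phase_rad, 1)[0]

alpha = -mag_slope / 8.68589                         # slope = -8.68589 * alpha
beta = -phase_slope                                  # slope = -beta, Eq. (4)
print(alpha, beta)                                   # recovers ~120 and ~900
```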
Once $\alpha$ and $\beta$ have been determined, the dielectric properties can be solved for directly:

$$(\beta - j\alpha)^{2} = k^{2} = \omega^{2}\mu_{o}\varepsilon_{o}\,(\varepsilon' - j\varepsilon'') \tag{5}$$

where $\omega$ is the frequency in radians, $\mu_{o}$ is the free space magnetic permeability, $\varepsilon_{o}$ is the free space electrical permittivity, $\varepsilon'$ is the real permittivity, $\varepsilon''$ is the imaginary permittivity and $j$ is $\sqrt{-1}$, respectively. Equation (5) can be expanded to isolate both $\varepsilon'$ and $\varepsilon''$, respectively:

$$\varepsilon' = \frac{\beta^{2} - \alpha^{2}}{\omega^{2}\mu_{o}\varepsilon_{o}}, \qquad \varepsilon'' = \frac{2\beta\alpha}{\omega^{2}\mu_{o}\varepsilon_{o}} \tag{6}$$
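Equation (6) translates directly into code. The following minimal sketch assumes α and β have already been obtained from the fits described above; the numerical inputs are placeholders chosen only to produce water-like values.

```python
import numpy as np

EPS0 = 8.854187817e-12          # free-space permittivity (F/m)
MU0 = 4e-7 * np.pi              # free-space permeability (H/m)

def permittivity_from_slopes(alpha, beta, freq_hz):
    """Eq. (6): real and imaginary relative permittivity from the fitted slopes."""
    w = 2 * np.pi * freq_hz
    eps_real = (beta**2 - alpha**2) / (w**2 * MU0 * EPS0)
    eps_imag = (2 * beta * alpha) / (w**2 * MU0 * EPS0)
    return eps_real, eps_imag

# Placeholder slopes at 5 GHz (roughly water-like values result).
eps_r, eps_i = permittivity_from_slopes(alpha=120.0, beta=900.0, freq_hz=5e9)
print(eps_r, eps_i)             # ~72 and ~20
```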
2.2. Measurement Configuration

a,b shows a photograph and schematic diagram of the transmission probe measurement experimental set-up. One semi-rigid coaxial cable protrudes vertically through the centre of the base of the 15.2 cm diameter tank. A second probe is attached to a calliper which is positioned at the top of the tank. Flexible coaxial cables were attached to both semi-rigid coaxes and fed into the Rohde and Schwarz vector network analyser (model ZNBT8—Munich, Germany). The system is calibrated using the internal device 2-port calibration process with the reference plane set at the interfaces of the two flexible coaxial cables attached to the two semi-rigid coaxial cables comprising the probes. S21 amplitude and phase data were acquired at 201 frequencies in increments from 9 kHz to 8.5 GHz with the IF bandwidth set to 1 kHz and the averaging factor set to 1 to maximize the dynamic range. The time for each acquisition was 0.2 s. Data was initially acquired with the two open-ended coaxial cables concentrically touching each other and then subsequently at 0.5 mm intervals up to a maximum separation of 4 cm. Data was acquired for three mixtures of glycerin and water because of their ability to produce a wide range of dielectric properties depending on the mixture ratios.
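For clarity, the sketch below shows one way the acquired sweep might be organized for the subsequent slope fits: a 201-point frequency axis, one complex S21 trace per probe separation, and conversion to dB magnitude and unwrapped phase. The array layout and the random stand-in data are assumptions for illustration, not the instrument's native output format.

```python
import numpy as np

freqs = np.linspace(9e3, 8.5e9, 201)                    # 201 points, 9 kHz to 8.5 GHz
separations_mm = np.arange(0.0, 40.5, 0.5)              # contact to 40 mm in 0.5 mm steps

# Stand-in complex S21 data with shape (n_separations, n_frequencies);
# in practice this comes from the VNA acquisition described above.
rng = np.random.default_rng(0)
s21 = rng.normal(size=(len(separations_mm), len(freqs))) \
      + 1j * rng.normal(size=(len(separations_mm), len(freqs)))

mag_db = 20 * np.log10(np.abs(s21))                     # |S21| in dB
phase_deg = np.degrees(np.unwrap(np.angle(s21), axis=1))  # phase unwrapped along frequency
print(mag_db.shape, phase_deg.shape)
```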
2.3. Error Considerations

For the clinical implementation, we expect three primary sources of error: (a) possible air gaps and poor coaxial cable/target contact, (b) errors in the distance measurements, which will impact the slope calculations and (c) the accuracy and consistency of the calibration measurements. For the first, these effects would be difficult to assess in an experimental configuration because the technique depends on good contact with the tissue (or liquid) along the outside of the coax to attenuate unwanted multipath signals. In such a situation, it would be difficult to accurately quantify the amount of dislocation between the probe and surface for comparison with the measurement perturbations. This type of analysis would be best suited to a numerical study, which is beyond the scope of the current paper. Our initial experience with the bone measurements indicates that the transmission configuration is quite stable given that the probes need to be removed at each measurement cycle so that the holes can be drilled deeper into the bone, which entails a fair amount of cable bending. In addition, the probes were held in place manually, suggesting that the contact with the bone surface incorporates some variability. In spite of these issues, the measurements remain quite stable. These aspects will receive additional consideration during further stages of development. For the second, errors in the distance directly impact the recovered property values. If we consider a distance for the sample measurement on the order of 20 mm and a possible error of ±0.25 mm, this would amount to an error of ±1.25%. From the analysis in , the distance error relates directly to the slope values, which would also vary by ±1.25%. However, in examining Equation (6), the values for ε′ go as the square of both slopes (β and α), such that a 1.25% error in the slopes equates to a 2.5% error in ε′. Likewise, because ε″ goes as the product of β and α, its error would also be on the order of 2.5% when the separation distance changes by 1.25%. Finally, for the actual implementation of this probe technique, it will only be possible to make a microwave measurement at a set separation distance. It will also require a calibration measurement at a set distance while outside of the bone. A fixture will need to be developed that allows for a reasonably close measurement while still keeping the probes within the far field of each other. The calibration technique will need to be robust and convenient to make the entire process feasible. These errors will need to be evaluated once implemented.
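The distance-error argument can be checked numerically. In the sketch below, a ±1.25% scaling is applied to both fitted slopes (equivalent, to first order, to a ±1.25% separation error) and the resulting relative changes in ε′ and ε″ come out near ±2.5%; the α, β and frequency values are illustrative only.

```python
import numpy as np

EPS0, MU0 = 8.854187817e-12, 4e-7 * np.pi

def eps_from(alpha, beta, f):
    w = 2 * np.pi * f
    return ((beta**2 - alpha**2) / (w**2 * MU0 * EPS0),
            (2 * beta * alpha) / (w**2 * MU0 * EPS0))

alpha, beta, f = 120.0, 900.0, 5e9          # placeholder slopes and frequency
base = eps_from(alpha, beta, f)

for err in (-0.0125, 0.0125):               # +/-1.25% distance (slope) error
    pert = eps_from(alpha * (1 + err), beta * (1 + err), f)
    print(err,
          (pert[0] - base[0]) / base[0],    # relative change in eps'
          (pert[1] - base[1]) / base[1])    # relative change in eps''
# Both relative changes are approximately +/-2.5%, as argued in the text.
```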
3.1. Dielectric Liquid Testing and Analysis

a,b show plots of the S21 amplitudes as a function of frequency for multiple separations for the 80:20 glycerin:water bath at separations up to 20 mm. Overall, the signal strengths are increasingly higher for closer separations. In addition, especially for the lower frequencies and spacings less than 4 mm, the signal strength drops off monotonically with decreasing frequency. This is a direct consequence of the opposing open-ended coaxial cables nearly approximating a series capacitor, which behaves as a high-pass filter. More noticeably, at frequencies below 0.5 GHz, there is a large ripple in the amplitude for separations greater than 4 mm which progressively decreases in size as the separation decreases. This is primarily caused by multi-path signals which can travel along the outer surfaces of the coaxial cables and along the tank:liquid and tank:air interfaces. In these cases, the attenuation due to propagation along these paths can be less than that for signals crossing the gap between the probes, to the extent that when they recombine with the desired signals they can add constructively and/or destructively, consequently accounting for this rippling behaviour. The phenomenon diminishes at higher frequencies because the attenuation along these alternate paths increases due to the increased liquid conductivity as a function of frequency. This feature also diminishes for closer separations because the desired signal propagating directly across the gap is sufficiently strong to easily overwhelm the unwanted signals. These types of multi-path corruptions are typical of near-field experiments and have been documented in Meaney et al. The signals at higher frequencies decrease progressively more rapidly as a function of separation distance than the lower-frequency signals. However, within the range of 2 to 6 GHz, the decrease in signal strength as a function of increasing separation is reasonably monotonic, with acceptable signal strength even out to roughly 17 mm separation. This is effectively the usable bandwidth for this approach. a,b shows similar plots of the S21 amplitudes as a function of frequency for multiple separations for the 20:80 glycerin:water bath. The graphs are separated into a set of narrowly spaced probes spanning (a) separation distances from 0 mm to 11 mm and (b) more broadly spaced distances from 8 mm to 20 mm. Overall, the signal attenuation in this liquid is lower due to the lower water content. This is evident where the attenuation for given spacings is less than that for the corresponding levels in the 80:20 glycerin bath cases. In addition, the attenuation roll-off as a function of frequency is substantially more pronounced. Interestingly, the multi-path ripples extend up to roughly 2 GHz for spacings greater than 4 mm compared to only 0.5 GHz for the 80:20 case. Even with these unwanted signals, this approach appears to remain viable from roughly 2 to 6 GHz and up to a spacing of 17 mm before becoming compromised by the noise floor. a,b shows the phases and magnitudes of the measured signals as a function of separation distance in the 80:20 bath for several of the frequencies within the 2 to 8.5 GHz band. The phases have all been normalized to 0 degrees when the probes are touching. As can be seen, beyond roughly a 1.5 mm separation, the phases are nearly linear, with the exception of progressively increasing variability at greater distances associated with the encroaching noise floor.
For analysis purposes, the slopes were determined based on least squares fits of the plots to straight lines while eliminating the data points for distances less than 1.5 mm (because of near-field effects) and greater than 15 mm (because of noise corruption). For the magnitude cases, the 1/r² feature is readily evident. Even with this confounding attribute, it is still clear that the attenuation per unit distance increases with frequency. c shows the same data as in b but with the 1/r² term subtracted out as described in Section 2.1. Beyond 1.5 mm and below 15 mm, the resulting curves are roughly linear. The slopes for each were determined utilizing a least squares fit to a straight line. a,b shows the recovered real and imaginary permittivity values, respectively, over this frequency range for three different glycerin:water mixtures (80:20, 50:50 and 20:80) utilizing the new transmission-based approach compared with ground truth measured using the standard dielectric probe kit from Keysight Technologies. The match is quite good for both properties over the prescribed bandwidth, even for the substantial range in properties of these liquids. Interestingly, it also appears that the new technique accurately mimics the characteristic dispersion curvature of the different liquids, which can be quite distinct, with some concave upwards and others concave down.

3.2. Preliminary Bone Measurements
a shows the test configuration using our transmission probe technique on the trabecular portion of a bovine bone. In this case, the bone was secured with a rod through the cortical portion of the bone. Small diameter holes were drilled into both sides of the bone utilizing guide holes on either side of the fixture. For this experiment, holes were drilled at roughly 1 cm deep from both sides (one goal was to keep portions of the coaxial lines inside the bone to add attenuation to any signal leaking out and minimize multi-path corruption). For each measurement, the amplitude and phase were recorded over our full bandwidth −0.1 to 8.5 GHz in 350 MHz increments. 25 samples were taken for each sweep, the IF bandwidth was set to 10 Hz and the averaging factor was 10. The time for each acquisition was 70 s. After each measurement, the probes were removed and the depth of one hole was drilled 3 mm deeper. The probes were re-inserted and an associated measurement taken. This process was repeated until the probes met in the centre of the bone—note that the drilling side was alternated between each measurement to ensure the final locations were as deep in the bone as possible. The measurements were generally immune to artefacts related to repeated cable motion and bending during the overall process. b shows the amplitude plots as a function of frequency for several probe separation distances (these distances were computed based on the point at which the probes contacted each other and drill depth at each increment). Similar to the graphs for the uniform liquid measurements, the amplitude declines monotonically as a function of separation distance for frequencies greater than 5 GHz. The slopes of the curves as a function of frequency are steeper than that for the 80:20 glycerin:water tests indicating that the attenuation per unit distance is greater. The multi-path corruption appears at considerably higher frequencies than previously observed for the liquid tests. These are likely due to the fact that the propagation modes on the outsides of the coaxial cable do not have sufficient distance within the holes drilled in the bone to sufficiently attenuate. However, there does appear to be a band between roughly 4.5 GHz and 7 GHz where the curves behave well enough for properties to be extracted. From the data in b, a,b shows the transposes of the (a) magnitudes and (b) adjusted magnitudes with the 1/ r 2 term subtracted out as a function of probe separation distance for representative frequencies. The former plots are relatively well behaved with a slight concave upwards curve to it. The plots in b are generally along more of a straight line having had the 1/ r 2 factor removed. While there appears to be more ripple in the data, this is somewhat due to a narrower scale than that for a. c,d show the associated phases for the same measurements. For the plots in c, the phases have been normalized to their value for 2.2 GHz and have been unwrapped as a function of frequency (this technique has been explicitly described in Meaney et al. ). d shows the phases from c transposed and plotted as a function of separation distance. With respect to the transmission mode probe algorithm, the slopes of the magnitudes and phases were extracted from the graphs in b,d, respectively. The final relative permittivity and conductivity values are plotted in e,f as a function of frequency. 
These values are nominally in line with those reported by Peyman and Gabriel, who showed considerable variation in animal bone dielectric properties as a function of age.
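When conductivity rather than imaginary permittivity is reported, the conversion σ = ωε₀ε″ can be applied over the usable band identified above (roughly 4.5–7 GHz for the bone data). The sketch below assumes the recovered ε″ values are stored in an array aligned with the frequency axis; the numbers shown are synthetic placeholders.

```python
import numpy as np

EPS0 = 8.854187817e-12
freqs = np.linspace(0.1e9, 8.5e9, 25)                 # 25 sweep points, as in the bone tests
eps_imag = np.full_like(freqs, 3.0)                   # stand-in for the recovered eps''

band = (freqs >= 4.5e9) & (freqs <= 7.0e9)            # band where the bone data behaved well
sigma = 2 * np.pi * freqs[band] * EPS0 * eps_imag[band]   # sigma = w * eps0 * eps'' (S/m)
print(np.c_[freqs[band] / 1e9, sigma])                # frequency (GHz) vs conductivity
```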
We have demonstrated that utilizing two opposing open-ended coaxial probes, we can accurately recover the transmission medium dielectric properties for quite short distances. This concept is limited in applicability to testing objects where opposing sides can be accessed and the distance is short. However, the technique has important advantages over the classical, reflection-based open-ended coax probe in critical settings. For instance, the reflection based probe’s penetration depth varies as a function of probe diameter and only extends roughly 280 μm for the commercial probes from Keysight Technologies. The transmission-based probes provide a more uniform assessment of the volume circumscribed by the space between the probes. It is also worth noting that these probes operate over a very broad bandwidth, bracketed by multi-path corruption at the low end and the noise floor at the high end. From an operational perspective, the reflection-based probes are challenging to work with given that they are prone to measurement artefacts due to even slight motion of the connecting cables. This is largely due to the fact that the probes are calibrated to operate at one of the more sensitive locations on the Smith chart—i.e., an open circuit. The contact interface is also critical since even small amounts of intervening air can easily disrupt the desired measurements. It will be crucial to evaluate how sensitive this transmission-based approach is to small air gaps in future studies. The intended application for this technique will be to measure the dielectric properties of vertebrae during surgery to provide an indication of bone health. a,b shows two photographs of a 3D printed, lower lumbar vertebrae with two coaxial probes inserted for illustration purposes. As can be seen, the holes through the pedicle arms will be quite small, making it challenging to get anything larger than a small coaxial cable into it. It can also be seen that where the cables terminate inside the bone, their separation distance is only on the order of roughly 1.5 cm. Per our earlier data, it appears we have sufficient signal to detect these measurements. Adjustments will necessarily be made to curve the ends of the open ends so that they face each other. We have extensive experience in these types of alterations. In addition, it will be necessary to mechanically determine the probe separation distances so that the appropriate magnitude and phase slopes can be computed (a reference measurement will need to be made externally to complete the data for the slope calculations). These processes will be designed as part of the product development. In utilizing the new transmission-based probes in a clinical setting, it will be necessary to record measurements at two different separation distances to calculate the magnitude and phase slopes that are used to derive the dielectric properties. The first measurement will be across the tissue volume of interest. A convenient second option will include the case where the two open-ended coaxial probes directly contact each other outside of the body. One challenge will be to determine the spacing of the probes when positioned across the volume. Various geometrical triangulation techniques will be explored for estimating this distance. A second involves the fact that in the case where the probes contact each other, the fields are in the near field and deviate from their desired linear behaviour. 
As can be seen from the previous data, the measurement characteristics are reasonably well behaved and it will be relatively straightforward to devise a compensation technique. These issues need to be resolved as part of developing an actual product. The current results were somewhat limited in total depth because of the noise floor of the measurement vector network analyser (VNA). While most conventional VNA’s have noise floors on the order of −100 dBM, newer models such as the Rohde & Schwarz ZNBT8 system (Munich, Germany) are capable of measuring signals down to −150 dBm. The cost of this system would be prohibitive for clinical applications. However, new technology, including the use of software defined radios (SDR’s) will make these higher dynamic range measurements feasible and at nominal costs . In this case, the key will be to be able to add sufficient signal amplification and sampling time to adequately reduce the noise floor. These advances are well within current technology capabilities. While it is challenging to envision applications for this technique, in collaboration with orthopaedic surgeons, we are applying this approach to testing the bone quality of individual vertebrae during actual bone fusion surgeries. In this context, it is critical that doctors accurately know the strength of the vertebrae before using pedicle screws to fix the spinal column with appropriate instrumentation. Conventional techniques such as dual-energy X-ray absorptiometry (DXA) only provide average values of the systemic bone mineral density which is insufficient with respect to the individual vertebrae for which the strength can vary considerably from bone to bone. The unique aspect of this application is that the surgeons already form a slight hole through both pedicle arms of the structure into the main body of each vertebrae. In this setting, the coaxial probes will be custom designed to fit into the holes such that once deployed, the faces of the probes will be oriented towards each other and usually separated by 2 cm or less. This has the potential to be an ideal application for this approach where it will require regular cable bending and non-ideal probe:target contact. This approach will revolutionize the use of dielectric measurements for in vivo settings.
Assessing parental comprehension of online resources on childhood pain

The internet is widely used by individuals seeking health information. However, internet-based patient education materials are usually difficult for the general population to understand.
We evaluated the readability and reliability of websites related to child pain. We found that the readability of these websites did not meet the recommended sixth-grade reading level, and most websites had moderate-to-low reliability and quality.
Pain is a distressing problem for both children and their parents. It can impair the quality of life of both the individual and the family. Untreated pain can result in chronic pain, and the prevalence of chronic pain in children has been reported to be between 11% and 53%. Pediatric pain can lead to anxiety, depression and low self-esteem, and the child may miss school and social or sports activities. It can also create great concern and desperation among families because they may not know the cause of the pain or how to treat it. Almost every individual has a smartphone or computer, so the internet is generally the first option for people seeking health information. This fact increases the number of health-related websites; it has been reported that there are more than 70,000 health-related websites. The internet is an important and valuable source of health information for parents. The 2018 Health Information National Trends Survey reported that the internet was the first choice of health information resource for 74.5% of respondents. However, the quality and readability of internet health information resources vary. Patient education materials (PEMs) should be readable and understandable for the public. The National Institutes of Health, the American Medical Association and the US Department of Health and Human Services recommend that internet-based PEMs should be written at or below the sixth-grade level. Unfortunately, several studies investigating the readability of health information resources have shown that many of them are written above the recommended level. Parents seek online information that is easy to access, up to date and generally from a hospital website. It has been shown that 1 in 8 parents first search the internet before going to a hospital emergency room. This finding shows that it is important to provide readable and understandable online health information. Especially for parents concerned about their child's health, understanding PEMs is important to lower anxiety, avoid unnecessary hospital admissions and reduce the risk of underestimating serious conditions. However, we did not find any study assessing internet-based PEMs on child pain, which is a very important public health issue. We hypothesized that the readability, quality and reliability of internet-based PEMs related to pain in children were higher than the recommended levels. To test this hypothesis, we evaluated these parameters for internet-based PEMs related to child pain.
We designed this cross-sectional, observational study to evaluate the readability and quality of internet-based PEMs about child pain. After approval by the Dokuz Eylul University Hospital Noninvasive Studies Ethics Committee (2024/08–10), the terms "Child Pain," "Pediatric Pain" and "Pain in Children" were searched on the Google search engine ( https://www.google.com ) on February 28, 2024. Google was used because it is the most widely used search engine. To avoid the search results being affected, the web browsing history was erased and cookies were deleted before the search, we signed out of all Google accounts, and Google Incognito mode was used. The uniform resource locators (URLs) of the websites were recorded. Websites that were not related to pain in children, carried advertisement markings, required subscription or registration, or were not in English were excluded, as were scientific papers that did not include patient information, sales sites without a patient information section, websites consisting of video or audio content without text, and duplicate websites. Video content, figures and tables, pictures, punctuation marks, website URLs, references, and telephone, address and author information in the text were also excluded while evaluating the texts in order to prevent false results. If no evaluation criteria were present on the home page of a website, the 3-click rule was used: the user should be able to reach the information within 3 mouse clicks, and it is thought that users leave a website unless they reach the information within 3 clicks. The readability and quality were evaluated by 2 independent researchers (V.H., I.E.) using appropriate measurement tools. The data were evaluated independently by the 2 researchers, after training in how to use the measurement tools, to avoid bias.

2.1. Website typology

Websites were classified according to their ownership and type. The URL extensions (.com, .org, .edu, .gov and .net) were recorded and assessed. The typologies of the websites were classified as society/professional, health-related, government, hospital, news websites and others.
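The URL-extension part of the typology step can be automated with a small helper such as the one sketched below. The example URLs are invented, and the ownership categories (society/professional, hospital, news and so on) would still be assigned manually.

```python
from urllib.parse import urlparse

def url_extension(url: str) -> str:
    """Return the top-level-domain extension recorded for a website."""
    host = urlparse(url).netloc.lower()
    for ext in (".gov", ".edu", ".org", ".net", ".com"):
        # endswith handles plain hosts; the second test handles country suffixes (e.g. .gov.tr)
        if host.endswith(ext) or f"{ext}." in host:
            return ext
    return "other"

urls = ["https://www.example-hospital.org/child-pain",
        "https://news.example.com/health/pediatric-pain"]
print({u: url_extension(u) for u in urls})
```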
The rank value of each website was recorded and evaluated using "Blexb" ( https://www.blexb.com ).
The Journal of the American Medical Association (JAMA) benchmark criteria were used to evaluate the websites. They assess 4 items (authorship, attribution, disclosure and currency), and the presence of each item adds 1 point to the score. As in previous studies, the JAMA criteria, the modified DISCERN tool and the Global Quality Score (GQS) were used to assess the quality of the websites. The modified DISCERN tool includes 5 questions addressing the quality of the website, and each "yes" answer scores 1 point, with higher scores indicating higher reliability and quality. The GQS is a 5-point scale on which 1 point indicates poor quality and 5 points indicates excellent quality.
The texts of the websites were transferred to Microsoft Office Word 2007 (Microsoft Corporation, Redmond, WA). To analyze readability, as in previous readability studies, all punctuation marks except periods were removed. The texts were then evaluated using " www.readability-score.com ", and readability was assessed with the Flesch Reading Ease Score (FRES), Flesch-Kincaid grade level (FKGL), Simple Measure of Gobbledygook (SMOG), Gunning Fog (GFOG), Coleman-Liau score (CL), automated readability index (ARI) and Linsear Write (LW) formulas. All formulas were applied to every PEM in our study; they are the most widely used formulas for assessing the readability of texts and are commonly used in similar articles. The FRES formula uses the ratio of total words to total sentences and the ratio of total syllables to total words. The FKGL analyzes sentence length and word complexity; longer sentences and words with more syllables indicate a higher difficulty level. The SMOG index focuses on polysyllabic words, which often indicate complex vocabulary and sentence structures. The GFOG formula calculates a grade level based on average sentence length and the percentage of complex words. The CL score focuses on the average numbers of letters and sentences, assuming that longer words and longer sentences are more difficult to read. The ARI assesses the average number of characters per word and the average number of words per sentence. The LW formula was created by the US Air Force to evaluate the readability of its technical manuals; it calculates the grade level of a text from sentence length and the number of words with 3 or more syllables. The readability scores were compared with the sixth-grade readability level, which corresponds to a FRES of ≥60.0 and a grade level of <7 for the other formulas.
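To make these formulas concrete, the short sketch below computes two of them (FRES and FKGL) from raw word, sentence and syllable counts. This is an illustrative Python implementation, not the code used by www.readability-score.com, and the syllable counter is a rough vowel-group heuristic, so its scores may differ slightly from those of the published tools.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels; assume at least one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    return {
        # Flesch Reading Ease: higher = easier; >=60.0 corresponds roughly to sixth-grade level or easier
        "FRES": 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word,
        # Flesch-Kincaid Grade Level: approximate US school grade required to read the text
        "FKGL": 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59,
    }

print(readability("Pain is common in children. Tell your doctor if the pain does not go away."))
```

Under the thresholds used in this study, a text would need a FRES of at least 60.0 and a computed grade level below 7 to be considered readable at the sixth-grade level.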
The contents of the PEMs about child pain were evaluated for coverage of the following topics: definition, pathophysiology, incidence, epidemiology, risk factors, diagnosis, treatment, pain scores, morbidity, mortality, complications and prevention.
SPSS for Windows version 25.0 (SPSS Inc., Chicago, IL) was used for statistical analysis. Frequency data are given as numbers (n) and percentages, and continuous data are presented as median (minimum-maximum). Frequency variables were compared with the chi-square or Fisher exact test. Continuous data were assessed with the Kruskal-Wallis or Mann-Whitney U test. Correlation analysis was performed with the Spearman correlation coefficient. A P value of <.05 was considered significant. A Bonferroni adjustment was applied in multi-group analyses, with the significance threshold adjusted according to the number of groups.
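As an illustration of the test choices described above, the following minimal sketch shows how comparable analyses could be run with SciPy on hypothetical data; the original analysis was performed in SPSS, and the group values below are invented for the example.

```python
from scipy import stats

# Hypothetical readability grade levels grouped by website typology
hospital = [8.1, 9.4, 7.8, 10.2, 8.9]
health_related = [7.2, 8.8, 9.1, 8.0, 7.9]
government = [11.0, 9.9, 10.4, 8.7, 9.6]

# Kruskal-Wallis test across more than two independent groups
h_stat, p_kw = stats.kruskal(hospital, health_related, government)

# Mann-Whitney U test for a pairwise comparison (e.g., first 10 vs. remaining websites)
u_stat, p_mw = stats.mannwhitneyu(hospital, government, alternative="two-sided")

# Spearman correlation, e.g., between search rank and a quality score
rank = [1, 2, 3, 4, 5]
jama_score = [3, 2, 2, 1, 1]
rho, p_rho = stats.spearmanr(rank, jama_score)

# Bonferroni-style adjustment: divide the overall alpha by the number of comparisons
n_comparisons = 3
adjusted_alpha = 0.05 / n_comparisons
print(p_kw < adjusted_alpha, p_mw, rho)
```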
The Google search returned approximately 1.5 million results. The records of the first 167 websites were included in our study; after the first 167 websites, duplications began. Twenty websites (12%) containing scientific materials, 6 (3.6%) unrelated websites, 4 (2.4%) commercial websites, 4 (2.4%) websites without text and 36 (21.6%) websites that could not be reached were excluded, and the 96 websites that met the inclusion criteria were analyzed. The Blexb rank values of the first 10 websites did not differ from those of the other 86 websites ( P = .891), and we could not find any difference between the typologies of the first 10 and the other 86 websites (Fig. ). When the quality of the first 10 websites was compared with the others, we found a statistically significant difference in GQS score ( P = .02); similarly, there was a significant difference in the distribution of GQS quality scores between the first 10 websites and the others ( P = .02). The JAMA and DISCERN scores of the first 10 websites were similar to those of the other 86 websites ( P = .435 and P = .282, respectively), and the distribution percentages of JAMA and DISCERN scores did not differ ( P = .761 and P = .397, respectively). The reading levels of the first 10 websites were not different from those of the other websites ( P = .645). When readability was compared between the first 10 and the other websites, there was no difference in ARI ( P = .769), FRES ( P = .606), GFOG ( P = .475), FKGL ( P = .598), CL ( P = .905), SMOG ( P = .513) or LW ( P = .589) scores (Table ). Treatment (n = 8), diagnosis (n = 7) and complications (n = 5) were the most common contents of the first 10 websites, and we could not find any difference between the contents of the first 10 websites and the others ( P > .05) (Table ).
Of the 96 websites included in our study, hospital websites were the most common typology (n = 44, 45.8%), followed by health-related websites (n = 18, 18.8%) (Table ).
For the 96 evaluated websites, the readability scores were: FRES 64 (32–84), GFOG 11 (7.1–19.2), ARI 8.95 (4.67–17.38), FKGL 8.24 (4.01–15.19), CL 10.1 (6.95–15.64), SMOG 8.10 (4.98–13.93) and LW 8.08 (3.94–19.0) (Table ). The readability of all 96 websites was assessed with all 7 formulas. The readability grade level of the websites was significantly higher than the sixth-grade reading level with all formulas ( P = .011 for FRES, P < .001 for GFOG, P < .001 for ARI, P < .001 for FKGL, P < .001 for CL and P < .001 for SMOG) except the LW formula ( P = .112) (Table ). We found a statistically significant difference between website typologies only with the ARI formula ( P = .008) (Table ) (Fig. ).
The median JAMA score for the 96 websites was 1 (0–4), the median GQS score was 3 (1–5) and the median DISCERN score was 2 (0–5). The GQS score was ≤3 for 75 (78.2%) of the 96 websites, the JAMA score was ≤2 for 86 (89.5%) and the DISCERN score was ≤2 for 64 (66.7%), indicating moderate-to-low reliability and quality. We could not find any difference between website typologies in GQS or DISCERN scores, but there was a statistically significant difference in JAMA score ( P = .009) (Table ) (Fig. ). Ranked by JAMA score, website quality was ordered as follows: health-related > society/professional organization > hospital > news > other > government websites.
We found a weak negative correlation between the JAMA score and the Blexb score ( P = .013) but no correlation between the Blexb score and the GQS or DISCERN scores ( P > .05). A weak correlation was found between the LW score and the Blexb score ( P = .043); no correlation was observed between the Blexb score and the other readability scores ( P > .05) (Table ).
Each topic related to child pain was assessed in the content analysis; the frequencies were: definition 38.5%, pathophysiology 10.4%, incidence 11.5%, epidemiology 22.9%, risk factors 26.0%, diagnosis 68.8%, treatment 72.9%, pain scores 22.9%, complications 24.0%, morbidity 9.4% and prevention 19.8%. Mortality was not mentioned on any of the websites. We did not observe a significant relationship between website typologies and contents ( P > .05) (Table ).
We found that the readability grade level of internet-based PEMs about child pain was significantly higher than the recommended sixth-grade level. We also showed that the reliability and quality of most PEMs were low to moderate. Hospital websites were the most frequent typology, and there was no difference in typology or readability between the first 10 websites and the others.

Pain is an unpleasant feeling that can be associated with tissue damage, and it can be acute or chronic in nature. In Germany, the prevalence of pain has been reported as 64.5% for children between 3 and 10 years of age and 77.6% for those aged 11 to 17 years, and the incidence of chronic or recurrent pain has been reported to be between 11% and 53%. These results indicate that more than half of children suffer from acute or chronic pain. A recent review summarizing the epidemiologic studies found that one-third of children had stomach aches and that 20% of children received inadequate pain relief after surgery. Pain in childhood has been shown to impair cognitive and social development, and the experience of pain can be stressful and even traumatizing for the child and parents. Anxiety, depression and low self-esteem can arise as a result of pediatric pain, and pain is also a source of concern and desperation for the child and parents.

Nowadays the internet is the primary source of health information for many individuals. It has been shown that 90% of people living in America use the internet and that health-related topics constitute 72% of internet searches; nearly 6.5 million Google searches per day are for health information. For anxious parents searching the internet for information about their child's pain, internet-based PEMs should be easy to read and understand; otherwise, despite easy access to health information, PEMs can cause confusion and anxiety. Especially for children suffering from chronic pain, improving parent-related factors is one of the keys to improving child functioning. When parents show higher levels of protective behaviors, children become more disabled. Parent behaviors are usually related to their experiences and the information they have, and a recent meta-analysis demonstrated a correlation between parental anxiety and child pain and disability. It is understandable that a misinformed parent would be more anxious and protective. This study showed that PEMs about child pain are difficult to read and of moderate quality, which could easily result in misinformed parents. Therefore, internet-based PEMs, the easiest way for parents to access information, should be easy to read and understand.

Although the number of internet-based PEMs is increasing because of high demand, their readability and quality vary. Moreover, the understandability of PEMs depends on the reader's health literacy, defined as the ability to read and understand health information and to use it when making health-related decisions. The prevalence of adequate health literacy in the American population has been reported to be 12%. If readers cannot understand the material, the information and accuracy of the website do not matter. When the patient in question is a child, parents have to make health-related decisions on the child's behalf, which further increases their anxiety. As we determined in our study, PEMs about child pain are difficult to read, so it is questionable whether these PEMs can help an anxious parent make health-related decisions.
This is an important issue for children's health and for the healthcare sector, because a misinformed parent may go to the emergency department for pain that does not require it or stay at home when urgent care is needed. Readability is an objective measurement of how understandable written material is. Several formulas are available to evaluate readability, but it is unknown which are best suited to health-related materials. Therefore, in our study, we used 7 different formulas to evaluate the internet-based PEMs about child pain and their estimated grade levels. The recommended readability level is the sixth-grade level; the United States Department of Health and Human Services considers material below the sixth-grade level "easy to read," seventh- to ninth-grade level "average difficulty" and above ninth-grade level "difficult." This study showed that most internet-based PEMs about child pain are above the sixth-grade level according to all readability formulas except the LW. Similar to our findings, Arslan et al investigated the readability of PEMs about chest pain in children and found that their readability level was higher than the sixth-grade reading level. Our study, which included PEMs about pain in children in general, indicates that PEMs about child pain are far from understandable for families.

Another concern about internet-based PEMs is their quality and reliability. Eysenbach et al reviewed studies evaluating PEMs and found that 70% of the studies reported a problem with the quality of the websites. In this study, similar to previous studies, we found that more than two-thirds of the websites about child pain had low to moderate quality and reliability according to the JAMA, DISCERN and GQS scores. Nearly half of the PEMs in our study were hospital websites, unlike previous studies, which found that commercial websites were the most numerous. According to our findings, the readability of the PEMs was significantly higher than the sixth-grade level with the ARI formula regardless of the typology of the PEM. We also assessed quality according to website typology. Unlike other studies, we could not find a difference between websites according to typology with the DISCERN and GQS scores, but there was a significant difference with the JAMA benchmark criteria, which showed that health-related websites had the highest quality and government websites the lowest. Similar to our findings, Arslan et al reported that physician and health information websites had the highest JAMA scores. As mentioned above, the readability of the websites did not differ, so even though health-related websites had high quality according to JAMA scores, we believe they cannot provide sufficient information for patients or families with low health literacy.

Google statistics indicate that fewer than 10% of users advance to the second page of search results. Therefore, we separately evaluated the first 10 results for readability and quality and compared the findings with the other 86 results. The readability of the first 10 websites was above the eighth-grade level, and there was no significant difference from the other websites. Kocyigit et al and Bagcier et al similarly could not find a difference between the first 10 and other websites; in contrast, Basavakumar et al reported that the first 10 websites were more readable than the others. The GQS score showed that the quality of the first 10 websites was higher than that of the other websites.
An interesting finding of this study was that we could not find a difference between the rank scores of the first 10 and the other websites; this may be a result of the low readability and quality of the first 10 websites. The most common contents of the PEMs about child pain were "diagnosis" and "treatment." This is an understandable finding, since the topics of greatest interest regarding pain are its cause and how to treat it.

This study has several limitations. The first is unavoidable and relates to the nature of the internet: the internet is a dynamic information source, so rankings and search results may change. The second limitation is that we only searched for websites in English; including all languages might yield different results. The third, also unavoidable, limitation is that even though we cleared the browsing history and cookies, the search results could still have been influenced by our geographic location, and the same keywords could return different websites in different regions of the world. Another limitation is the evaluation of readability with computer-based formulas, which may overestimate the readability of the evaluated material. Although many studies assess the readability of PEMs with computer-based formulas, the ideal formula for evaluating the readability of internet-based PEMs is unknown; we used 7 computer-based formulas to reduce the risk of misleading results. The last limitation is that we used only general search terms about child pain, such as "Child Pain," "Pediatric Pain" and "Pain in Children." We did not use keywords for specific pain types, such as headache or postoperative pain, because we wanted to include all kinds of pain that children may suffer and to evaluate the PEMs about them; our aim was to assess the readability and quality of PEMs about pain in childhood. Further studies could be planned to evaluate PEMs about specific pain types in childhood.

In conclusion, we evaluated the readability and quality of internet-based PEMs and showed that the reliability and quality of the websites were moderate to low and that the readability grade level was higher than the suggested sixth-grade level. We also found that the rank of the first 10 websites did not differ from that of the other websites, which we believe could be a result of the low readability and quality of the websites. Our findings show that the PEMs that parents consult for information about their child's health are difficult to read and understand. We believe that website editors should consider the readability and health literacy of the general public when preparing PEMs; this would help parents find satisfactory information about their child's health and support their decision-making. Further studies may be needed to explore the readability of PEMs about specific pain types that may affect a child's well-being.
Conceptualization: Elvan Ocmen, Volkan Hanci. Data curation: Elvan Ocmen, Ismail Erdemir, Volkan Hanci. Formal analysis: Volkan Hanci. Investigation: Elvan Ocmen, Ismail Erdemir. Methodology: Elvan Ocmen, Volkan Hanci. Writing – original draft: Elvan Ocmen. Writing – review & editing: Hale Aksu Erdost, Volkan Hanci.
Why Men Have Abortions: Quantitative and Qualitative Perspectives From Urban Family Planning Clinics in Chicago, Illinois, USA

Based on survey data from the Pew Research Center in 2022, most adults in the United States agree that abortion should be legal in all or most cases. Notably, support for abortion is not markedly different between U.S. men and women (58% vs. 63%). Survey data from the National Opinion Research Center in 2022 reported that approximately half of men disapproved of the Supreme Court of the United States' decision to overturn Roe vs. Wade and reverse federal protections for abortion. Nevertheless, men have seldom responded to legal threats to abortion access in the same way that women have, possibly because abortion remains a comparably abstract issue for men. Based on the frequency with which it occurs, abortion should not be an abstract idea for men. Approximately one out of five reproductive age men in the United States reported involvement in an abortion based on data from the National Survey of Family Growth. Several studies remark on men's attachments to pregnancy decision-making and varying narratives on how abortion can fulfill or frustrate their perceived roles as men. Societal labeling of abortion as a "woman's issue" and the stigmatization of abortion may prevent some men from disclosing and talking about their involvement in an abortion, thereby contributing to misguided perceptions that abortion is uncommon or of lesser importance in men's lives. As women are most impacted by having to carry an unwanted pregnancy, those who choose abortion are frequently able to cite several reasons for their decision, not limited to feeling financially unprepared, wanting to prioritize their education/career, having to prioritize their current family, and being concerned about the state of their relationship. Some women's decisions to have an abortion may run counter to the parenting preferences and preparedness of their male partner or may stem from a poor relationship with their male partner. Faced with the potential loss of role, masculinity, and relationship, some male partners may face difficulty understanding their female partner's reasons for the abortion, may misdirect their frustration toward abortion, and may overlook the ways in which the abortion could positively impact their own lives. For example, one longitudinal study of adolescent men involved in a pregnancy noted that men whose pregnancies ended in an abortion were more likely to graduate from college. While men may share similar reasons for wanting an abortion as their female partners, these reasons are seldom identified and discussed. Few studies directly query men's attitudes about their partner's abortion and their reasons for ultimately supporting the decision. Regardless of their preference for abortion, characterizing men's independent reasons to support an abortion may broaden men's perspectives on abortion, help society move beyond labeling abortion as a "women's issue," and facilitate men's involvement in abortion advocacy. We thus conducted a mixed methods study of male partners in the abortion clinic setting to better characterize their reasons for pursuing and/or supporting an abortion.

Study Design and Data Analysis: We conducted a secondary analysis of data collected in a mixed methods study of men recruited from the waiting rooms of two abortion-providing clinics in Chicago, Illinois.
Details on the design and conduct of the study are available in previous publications (Newton et al., 2020; Nguyen et al., 2018). In summary, all English-speaking male partners above the age of 18 years who were present in the clinic waiting rooms were screened and invited to participate. All male partners self-identified as biologically involved in the pregnancy being aborted. Thirty in-depth interviews were conducted from April to August of 2015, exploring men’s abortion experiences and attitudes. Findings from the interviews informed the development of a subsequent survey given to 210 men from February to May of 2016. This secondary analysis focuses on characterizing and estimating the prevalence of men’s reasons for abortion, and their influence on men’s preference for and satisfaction with their female partner’s abortion decision. Regarding the qualitative portion of the study, interviews were conducted by B.T.N., an abortion-providing physician who introduced himself as a research assistant with no clinical role. Interviews followed a semistructured interview guide. Interviews lasted approximately 1 hr and were conducted on the phone and in person per interviewee availability. Interviewees received a $40 gift card upon completion. For the quantitative portion, respondents completed the survey in private, using a tablet computer. They received a $10 gift card upon completion. The survey included a qualitatively informed, prepopulated list of 14 reasons for abortion from which men chose as many responses as applied (e.g., not wanting any(more) children, wanting to wait longer before having another child, doubts about the partner/relationship, needing to focus on career/education, partner needing to focus on her education/career, unable to afford a baby right now, “it’s her decision no matter what”). Men’s desire for abortion was assessed with two 4-point Likert-type items; one asked if participants would choose abortion if entirely their decision, and the second asked how much participants wanted to continue the pregnancy. The survey additionally included demographic factors (e.g., age, race/ethnicity, education level, employment status) and reproductive characteristics which were examined for associations with abortion decision preference and satisfaction. Reproductive characteristics included relationship status (just met, going out occasionally, steady/living together, engaged/married), number of pregnancies in which the respondent had been involved, number of children, history of prior abortion, and estimated gestational age of the pregnancy at the time of abortion (<13 weeks, >13 weeks, don’t know). In-depth interviews were analyzed independently by B.T.N. and J.H. for themes arising both deductively and inductively . Deductive analysis was performed via coding data based on motivation (reasons) for abortion. Inductive analysis was performed via reflexivity journals which resulted in coding patterns of engagement and disengagement from decision-making processes. Survey data are presented via simple descriptive statistics where respondent abortion and pregnancy attitudes were collapsed into binary responses. We then conducted bivariate analyses (i.e., chi-square) to examine for associations between respondent characteristics and their preference for abortion and desire to continue the pregnancy. 
We additionally created a measure of concordance between men’s desires to continue the pregnancy with their perception of their female partner’s desires to continue the pregnancy, examining the association of concordance and decision satisfaction via bivariate analysis. Associated characteristics and reasons for abortion were included in a logistic regression model evaluating independent associations with abortion decision desires and decision satisfaction. All quantitative analyses were performed with Stata version 13.1; qualitative analyses were performed using ATLS.ti version 9.0. We present the qualitative data after the quantitative data to offer additional dimension to the survey findings. Quantitative: Survey ResultsQualitative ResultsOf 318 men screened, 46 did not meet eligibility criteria and 60 declined to participate. Of those declining to participate, 35 did not give a reason, 7 reported discomfort with participating or talking about the situation, and 18 were not interested. Two respondents later asked to retract their data because their female partner did not want them to participate. The analytic sample included 210 men (response rate = 77.2%). With respect to preference, 41% of respondents would not (probably or definitely not) have chosen abortion if entirely their decision, and 59% would have chosen abortion (probably or definitely) if entirely their decision. 12% of men highly desired to continue the pregnancy, 55% somewhat or minimally wanted to continue the pregnancy, and 34% of men did not want to continue the pregnancy at all. 10% of men were dissatisfied or very dissatisfied with the decision to abort, 44% of men reported that they were neither satisfied nor dissatisfied with the decision, and 46% reported they were satisfied or very satisfied with the decision for abortion. Overall, neither demographic characteristics nor reproductive history demographics were associated with men’s desire for abortion , with two exceptions: a history of a prior abortion was associated with higher likelihood of not choosing abortion if it was entirely the man’s decision, and the desire to have more children was significantly associated with the desire to not continue a pregnancy . Desire to continue the pregnancy was significantly associated with satisfaction with the decision for termination. 69% of those very satisfied with the decision for abortion also stated they did not at all desire to continue the pregnancy, and of the 11% of respondents who highly desired to continue the pregnancy, 86% reported being either unsatisfied or very unsatisfied ( p < .01). 75% of men reported a concordant desire with their partner to continue or end the pregnancy, with 25% reporting discordant pregnancy desires. There was moderate correlation between men’s desire to continue a pregnancy and their perception of their female partner’s desire to continue a pregnancy ( R 2 = 0.44). There was a significant relationship between a man’s perception of concordant desire and satisfaction with the decision for abortion: among those who reported they were satisfied or very satisfied, 83% perceived concordant desires with their female partner. Conversely, among those who reported they were unsatisfied or very unsatisfied, 67% perceived discordant desires for the pregnancy ( p < .01). Most participants (97.5%) chose more than one reason for abortion from a prepopulated list. 
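For readers who want to see how the concordance variable and the regression described above might be constructed, the sketch below uses pandas and statsmodels on a few invented survey rows; it is not the authors' Stata code, and every variable name and value is hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: 1 = wants to continue the pregnancy / satisfied, 0 = not
df = pd.DataFrame({
    "man_wants_pregnancy":     [0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0],
    "partner_wants_pregnancy": [0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1],  # as perceived by the man
    "satisfied_with_decision": [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0],
    "prior_abortion":          [0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
})

# Concordance: the man's desire matches his perception of his partner's desire
df["concordant"] = (df["man_wants_pregnancy"] == df["partner_wants_pregnancy"]).astype(int)

# Logistic regression of decision satisfaction on concordance plus a respondent characteristic
model = smf.logit("satisfied_with_decision ~ concordant + prior_abortion", data=df).fit(disp=False)
print(model.params)
```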
On average, men selected six reasons, which did not significantly differ when accounting for whether they would have chosen to terminate the pregnancy if the decision was their own. Most common reasons included not the right timing (80.7%), interference with current opportunities for the respondent or their female partner (74.6%, 79.0%), and financial unpreparedness (71.9%). Almost one quarter of respondents reported uncertain paternity as a reason for abortion. Notably, 68.5% of men selected that it was their partner’s decision regardless of their opinion. lists survey results of reasons that men chose abortion by desire for abortion. Desire for abortion was not associated with any of the listed reasons for abortion, with the exception that concern about what family/friends would think was significantly associated with not desiring to continue the pregnancy. There was no association between the number of reasons picked for abortion and satisfaction with abortion. The majority of interviewees were non-Hispanic Black ( n = 13), followed by non-Hispanic White ( n = 8), Hispanic ( n = 4), other ( n = 4), and Asian ( n = 1). Thematic analysis further characterized the reasons for abortion, including most chosen answers of financial difficulties, interference with other opportunities, having other dependents, and being “not ready.” Often these reasons overlapped and intertwined, as highlighted in one interviewee’s response: I want to get our lives right before we bring another one into it. Right now, our lives are not right. She does a little work, I do my little side jobs—to try and deal with a child at the same time, it’s not going to work out for us. Can’t pay for a sitter while I’m trying to pay for rent, phone bills. We are just weak. (long-term partner, age 25, supportive, abortion-neutral) Financial InstabilityInterference With Other OpportunitiesCommitment to Other DependentsWrong Time/Not ReadyComplex Partner Involvement: Shared Decision-Making Versus Deference to Female PartnerTension in Decision-Making Almost all interviewees expressed that there was some degree of difficulty in making the decision to terminate a pregnancy. Men called the decision “extremely difficult,”“really hard,”“not an easy thing,” and “10/10 in difficulty.” One man called the experience “traumatizing”, and one interview concluded with the interviewee in tears, stating: “This is not something I enjoy talking about. It doesn’t bring me great joy at all” (long-term partner, age 21, supportive, anti-abortion). Yet in the midst of expressing difficult decision-making, many men expressed satisfaction with their decisions. For example, the partner who described the experience as traumatizing also arrived at this conclusion: These are things that you can sort of get over. The way I thought about it, yes it is a traumatizing, painful experience. I don’t wish it upon anybody. [But] iI am not just like, “It’s over we’re done.” It’s still a rough thing, it’s still something that happened and I have to live with. I know I can walk past it so that when the time comes to have a child I can be happy and not be forced into it. (single, age 19, supportive, pro-abortion) Another man similarly expressed: The main thing that I was saying is, “It’s tough now, but we’ll move past this. We just got to do, what we got to do.” We always will look back at it and see how tough it was. In the end, like I said, we realized that’s what was best. 
(committed partner, age 30, supportive, anti-abortion) A common theme in interviews was the financial difficulty of having a(nother) child. Men expressed difficulty providing for a(nother) child, particularly an inability to provide for current children simultaneously, as reason to pursue abortion: That’s only the hard thing to do because we already had kids, and I love them, but right now financially I know for a fact that we can’t fend for another child. (committed partner, age 29, supportive, pro-abortion) Men also expressed that finances were a barrier to child-rearing despite desires for more children: I’ll be happy raising the kid; I’m just not at that financial stage yet. Everything else, mentally, physically I am, but the financial stage I’m not. If I had at least more money, if I was financially ready, I would have went totally against [the abortion], but I’m not financially staged for it. It was out of the question. (long-term partner, age 18, supportive, against abortion) Other interviewees used phrases like: once we got better jobs, once I am financially ready , later on down the line when we got jobs , and one day it’ll happen , I’ll be financially able. Together, these quotes indicate that for many men, financial instability was a temporary yet important reason to pursue abortion. Having a(nother) child interfered with men’s current or next opportunities, namely education and employment opportunities. Interference with education was a major focus for men: I wanted to finish college first at least, not work minimum wage job at some retail store like I’ve been doing since I was 18. I don’t want my child on a stroller on a bus, on the CTA. I want a less anxiety lifestyle especially if I’m going to be a parent. (long-term partner, age 22, supportive, anti-abortion) Men were also concerned with the impact a(nother) child would have on their job opportunities, saying “I was just barely started on my job” and “we’re just both in a climb right now” about his a female partner. The same interviewee as above concluded: I’m wrapping up a college degree. I have tuition to pay off. I have a job to get started, and I’m trying to take my life off and have a career. There’s just no way I can take care of another life. I’m just barely starting mine. Men cited the number of children they have or care for, ages of their children, and activities their children partake as reasons not to have a(nother child). One man explained his emotional processing of the abortion in relationship to his commitment to his other children: We’re moving on because we’ve got kids. I knew that it would be ridiculously tough on us if we had another one. The decision was difficult, but I knew it was something that we had to do. (committed partner, age 30, supportive, anti-abortion) Time was an additional resource that men reported needing to devote to their current dependents and undergirded their reasons for abortion. This was best described by one male partner in relationship to his current children’s activities: To add a child to what we’re both trying to accomplish right now, it’s a lot. It’s too much to try to add. Our two boys are in football, basketball, baseball. My daughters are in gymnastics, cheerleading, swimming. We just don’t have the time. For me, that’s basically what it comes down to. (long-term partner, age 31, supportive, pro-abortion) Men cited multifactorial reasons for abortion, which corroborated the survey findings of a mean of six reasons elected for abortion. 
These multifactorial reasons contributed to an overall sense that the unintended pregnancy came at the “wrong time” to have a(nother) child. Interviewee language revolved around the idea of not being ready for a(nother) child: I was stressed about it because I already didn’t want any more children right now, because I’m not established in life where I want to be. I would like another child but not now the money, the cost of it. I guess we decided what was right. That’s for the time being. (long-term partner, age 29, supportive, pro-abortion) As in the above example, being “not ready” was often couched in the disclaimer that men may desire a child in the future but not at the current moment. One man articulated that though desiring children in the future, currently a child would cause unhappiness: If we were to keep the baby, it would be forcing me into a life that I don’t necessarily want. I don’t want to be forced into this life and regret it for the rest of the time and not be happy. (single, age 19, supportive of decision, pro-abortion) Age, whether too young and too old, was additional timing reason for abortion. Some men expressed feeling too young to have a child, expressing concerns like, “I can’t even teach my child, because I’m too young” and “we need to get a little more wisdom in our systems before we do all that.” On the other end of the spectrum, an older interviewee said, “Lockdown for another 18 years, what’s it going to be like at almost sixty years old trying to take care of teenagers?” (long-term partner, age 40, supportive, pro-abortion) Men articulated a variety of roles in the decision for abortion which ranged from heavily involved in decision-making to deferring the decision to their female partner. Many men articulated significant involvement in decision-making. Phrases like “we both said we agreed,” and “we’re at peace with where we’re at,” or “coming together on the same page,” were all used by men to describe their mutual roles and decision concordance with their female partners. Several men articulated the dynamic yet mutual process of decision-making: We sort of played out both scenarios, what would happen if we did and didn’t keep it. Ultimately, I told her that I was supportive of her, and I wanted to make this decision together. We sort of talked about it over the next couple of days. That’s ultimately the way we ended up going. (short-term partner, age 30, supportive, pro-abortion) In this way, men situated themselves as equal shareholders in the decision, seeing their role as a mutual decision maker. This was true both when men expressed concordance with their partner in terms of desires, but also when the desires were perceived to be discordant: I wanted to have a baby, she didn’t want to have it. I didn’t get mad at her. I didn’t say, “Oh you want to kill my baby?” I didn’t do all that. We sat down and talked. “I want to have a baby, you don’t want to have the baby. Let’s just cut it loose right now and we can try again.” We young, we got our whole life to live. We got a lot of time ahead of us. (long-term partner, age 32, supportive, anti-abortion) One male partner was able to situate his role as a mutual decision maker while also acknowledging the nuance of the unequal burden on his female partner’s body: I feel, ultimately, it’s her [decision] because it’s her body and she’s the one that has to deal with being sick. She’s the one going through it herself. 
I feel in the end it was her decision, but I think that when we actually made the decision it was a mutual decision, for sure. (long-term partner, age 23, supportive, pro-abortion) Some men not only articulated involvement in the decision, but saw themselves as the ultimate decision maker in relation to their female partner. When asked whose decision the abortion was ultimately, several men stated that it was their decision even more so than their female partners: In a way it’s ultimately my decision because she is scared. (single, age 19, supportive, pro-abortion) I would say that it was my decision. She really wanted to have it. It’s also half her decision. I would definitely say that it was more swayed in my opinion. (long-term partner, age 22, supportive, anti-abortion) Only one man expressed significant sadness over the decision for the abortion, particularly that he didn’t share more of his opinion at the time of the decision which he attributed to his style of conflict management with his partner. He ultimately concluded that his role should have included more control: “Not stepping up and changing her mind about getting the abortion—that’s something I could have done” (long-term partner, age 24, not supportive, anti-abortion). Other men were less involved in the decision-making process. Few men felt entirely excluded from the decision-making process, but many men deferred all decision-making to their female partner, regardless of their desires for the pregnancy. As evidenced by their presence at the abortion center or willingness to take a call with interviewers, deferral to female partners was not associated with absence from the experience of abortion itself. Men frequently provided logistical, emotional, and financial support, but limited their involvement in the actual decision-making process through expressing deference to their female partner in the decision-making: I think for the most part I was just supplemental. She made the decision on her own, and I agreed with that. From there, she went on her own to find what kind of service she wanted to do, what she was most comfortable with. I agreed with whatever that was. The rest of the decision, she had it all set in place. The rest of the process, I was just there to help her along in any way that I could. (single, age 23, supportive, anti-abortion) The entire time it’s up to her. It’s totally up to her what she wanted to do, and she knows I feel that way, but I could have gone either way with it. But with three kids, it’s a handful, but not so much that it’s undoable, but I think at the end it was other considerations for her about things that she wanted to do. She felt like this would set her back, and so that she made the choice. (committed partner, age 44, supportive, pro-abortion) Deference to female partners happened irrespective of desire for abortion. Despite his unease with the situation, one interviewee expressed his support for his partner’s decision: It really wasn’t my decision. I told her I would support her and I would go with her if she wanted to do that, but it really wasn’t something that I would suggest. I supported her through that, even though it wasn’t a good thing for me to experience. (committed partner, age 27, supportive, anti-abortion) Deference of decision-making to female partners was usually not accompanied by hostile feelings or absence from the decision in our subset of men, even when they disagreed with the decision. 
Instead, most men described a variety of forms of support for their female partners, including buying pads and ibuprofen, providing rides to and from clinics, paying for the abortion, and providing emotional support. Notably, deference to a female partner was sometimes accompanied by articulated repression of emotion. For example, after asserting that the decision was entirely his partner's, when asked about his own emotion one interviewee said, “To be honest, I cut my mind off of it I wasn’t as emotional as her” (single, age 23, supportive, anti-abortion). Other men articulated the need to be “strong” for their female partners, one respondent going so far as, “I’ll sacrifice my emotions to keep her emotions in check” (committed partner, age 37, supportive, neutral on abortion) and another articulating, “Women are a ball of emotions, regardless, you got to try to keep tending on their needs, at the same time not suffocating your own needs” (committed partner, age 25, supportive, neutral on abortion). lmost all men involved in abortions have multiple and interrelated reasons for pursuing abortion. Despite initially not desiring abortion, most men in these Chicago outpatient clinic were not dissatisfied with the decision for abortion, which may in part be explained by their perception of congruence with their female partner, as well as the complex decision-making surrounding termination of pregnancy. Our data show that men are deeply implicated in abortion decisions, which should profoundly shape research, policy, and narratives surrounding reproductive care. Other studies have shown that female reasons for abortion include finance, wrong timing, and interference with future opportunities; women also frequently have multiple reasons for abortion . Our study shows men have similar reasons for pursuing abortion. The benefits of abortion for men are under investigation, though currently the negative effects of unintended pregnancy are better characterized than the benefits of abortion: several studies have shown that men whose female partners continue unintended pregnancy described higher financial burdens than expected, difficulty continuing education, diminished partner relationships, and decreased personal health . One study showed financial benefits to men whose partners had abortions; however, a positive association between abortion and personal income was only noted in men who did not reside with their children during adolescence . Further research is needed to elucidate the full benefit of abortion for men’s quality of life and any correlation to the reasons men pursue abortion. Our data demonstrate that men are emotionally and logistically involved in abortion experiences. Survey responses show a mean number of six reasons given for pursuing abortion, which our qualitative data further revealed were interrelated and overlapping, creating a complex web of motivation for abortion. One of the major findings from this study is that despite initially not wanting an abortion, some men will accompany and support their female partners to an abortion clinic; quantitative survey results indicate that very few of these partners are dissatisfied with the decision for abortion. This seeming contradiction can be partially explained through qualitative data illuminating the complex decision-making process that men undergo in pursuing abortion. 
Though men may initially report not wanting an abortion, they can provide multiple reasons for it and articulate a decision-making process, and most are ultimately not dissatisfied with the result. Finally, our data support prior work indicating that many men provide substantial support at the time of abortion and directly contradicts widely held narratives of men as uninvolved or uninterested in abortion decisions. Men have preferences, life circumstances, and family planning desires that all factor into decision-making surrounding unintended pregnancy, underscoring men’s role as independent subjects in the decision for abortion. As such, men exist as a population with significant potential for mobilization to advocacy in abortion spaces. This study should be evaluated in light of its limitations: namely, that the participants were isolated to a single geographic location, and that the participants included in the analysis accompanied their female partners to the abortion clinic, thus potentially demonstrating a level of support that is different than male partners who either refused or were unable to accompany their female partners to the clinic. Finally, in pursuit of inclusive research and policy, it is important to note the difficulties of studying abortion outcomes. Our data offer analysis of the discrepancy between many men not desiring abortion, yet so few being dissatisfied with the result. Said simply, men say that they do not want abortion, yet their lack of dissatisfaction suggests that they acknowledge that they need abortion. Future projects should account for the fact that despite perhaps not initially wanting an abortion, men decide to have abortions for many reasons and undergo complex decision-making to arrive at that conclusion. Men who accompany their female partners to abortions have multiple, often interrelated reasons for pursuing abortion. While many of them would not initially choose abortion, very few are dissatisfied with the decision for abortion. Our data do not show any relationship between decision satisfaction and demographics or reasons chosen for abortion; however, there was a significant relationship between men’s perceived decision concordance with their female partner and their own decision satisfaction. Qualitative data reveal complex decision-making processes, as well as men articulating support for their female partner regardless of decision concordance or discordance. |
People, power and participation: strategic directions for integrated person-centred care for NTDs and mental health

In this second editorial for the International Health supplement on mental health, stigma and neglected tropical diseases (NTDs), we reflect on how we can capitalise on the positive shift toward global normative frameworks of person-centred care, highlighted by Eaton in this supplement, to shape positive change for persons affected by NTDs. The World Health Organization (WHO) presents five interdependent strategic directions that can support shaping the development of integrated people-centred health systems: empowering and engaging people, strengthening governance and accountability, reorienting models of care, co-ordinating services and creating an enabling environment. These strategic directions are designed to generate a set of actions that can support transforming health systems to enable service delivery that is both integrated and people-centred. Here we consider each of these strategic areas in relation to the articles within this supplement, which we hope will enable a collective journey that is guided by the needs and values of the persons affected and that promotes positive mental well-being for all.
Social collectives/networks of people affected by NTDs are expanding globally, as emphasised by Zaizay et al. and Duck. However, as illustrated by Molyneux and Duck , people affected by NTDs are frequently excluded from the production of scientific knowledge, policy processes and in defining future research and program priorities. This is particularly true for the majority of persons affected by NTDs who live in remote rural areas and may not have the opportunity, skills and/or resources to be able to communicate their needs and priorities in ways that international (e.g. WHO) and national (e.g. Ministry of Health) agenda-setting bodies are able/willing to engage with. Many within the NTD disease management, disability and inclusion (DMDI) community value the role of people affected by NTDs as assets in the co-production of knowledge, health and social care priorities, supporting them to become owners of health services and healthy environments that can support positive shifts in mental well-being through the attainment of person-centred care. We have upheld this value within the production of this supplement. However, in this supplement we deliberately invited people with lived experience to contribute articles, and the consequence has been important insights into the engrained power imbalances that hinder efforts to address complex social and structural causes of morbidity and disability associated with NTDs. Stigmatising experiences that are catalysing the mental health–NTD nexus are described within several articles in this supplement. The different ways in which gender, stigma and physical morbidity interact at the micro/individual level to shape mental health outcomes, access to health services and inclusion within community interventions are clearly illustrated. Until now, few studies in the field of mental health, stigma and NTDs have taken a gendered or intersectional approach to understanding disease burdens, and fewer still have considered how these differences may require tailoring of emerging interventions to enable gender transformative and inclusive approaches across health system levels. Thus, further engagement and prioritisation of people affected by NTDs to understand how community norms and values shape illness experience are critical to attainment of people-centred responses to mental health and NTDs, ensuring interventions are adaptive to the contextualised experiences of suffering at the community, household and individual level. To date, the movement related to NTDs, mental well-being and stigma has largely focused on skin NTDs as a consequence of their profound physical and social impact. Within this supplement, Pedeboy and Masong remind us not to forget other NTDs in our work, emphasising the negative impact female genital schistosomiasis can have on mental well-being, largely driven by gendered social and structural inequities. Harnessing support from persons affected by NTDs and capitalising on the power of local communities is clearly essential in advocating for the rights of people affected by chronic morbidity, disability and mental ill health as a result of NTDs. Patient advocates and support groups are highlighted across this supplement as an essential resource for the NTD community and health systems globally , ; e.g. such groups and individuals could be engaged to provide feedback on national and subnational policies and plans (see Creating an enabling environment).
Good governance within the design and delivery of people-centred health systems promotes transparency in decision making and ensures that all voices are heard and consensus is achieved. , By integrating service delivery, people-centred health systems seek to bring together disparate strategies and priorities of varying donor agencies and vertical programs (e.g. NTD and mental health) to tackle specific health issues within overall systems strengthening. Within their commentary, Ojo et al. reflect on their experiences of integrating NTDs and mental health services in Nigeria resulting from a shared vision across programs that capitalised on a window of opportunity for policy change. Policy change in Nigeria has also been informed by the generation of new context-specific evidence (some of which is presented in this special issue) that emphasises integration of locally designed services that support mental health of people with NTDs is possible at the primary care level. , However, as highlighted by Eaton et al., weak health system infrastructure and minimal investment are barriers to the sustainability and potential effectiveness of such approaches. Health system governance, decision making and resource allocation is often driven by the availability of data and burden estimates. Many articles within this supplement emphasise the high burden of common mental health conditions (e.g. depression and anxiety) among people affected by NTDs, , , representing further progress in this area. However, across these articles, key challenges in ensuring the accuracy of epidemiological estimates are highlighted: literature on burden estimates is still relatively sparse, concentrated in sub-Saharan Africa and India, and focused on skin NTDs (mainly leprosy) ; despite commonality in the tools used (e.g. nine-item Patient Health Questionnaire and seven-item Generalized Anxiety Disorder questionnaire) across contexts, tools need further validation and testing for psychometric equivalence among NTD populations , mental health measurements through time and disease trajectory are required, mental health conditions that may be more relevant to some NTDs (e.g. post-traumatic stress disorder following snakebite is less prioritised) and evaluation of the economic costs of including mental health burden in disability-adjusted life year estimates linked to NTDs is still required. Responding to these challenges to generate more reliable data is an essential foundation for further progress as a research and implementation community. Across many countries, the current participation deficit of civil society and inclusion of people affected by NTDs in current decision-making processes continues to limit external social accountability of the health system. This further compromises the sustainability of locally designed primary care interventions for mental health and NTDs, largely due to the reliance on communities to be active participants in care delivery (see Coordinating services). Thus, alongside the strengthening of burden estimates, we must continue to create equitable partnerships and democratic checks and balances between government actors, civil society organisations and community members to ensure strong governance and accountability processes. As highlighted by Nganda et al., participatory action-based learning approaches to research and intervention design have great potential in strengthening social accountability processes for mental health and NTD service delivery.
Person-centred systems require a reorientation of care models to embody a more holistic understanding of health while prioritising primary and community-level intervention. This is not to completely substitute other levels of care, but rather, effective coordination of services between levels should be established. As emphasised within this supplement and showcased across multiple contexts (Democratic Republic of the Congo, Nigeria, Haiti, Malawi and India), models of care for NTDs and mental health must prioritise a life-course approach and give attention to case finding for the assessment of risk factors, detection of early disease and identification of risk status; disability limitation and rehabilitation; involving and enabling the affected person in managing the condition; provision of psychosocial support for affected persons and provision of long-term follow-up with regular monitoring to promote adherence to pharmacological and psychological interventions. , , , , , Several models of the ways in which life course approaches that promote positive mental well-being of people affected by NTDs can be embedded within health systems are presented in the supplement. For example, Barrett et al. present an ‘enhanced package of self-care’ within post-elimination service delivery for lymphatic filariasis in Malawi and Sadiq et al. emphasise the potential of embedding a ‘chronic disease self-management programme’ within Hope Clubs in Haiti.
The articles in this supplement illustrate that NTDs create a profound disruption in the lives of people affected. Broader social and structural drivers of the relationship between NTDs and mental health are apparent, yet consideration of these macro-level political factors is still emerging and an area for further consideration. Illness experiences often relate to a collection of challenges that can produce, exacerbate and maintain insurmountable disadvantage and exacerbate poor mental well-being. These challenges are often underpinned as a result of complex historical colonial and neo-colonial approaches, as emphasised by Mora et al. in their commentary highlighting the intergenerational impacts of leprosy stigma in Colombia. Thus service coordination and intersectoral collaboration are essential to tackle sociopolitical drivers of stigma and its consequences to ensure an effective continuum of care for people affected by NTDs. A reduction of mental health stigma and provision of psychosocial support has been shown to be most effective when interventions prioritise social contact. , Learning from the studies presented within this supplement suggests that the decentralisation of mental health services that engage communities is essential in achieving sustained contact in the provision of longitudinal psychosocial support. For example, Argawal et al. and Mol et al. showcase promising practice through the training of peer supporters in the use of a basic package of psychosocial support for NTDs. Engagement of persons affected by mental health conditions and their families and caregivers has also been identified as critical when considering the best approaches to expanding access to integrated primary healthcare services and is essential to the development of people-centred responses.
Reorienting health services becomes a political act that challenges existing interests. Justice and a focus on people—not diseases—are key reasons cited for the increased prioritisation of person-centred care for NTDs within recent global policy shifts. This includes the launch of WHO's NTD Roadmap 2021–2030, WHO's guidance document on ‘Mental health of people with NTDs—towards a person-centred approach’ and development of an essential care package for mental health and stigma. These shifts have undoubtably supported the creation of a more enabling environment for systems change and the integration of mental health and NTDs. Collective action between multiple actors, including national NTD programs, donors, non-governmental development organization partners, community health cadres, persons affected and researchers, is a welcome development, and one that should continue to be nurtured if this creative alliance for change is to flourish. This supplement provides policymakers, future donors and collaborating partners with a snapshot of existing evidence that we hope can be used to continue to shape and support decision making that creates an enabling environment for policy, program and system reform. We have emphasised that the needs and values of affected persons and health system actors must be at the fore, which alongside country ownership can enable system reform that has good outcomes, strengthens social accountability processes, responds to national priorities and reflects the values, needs and experiences of people affected by NTDs, their households and communities. In this way, we can see progress toward DMDI strategies that are truly person-centred and address unnecessary, avoidable, unfair and unjust differences in health outcomes for the most vulnerable.
|
Prescribing antidepressants and anxiolytic medications to pregnant women: comparing perception of risk of foetal teratogenicity between Australian Obstetricians and Gynaecologists, Speciality Trainees and upskilled General Practitioners | e8bb19f0-589f-4003-94ee-1e950ba8891e | 7556911 | Gynaecology[mh] | Depression and anxiety are common disorders, however their occurrence during pregnancy has the potential to significantly impact the health and wellbeing of both mother and child . Negative outcomes of mental health disorders in pregnancy include a variety of serious complications. Inadequately treated depression is associated with a substantial risk of maternal, fetal and neonatal morbidity and mortality . In addition to subjective distress, the impact on relationships can be very significant, particularly when attachment to the newborn is disrupted. This may lead to enduring detrimental effects on the child extending into adulthood . Depression also leads to suicide, with it being the second largest cause of indirect maternal mortality in the perinatal period in Australian women . Unclear messages contribute to pregnant women being reluctant to take psychotropic medication, including antidepressants and anxiolytics with many fearing foetal harm . Medical personnel including O&Gs and GPs form an important part of a pregnant woman’s network of information sources during pregnancy and can impact patient decision-making around medications in pregnancy . The Australian clinicians’ own perception of teratogenicity of antidepressants (AD) and anxiolytics (AX) may influence counseling and care of vulnerable women and is largely unexplored. It is, however, likely to align with the international community where perceived teratogenicity is overestimated by physicians of all medical specialties, except psychiatrists . Professional bodies such as the RANZCOG publish statements and recommendations to provide advice on management of perinatal anxiety and depression, serious mental illness and bipolar disorder. The target audience is all health professionals who are engaged in providing maternity and mental health care to these patients . This study hypothesised that differences exist in the perception of risk of teratogenicity of AD and AX medication commonly prescribed to pregnant women, by differing clinicians, namely O&Gs and GPs. It also explored medication counselling and prescription practices, clinician resources and base knowledge of risk of AD and AX when used in pregnancy. Setting and participantsSurvey instrumentSurvey administrationStatistical analysisUtilising the RANZCOG database, current Obstetrics and Gynaecology Fellows, trainees and “GP diplomates” (upskilled General Practitioners with additional qualifications in Women’s Health) were invited to participate in a nation-wide cross-sectional observational study of practices relating to prescription of AD and AX in pregnancy and provided a link to an anonymous ten-minute online questionnaire ( www.surveymonkey.com ) (Additional file ). Participation was voluntary and consent was implied with completion and submission of the questionnaire. The responses submitted by the participants were de-identified. GP affiliates included in the study from New Zealand were virtually unrepresented, as they do not undertake the Diploma and were therefore not captured by this survey. Our novel questionnaire was developed after researching questionnaire design and a directed literature search. 
Feedback was obtained from professional peers on the content and relevance of questions. A small pilot group of doctors ( n = 10) tested the coherence of the questions, and the time frame to complete the questionnaire. The 34 questions were designed to elicit clinician attitudes about AD and AX including their prescription during pregnancy, medication counseling practice, perceptions of the level of patient concern regarding their use during pregnancy and the risk perceptions of the stakeholders who influenced a pregnant woman’s decision making. Demographic data was collected about the clinicians aligned specialty including their proportion of public and private practice, age, training, experience, interest in mental health and educational exposure. Clinician confidence in prescribing, managing adherence issues and perceived adequacy of training to manage depression and anxiety in pregnancy were also surveyed. Questions relating to attitudes and confidence were measured using Likert scales. Similar to published literature, we also included a series of questions to gauge basic AD and AX knowledge . The survey was adminstered through the RANZCOG and a reminder email was sent out 4 weeks after the initial invitation, reminding clinicians of the survey closure date. All data was analysed using the SPSS version 23 (IBM Corp., Armonk, NY). To aid with the interpretation of the questionnaire results, the following collapse of the Likert scale categories was made for Questions 21, 24 and 34: Agree = agree, strongly agree and Disagree = Strongly disagree, disagree and neutral. Categorical variables were summarised by frequency and percentage and continuous variables by mean and standard deviation (SD). Mean differences were reported with 95% confidence intervals (CI). Categorical variables were examined using Pearson Chi-squared test or Fisher’s exact test, where more than 20% of the expected values were less than 5. Continuous variables were checked for normality and examined using the Student t-test. Data was summarised for clinicians overall and separately by O&Gs and GPs. P values for the comparison of O&Gs and GPs were reported, with p < 0.05 considered to be statistically significant. Overall, the RANZCOG database identified 5409 eligible clinicians, all of whom received a standardised invitation by email. This comprised of 2120 Fellows, 769 FRANZCOG trainees and 2520 Diplomates. A total of 545 valid responses were received and submitted for analysis (10.1%), less than the predicted response rate for medical personnel (32.8%) . The response rate for O&G affiliates (12.9%) was consistent with gynaecologist rates from a similar risk perception study by Csajka et al. (13%) . The response rate for GP affiliates was 6.8%. DemographicsInterestPerceptionPracticeConfidenceKnowledgeTraining adequacyThree hundred and seventy-three clinicians aligned with RANZCOG (68.4%) and 172 were affiliated with RACGP (31.6%). The demographic characteristics of the respondents are shown in Table . Seventy-two percent of respondents were trained in Australian medical colleges with 60.9% having over 10 years’ experience in their area of speciality. Twenty-six percent of O&Gs and 12.3% of GPs respondents had not yet attained their fellowship. Majority of the clinicians (98%) saw pregnant women in their clinical practice on a regular basis. Seventy-eight percent of O&Gs spent 11 h or more per week caring for pregnant women compared to 18.7% of GPs. 
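The comparisons described here were run in SPSS; purely as an illustration, the scipy sketch below applies the same decision rules in Python: the stated Likert collapse, a chi-squared test on a collapsed Agree/Disagree item with a Fisher's exact fallback when more than 20% of expected cell counts are below 5, and a Welch t-test for a continuous rating. The Likert responses are randomly generated placeholders, not study data; only the group sizes and the concern-score summaries mirror the reported figures.

```python
"""Illustrative re-implementation of the survey's comparison tests (the study
itself used SPSS v23). The Likert responses here are randomly generated
placeholders; only the group sizes and score summaries mirror the report."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
og_raw = rng.integers(1, 6, 373)   # hypothetical 1-5 Likert responses, O&G affiliates
gp_raw = rng.integers(1, 6, 172)   # hypothetical 1-5 Likert responses, GP affiliates

def collapse(scores: np.ndarray) -> list[int]:
    """Collapse to [Agree, Disagree]: Agree = agree/strongly agree (4-5),
    Disagree = strongly disagree/disagree/neutral (1-3)."""
    agree = int(np.sum(scores >= 4))
    return [agree, scores.size - agree]

# Categorical comparison: chi-squared, falling back to Fisher's exact test
# when more than 20% of expected cell counts are below 5.
table = np.array([collapse(og_raw), collapse(gp_raw)])
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
if (expected < 5).mean() > 0.20:
    _, p = stats.fisher_exact(table)
print(f"Agree/Disagree differs between groups: p = {p:.4f}")

# Continuous comparison: self-rated concern (0-10 scale), Welch t-test.
og_scores = rng.normal(3.7, 2.3, 368)   # simulated ratings using the reported mean/SD
gp_scores = rng.normal(3.9, 2.4, 169)
t_stat, p_t = stats.ttest_ind(og_scores, gp_scores, equal_var=False)
print(f"Concern score difference: t = {t_stat:.2f}, p = {p_t:.4f}")
```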
In general, respondents had no particular interest in perinatal mental health disorders (36.7%), however more GPs (46.7%) were interested than O&Gs (32.1%). The vast majority of clinicians (96.9%) had not conducted any perinatal mental health research in the last 5 years. Also, fewer than half (46.4%) of all clinicians had attended a conference or read a journal article where AD or AX medication use in pregnancy had been reviewed. In general, only a small percentage of clinicians (15.3%) were involved in the provision of education to trainees about psychotropic prescription during pregnancy. Self-reported perception of concern around prescribing AD or AX medications was not significantly different between the groups ( p = 0.38), with O&Gs ( n = 368) apportioning a mean score of 3.7 (SD 2.3) and GPs ( n = 169) a mean score of 3.9 (SD 2.4). This indicated a relatively low level of concern on a 0–10 scale, with 0 being no concerns. The perceived proportion of patient non-compliance was also not significantly different ( p = 0.36) between the groups. Both of these estimated that just over a third of patients on a 0 to 100 scale would be non-compliant with their AD or AX treatment: O&Gs ( n = 367) mean 34.8% (SD 18.7) and GPs ( n = 170) 36.4% (SD 19.3). When asked to share their perceptions, GPs ( n = 172) estimated their patients’ anxiety regarding AD and AX medication decision making in pregnancy as higher on a 0 to 100 scale: mean 73.7% (SD 21.3) compared with mean 63.1% (SD 24.1) for O&Gs ( n = 372), a mean difference of 10.6% (95% CI 6.4–14.8). Only 10.5% of all clinicians ( n = 545) “very often” provided pregnant women with written information about the intended prescription AD or AX. (6% of O&Gs compared to 14.5% of GPs). Sources of written information were varied and the overall numbers were small. Most of the O&Gs sourced UpToDate (32.2%), followed by MIMS (26.8%) and Mother Risk (13.4%). For GPs, the most commonly used resource was MIMS (27.9%) followed by “other” (19.2%) and Drug Company leaflets (15.1%). Less than 10% of all clinicians had their own practice pamphlets or relied on the pharmacists as their main source of written information. Thirty-two percent of O&Gs provided no written information compared with 16.3% of GPs ( p < 0.001). If seeing a pregnant patient with mental health illness for the first time, the time spent discussing potential maternal and foetal side effects of AD or AX treatment differed between clinician group ( p < 0.001, n = 541). More than half of GPs (52.6%, n = 171) reported spending 15 min discussing potential maternal and foetal side effects of AD or AX treatment compared with O&Gs (48.6%, n = 370) spending less than 5 min. There was a statistically and clinically significant difference ( p < 0.001) in prescription practice where AD or AX initiation was surveyed: 84.8% of 171 GPs initiated these medications compared to 52.2% of 372 O&Gs. The GPs ranked “prior response to the medicine” as being an influential reason (60.5%) for prescribing a particular AD or AX. O&Gs ( n = 372) on the other hand, were more influenced by a medication “a mental health practitioner had previously prescribed” (50.5%). This preponderance for O&Gs to rank a specialist mental health clinicians’ opinion highly was also demonstrated later in the questionnaire, where 55.7% O&Gs ( n = 357) would rely on the original prescriber’s management plan comapred to 11.7% of GPs ( n = 162) ( p < 0.001). 
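The mean differences with 95% confidence intervals quoted in this section can be reconstructed from the reported group summaries. The sketch below computes a Welch-type interval from the rounded means, SDs and group sizes for the perceived patient anxiety scores; because the inputs are rounded, it will come close to, but not exactly reproduce, the published 10.6 (95% CI 6.4–14.8).

```python
"""Welch-type 95% CI for a difference in group means, using the rounded
summary statistics quoted in the text (GP vs O&G perceived patient anxiety).
Rounded inputs mean the interval only approximates the published one."""
import math
from scipy import stats

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, level=0.95):
    diff = m1 - m2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((sd1**2 / n1)**2 / (n1 - 1) + (sd2**2 / n2)**2 / (n2 - 1))
    t_crit = stats.t.ppf(0.5 + level / 2, df)
    return diff, diff - t_crit * se, diff + t_crit * se

diff, lo, hi = mean_diff_ci(73.7, 21.3, 172, 63.1, 24.1, 372)
print(f"mean difference = {diff:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```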
Responses to the question relating to discontinuation of fluoxetine in a hypothetical pregnant patient signified varying practices between clinician groups. Fifty-nine percent of GPs indicated that they would initiate a patient consultation compared to only 18.0% O&Gs. Furthermore, 48.8% of O&Gs suggested that they would seek referral to a mental health specialist compared to 5.3% of GPs. The questionnaire revealed that overall, clinicians’ main concerns regarding AD and AX medication prescription to women of reproductive age in order of perceived influence are: medical safety profile including teratogenicity (86.9%, n = 543), medical efficacy (75.2%, n = 537), neonatal adaption syndrome (70.0%, n = 543), and medication addiction potential (48.6%, n = 537). Of note, 57.4% of GPs ( n = 169) were concerned about maternal side effects compared to 47.3% of O&Gs ( n = 368) ( p = 0.029) (Fig. ). There were differences in levels of reported confidence in being up-to-date with medication recommendations and safety profile with 57.6% of GPs feeling confident compared to 44.2% of O&Gs ( p = 0.004). In general, GPs consider themselves to be more confident in their knowledge (mean difference 0.9 (95% CI 0.5–1.3) and ability to prescribe (mean difference 2.2 (95% CI 1.7–2.6) and manage (mean difference 2.1 (95% CI 1.7–2.6) AD and AX medications than O&Gs. Respondents were tested on their knowledge of five well-known AD and AX medications and their potential teratogenicity. As demonstrated in Table , GPs knowledge was generally similar to that of O&Gs, with the majority of respondents recognising that these medications had no significant proven teratogenicity. However, up to 22.3% respondents in both clinician groups incorrectly ascribed recognised teratogenicity to a commonly used AD or AX. Around 13% ( n = 118) trainees were incorrect for sertraline, venlafaxine and diazepam while 28.2% ( n = 117) were incorrect for amitriptyline and 21.2% ( n = 118) for mirtazapine. Twelve percent of O&Gs considered “Sertraline” teratogenic compared to 3.5% of GPs ( p = 0.001). GPs were more likely to agree that training and education had been adequate for them to feel confident in prescribing AD and AX to pregnant women (56.1%) compared to only a third of O&Gs (29.0%), p < 0.001. When asked what would be more useful to daily practice of caring for pregnant patients, 71.0% of all 541 respondents chose increased clinician education and training (71.1% O&Gs versus 70.8% GPs) in preference to increased technological supports such as apps for smart phones. Interestingly, 67.4% of a total of 543 clinicians agreed that completion of the study questionnaire had increased their interest in pursuing more information regarding AD and AX use in pregnancy. Pregnant women with mental health conditions can be managed by a multitude of treatment modalities including psychosocial support and non-pharmacological interventions. This manuscript focusses on one aspect of the treatment- the use of psychotropic medications. To the best of our knowledge, this is the largest Australian survey of clinicians’ attitudes and practices, with regards to AD and AX prescription in pregnancy. It explores the differences between the two groups of medical practitioners, most frequently engaged in counseling pregnant women. Appropriate management of anxiety and depression in pregnancy is an important area of clinical practice. 
If not properly addressed, it has the potential for deleterious irreversible consequences such as termination of pregnancy and maternal suicide . Over 50% of pregnancies are unintended and may be associated with an increased risk of postpartum depression .. Untreated anxiety and depression during pregnancy is associated with increased weight gain, substance abuse and smoking . Pregnant women with antenatal anxiety and depression are less likely to attend regular antenatal appointments and have higher complications such as stillbirth, premature birth, low birth weight and low Apgar scores . Engagement of pregnant women with perinatal mental health services remains an everlasting challenge with the added concern of patient initiated sudden cessation of medication . Hence, clinician’s confidence and competence in adequately treating anxiety and depression in pregnancy is very important. Considerable uncertainty in prescribing AD and AX in pregnancy exists, even amongst clinicians with expertise in antenatal health care provision . Women in general also express extreme reluctance to take medication in pregnancy . Both clinician groups in this study felt that training had not been adequate to instil confidence in medication prescription, even though many health professionals had trained for more than 10 years. Both groups advocated for improved training to address this need. This study suggests there may be differences in perception, confidence and practice between clinician groups. GPs perceived higher rates of patient anxiety regarding AD and AX use in pregnancy, and felt that they had a greater influence on a women’s use of AD or AX in pregnancy. Even though they saw pregnant women less frequently, they reported that their consultations apportioned more time to discussing medication risk. GPs less frequently expressed an intent to refer to a mental health specialist, most likely reflecting their role as primary prescribers. They also ranked the influence of their psychiatric colleagues lower than O&Gs and the impact of the internet. GPs reported higher rates of confidence in managing mental health conditions in pregnant patients at a community-level compared to their O&G counterparts, perhaps due to their familiarity with medication manipulation and more frequent provision of mental health advice for general patients . Both groups, in practice, recommended close doctor-patient relationships to nurture clear communication and support during the pregnancy, and no groups ill-advisedly recommended ceasing AD or AX upon pregnancy or for lactation. Both groups perceived women’s fears about foetal malformation when AD or AX use in pregnancy was raised. However, it is concerning that 9.4 to 22.3% of clinicians incorrectly labelled commonly used AD or AX medication as causing teratogenicity. This highlights the need for ready access to updated, evidence-based sources of medication advice for clinicians. Provision of written information has a strong evidence base supporting benefits for patient decision making, especially in a population group where anxiety or lack of concentration may cause impairment . Our study shows that this resource is infrequently used (~ 10%). There was no universal patient and clinician-friendly source from where the information was obtained. This likely reflects the difficulty of finding robust evidence regarding medication use in pregnancy, which is likely a consequence of ethical restraints on trialling medications in pregnant women . 
The onus remains on the clinicians to update themselves with latest available data. The participants who responded to the survey admitted only a modest interest in mental health disorders in pregnancy. They also admitted to not being actively involved in research, nor had their knowledge of treatments challenged often by new data at conferences or in journal articles. In addition, they were infrequently involved in passing on that knowledge to trainees. This lack of familiarity may have led to both clinical affiliates overestimating perceived teratogenicity of commonly used psychotropic medications. Due to the low response rates and inherent limitations such as including self-selected groups of respondents in such surveys, the findings of this study should be interpreted with caution. The number of responders is however not trivial and their perceptions around the prescription of AX an AD during pregnancy clearly suggests a need for further research in this very important area of medicine. The authors also acknowledge that grouping broad groups of antidepressants and anxiolytics is also a potential limitation of the study, however the very high level of comorbidity of anxiety and depressive symptoms and the anxiolytic properties of antidepressants made a general focus on these medication groups a practical and less potentially confusing approach, In pregnancies complicated by mental health conditions requiring AD or AX treatment, GPs are potentially more confident discussing and prescribing these medications compared to their O&G counterparts. Nevertheless, with nearly a quarter of clinicians overestimating the teratogenicity of a commonly used AD, training could be improved for both GPs and O&G affiliates. This would assist with optimal management of anxiety and depression in pregnancy for the benefit of the mother and unborn child. Additional file 1. |
Surgical parameters influence paediatric knee kinematics and cartilage stresses in anterior cruciate ligament reconstruction: Navigating subject‐specific variability using neuromusculoskeletal‐finite element modelling analysis | 4c016892-7663-4220-ba0c-e5d8715e9c7a | 11848988 | Surgical Procedures, Operative[mh] | Anterior cruciate ligament (ACL) rupture is an increasingly common injury amongst physically active children and adolescents that often occurs during sports participation . The ruptured ACL is typically reconstructed (ACLR) by surgically implanting a graft (e.g., from gracilis, semitendinosus, or patellar tendon) to replace the failed native ligament with the aim of restoring passive stability to the knee . Despite ACLR, knee laxity often deviates from normal, and ambulatory knee motions, loads, and muscle activation patterns are chronically disrupted which affects knee cartilage mechanics . These altered knee biomechanics may contribute to the development of early onset knee osteoarthritis (OA), which is highly prevalent in ACLR knees one to two decades following injury . The exact mechanisms remain incompletely understood, but impaired knee kinematics and cartilage mechanics (e.g., stress) post‐ACLR are thought influential in subsequent early onset of knee OA . To maximize function and minimize the risk of OA in ACLR knees, optimizing surgical parameters becomes crucial, with primary objective being restoration of normal kinematics and cartilage mechanics to the knee . During ACLR, the surgeon must select graft type, size, pre‐tension and femoral tunnel location amongst other parameters. Each of these surgical parameters in isolation can mechanistically influence the biomechanics of the ACLR knee . In paediatric ACLR literature, no study has systematically evaluated effects of these surgical parameters on knee biomechanics. Further, these surgical parameters likely interact to create complex and non‐intuitive effects on ACLR knee kinematics, kinetics, and cartilage mechanics . Biomechanical outcomes of ACLR are likely also influenced by subject‐specific anatomy (e.g., knee size) and neuromusculoskeletal (NMSK) factors (e.g., motion and loading). Knee anatomy, particularly size, might play a pivotal role, with smaller or larger knee joints exhibiting distinct responses to ACLR . Likewise, subject‐specific NMSK control of knee motion and loading adds further complexity to understanding post‐ACLR biomechanics. Recognizing this complexity, surgical parameters may need to be specific to each patient to ensure the post‐ACLR knee behaves similarly to the intact knee . However, to do so, a physics‐based platform is required to study such a complex mechanical system. Finite element (FE) models enable study of the effects of ACLR surgical parameters on knee kinematics, kinetics, and tissue‐level mechanics , and provides a computational platform to assess surgical optimality. The primary aim of this study was to use a linked NMSK‐FE model to determine the effects of four key surgical parameters (i.e., graft type, size, location and pre‐tension) on knee motions and tibial articular cartilage stresses during walking gait. The secondary aim was to assess if and how subject‐specific variation in knee joint geometry and motion/loading conditions affected tibial articular cartilage stresses. The tertiary aim was to find optimal surgical combinations and examine whether trends emerge in the surgical parameters that produced optimal ACLR knee biomechanics. 
It was hypothesized that ACLR surgical parameters (i.e., graft type, size, location and pre‐tension) would affect knee motions and tibial cartilage stresses during walking. It was further hypothesized that the response of tibial cartilage stresses to surgical parameters would be affected by subject‐specific knee anatomy (e.g., size) and NMSK control (e.g., motion, loading). Overview of the workflowParticipants, medical imaging and biomechanical data acquisitionNMSK‐FE modelling pipelineACLR FE modelsStatistical analysesthical authorization was granted by the Human Research Ethics Committee of Children's Health Queensland Hospital and Health Services (HREC/13/QRCH/197). An established NMSK‐FE modelling pipeline was used to create subject‐specific motion, loading and boundary conditions (i.e., FE model inputs) during the stance phase of walking gait for three subjects. Previously validated intact FE models of three paediatric subjects served as the basis for developing subject‐specific FE models of the corresponding ACLR knees. The entire surgical parameter set (explained below) was applied to each ACLR FE model and tibial cartilage mechanics were simulated throughout the stance phase of walking. To determine effects of surgical parameters on cartilage stresses, normalized root mean square error (nRMSE) was calculated by comparing stresses in ACLR and corresponding intact model knees. Additionally, surgical parameters yielding nRMSE <10% for both knee kinematics and maximum principal stresses of medial and lateral tibial cartilages were reported for each subject as they were considered optimal (i.e., closely matching intact knee biomechanics) (Figure ). Extant data from three different typically developing children and adolescents with varying knee dimensions based on intercondylar width (i.e., small, medium and large) were selected (Table ). Written informed consent was obtained from the legal guardians of each participant prior to commencement of any of theEach subject was provided with explicit instructions to execute a sequence of walking trials at their self‐selected pace within the Queensland Children's Motion Analysis Service (QCMAS) at the Centre for Children's Health Research (Brisbane, Australia). Marker trajectories (100 Hz, MX System; 0.017 mm accuracy relative to reference value), ground reaction forces (1000 Hz, 510 mm × 465 mm, four force plates, AMTI; with accuracy ±0.1% of applied load) and surface electromyography (EMG) signals (1000 Hz, Noraxon) were concurrently and synchronously recorded throughout the trials. The EMG signals were captured from gluteus maximus, semitendinosus, biceps femoris long head, rectus femoris, vastus medialis and lateralis, gastrocnemius, gracilis, tensor fasciae latae and sartorius. Subsequently, each participant underwent magnetic resonance imaging (MRI) of their unloaded right knee utilizing a MAGNETOM Skyra 3T scanner (Siemens). In the NMSK modelling pipeline, the initial step involved modelling the external biomechanics in OpenSim , followed by muscle and knee contact forces using CEINMS (see Supporting Information, NMSK modelling pipeline). In this study, subject‐specific ACLR FE knee models were developed based on previously validated intact FE model using an atlas‐based approach (Supporting Information S1: Tables – ). The ACLR FE models incorporated cylindrical graft structures with cross‐sectional diameters of 6, 8 and 9 mm, replacing the native ACL from intact FE models. 
Graft placement had five different locations achieved by placing different femoral tunnel apertures (guided by consultations with an experienced orthopaedic surgeon [co‐author I. A.]). Optimal graft positioning aimed to replicate native ACL footprint, and we simulated deviations up to 5 mm in medial, lateral, anterior and posterior directions (Supporting Information S1: Figure ). The tibial tunnel was positioned to achieve anatomically appropriate placement, taking into account the native insertion site of ACL on the tibia. Specifically, our aim was to target the centre of the ACL footprint within the intercondylar area. The outside entry into the tibial tunnel was situated approximately 4 cm inferiorly from the tibial joint line and 2 cm medial to the tibial tubercle. Different graft types, representing semitendinosus, gracilis and patellar tendon, were simulated using corresponding transversely isotropic elastic materials (Supporting Information S1: Table ). Graft pre‐tension, in increments of 0, 40 and 100 N, was applied to tibial end of graft in FE models. This combination of surgical parameters and their increments resulted in 135 distinct ACLR FE models per subject (i.e., Total number of models = 3 (increments for the graft type) × 3 (increments for the graft size) × 3 (increments for the graft pre‐tension) × 5 (increments for location) = 135). Simulations were conducted using Abaqus/Standard soils consolidation solver (Daussault Systemes) for all 135 ACLR FE models per individual, and maximum principal stresses from medial and lateral tibial cartilages were reported for each FE model instance. The nRMSE was used to assess disparity between ACLR maximum principal stress on medial and lateral tibial cartilages and corresponding values on intact reference models. The nRMSE values were computed across 135 FE simulations per participant for both medial and lateral tibial cartilages and were presented as means with their 95% confidence intervals. To identify surgical combinations with minimal deviation from intact knee, histograms were generated for maximum principal stresses on both medial and lateral tibial cartilages and knee kinematics (i.e., anteroposterior and mediolateral translations, as well as abduction/adduction and internal/external rotations) based on nRMSE. A nRMSE < 10% was considered satisfactory, indicating minimal deviation from intact knee biomechanics . A multiple linear regression analysis was conducted to determine significant independent variables, including surgical parameters (i.e., graft type, size, location and pre‐tension), as well as knee size, for predicting the dependent variables (i.e., nRMSE of medial and lateral tibial cartilages). Stepwise selection criteria were applied (entry p < 0.05, removal p > 0.1). F statistics and p‐values were used to evaluate predictive capability, while adjusted R 2 was employed to gauge effect size. Unstandardized coefficients with confidence intervals were reported for each independent variable to elucidate their direct impact on the dependent variables. Statistical analyses were performed using SPSS software (version 27, IBM), with significance set at p < 0.05. Effects of surgical parameters and knee size on maximal stresses in tibial cartilagesOptimal surgical configurations: Cartilage stress and knee kinematics For the small knee, ~21% of surgical combinations yielded substantial deviation (nRMSE > 10%) in tibial cartilage stresses compared to the intact knee, particularly for medial tibial cartilage (Figure ). 
Similarly, ~38% of surgical combinations resulted in lateral tibial cartilage stresses deviating substantially from the intact knee. Approximately 60% of surgical combinations resulted in minimal stress deviations relative to the intact knee on both medial and lateral tibial cartilages concurrently. Similar trends were seen for knee kinematics, where ~59% of surgical combinations resulted in minimal rotational and translational deviations compared to the intact knee (Figure ). Notably, minimal cartilage stress deviations were found when graft pre‐tensions of 40 and 100 N were applied to graft diameters of 6, 8, and 9 mm. Of the set with minimal deviation, the most common configuration involved a 6 mm graft diameter positioned medially. For the medium knee, ~59% and ~67relative to the intact knee (Figure ). Only ~10% of surgical combinations concurrently resulted in stresses in both medial and lateral tibial cartilages similar to the intact knee. Moreover, none of the surgical set restored mediolateral translations, with only ~3% of combinations resulting in minimal (<10% nRMSE) deviation from intact knee for other DoF (Figure ). When considering both stresses and kinematics together, two surgical configurations achieved nRMSE<10% from intact knee. These configurations were graft diameter of 6 or 9 mm, semitendinosus graft type, 0 N pretension, and anterior graft positioning. For a large knee, ~20% and ~36% of surgical combinations resulted in substantial stress deviations for medial and lateral tibial cartilages, respectively, compared to intact knee (Figure ). Approximately 52% of surgical combinations had nRMSE values <10% for both medial and lateral tibial cartilages concurrently compared to intact knee. Likewise, ~60% of surgical combinations successfully restored knee kinematics such that they mimicked intact knee kinematics (Figure ). Considering both stresses and kinematics concurrently, ~52% of surgical combinations resulted in minimal deviations from intact knee and were mainly characterized by graft diameters of 6 and 8 mm, pre‐tension values of 40 and 100 N and anterior graft positioning. Surgical parameters had substantial effects on maximum principal stresses in tibial cartilages (Figure ). This was indicated by the many surgical configurations that resulted in maximal principal stresses in both medial and lateral tibial cartilages >10% nRMSE relative to the intact knee (Table ). Deviation (i.e., nRMSE) in maximal stresses was greatest for both medial and lateral tibial cartilages in medium‐sized knees (i.e., 11.6 ± 5.2% and 12.1 ± 3.6%, respectively), followed by small and then large knees. Across knee sizes, deviation in maximal principal stresses was greater in lateral compared to medial tibial cartilages. Maximum principal stresses exhibited distinct patterns in magnitude and distribution across small, medium, and large knees (Figure ). In small knee, majority (~60%) of surgical combinations led to smaller maximum principal stresses in both medial and lateral tibial cartilages compared to the intact knee (Figure ). In medium knees, the surgical parameter set produced divergent effects, with instances of both increased and decreased stresses on medial tibial cartilage compared to the intact knee. Notably, lateral tibial cartilage stresses in the medium‐sized ACLR knee were predominantly lower in magnitude compared to the corresponding intact knee (Figure ). 
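The proportions reported in this section are tallies over the full 3 × 3 × 3 × 5 surgical grid. The sketch below illustrates only that bookkeeping: it enumerates the 135 combinations, scores each against an intact reference curve with nRMSE, and counts how many fall under the 10% acceptability threshold. The stress curves are synthetic stand-ins for the Abaqus outputs, and normalising by the range of the intact curve is an assumption made for illustration, since the normalisation used in the study is not stated in this excerpt.

```python
"""Bookkeeping sketch for the 3 x 3 x 3 x 5 surgical parameter grid and the
nRMSE acceptability screen. Stress waveforms are synthetic stand-ins for the
Abaqus outputs, and normalising nRMSE by the range of the intact curve is an
assumption made for illustration."""
import itertools
import numpy as np

GRAFT_TYPES = ["semitendinosus", "gracilis", "patellar"]
DIAMETERS_MM = [6, 8, 9]
PRETENSIONS_N = [0, 40, 100]
LOCATIONS = ["anatomic", "anterior", "posterior", "medial", "lateral"]

def nrmse(test: np.ndarray, reference: np.ndarray) -> float:
    """RMSE between two stance-phase curves, normalised by the reference range."""
    rmse = np.sqrt(np.mean((test - reference) ** 2))
    return 100.0 * rmse / (reference.max() - reference.min())

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 101)                  # % of stance phase
intact = 3.0 + 2.0 * np.sin(np.pi * t)          # synthetic intact stress curve (MPa)

grid = list(itertools.product(GRAFT_TYPES, DIAMETERS_MM, PRETENSIONS_N, LOCATIONS))
assert len(grid) == 135                         # 3 * 3 * 3 * 5 combinations

acceptable = 0
for graft, diameter, pretension, location in grid:
    # Stand-in ACLR response: the intact curve plus a random perturbation.
    aclr = intact * (1.0 + rng.normal(0.0, 0.08)) + rng.normal(0.0, 0.1, t.size)
    if nrmse(aclr, intact) < 10.0:              # acceptability threshold from the study
        acceptable += 1

print(f"{acceptable}/{len(grid)} combinations within 10% nRMSE of the intact knee")
```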
In contrast to effects of ACLR surgery in the medium‐sized knee, ACLR in the large‐sized knee resulted in lower maximum principal stress on medial tibial cartilage compared to the intact knee, however, stresses fluctuated across the gait cycle with periods of increased and decreased magnitudes in lateral tibial cartilage compared to the intact knee (Figure ). As no similar computational models have been performed on the paediatric ACLR knee, a post hoc analysis was performed using G*Power software (version 3.1) . The responses of the 135 simulations were used, averaged across the different knees studied, and it was found the study was powered to 95% given an α of 0.05. A significant regression model was observed for the nRMSE of the lateral tibial cartilage ( F = 43.776, p < 0.001, adjusted R 2 = 0.096) with graft pre‐tension ( p < 0.001) emerging as a significant predictor. Each 1 N increase in pre‐tension was associated with a 2.081% decrease in nRMSE ( p < 0.001). Additionally, deviation from optimal graft positioning led to a 0.52% increase in nRMSE ( p < 0.004), while the increase in knee size resulted in a 0.14% increase in nRMSE ( p < 0.024). Conversely, for the nRMSE of the medial tibial cartilage ( F = 21.609, p < 0.001, adjusted R 2 = 0.049), graft pre‐tension was the sole significant predictor, with a 1.31% decrease in nRMSE observed for every 1 N increase in pre‐tension ( p < 0.001) (Table ). The most significant finding of this study was the dependency of maximum principal stresses in tibial cartilages on both surgical parameters and subject‐specific features, such as knee anatomy, motion, and loading. Not all combinations of surgical parameters yielded acceptable knee biomechanics, highlighting the highly subject‐specific nature of optimality without overarching trends. This underscores the complexity of biomechanical responses to ACLR, which stem from intricate interactions between phenotype and NMSK factors unique to each individual. It is imperative to model this complexity to tailor surgical interventions effectively for each patient. Consistent with our hypothesis, subject‐specific parameters such as knee phenotypes, motions, and loads influenced effects of surgical parameters on tibial cartilage stresses. The small knee had least biomechanical deviation (i.e., stresses and kinematics) across the tested surgical parameter set followed by large and then medium‐sized knees, indicating a non‐linear relationship between knee size and sensitivity to ACLR. Variation in biomechanical responses to ACLR in knees of different phenotype results from complex mechanical interaction between anatomy and NMSK biomechanics. Even for highly controlled tasks, there is large inter‐subject variation in muscle activation patterns, force sharing, and neuromuscular control, all of which interact with anatomical differences such as condylar width, tibial plateau slope, TFJ varus/valgus, and subject‐specific variables including mass and stature, to influence knee motion and cartilage mechanics post‐ACLR. Indeed, non‐linear complex behaviour at the knee has been reported in previous studies , however, our study adds novelty by specifically examining the interplay of surgical variations, knee anatomy, and motion/loading conditions in the understudied paediatric knee. The regression analysis identified significant predictors of nRMSE for both the lateral and medial tibial cartilage. 
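The multiple linear regression reported above was fitted with stepwise selection in SPSS. As a rough, non-stepwise analogue, the statsmodels sketch below regresses nRMSE on pre-tension, positioning deviation and a knee-size proxy using synthetic data; the fitted coefficients are illustrative only and are not the published estimates.

```python
"""Non-stepwise analogue of the study's multiple linear regression
(nRMSE ~ pre-tension + positioning deviation + knee size), fitted with
statsmodels on synthetic data. Coefficients are illustrative only."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 135 * 3                                   # 135 combinations x 3 knees

df = pd.DataFrame({
    "pretension_n": rng.choice([0, 40, 100], n),      # graft pre-tension (N)
    "position_dev_mm": rng.choice([0, 5], n),         # deviation from anatomic footprint
    "knee_size_mm": rng.choice([70, 80, 90], n),      # intercondylar width proxy
})
# Synthetic response: nRMSE falls with pre-tension and rises with malposition.
df["nrmse"] = (12.0 - 0.02 * df["pretension_n"] + 0.5 * df["position_dev_mm"]
               + 0.05 * df["knee_size_mm"] + rng.normal(0.0, 2.0, n))

model = smf.ols("nrmse ~ pretension_n + position_dev_mm + knee_size_mm", data=df).fit()
print(model.summary().tables[1])              # unstandardised coefficients and 95% CIs
```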
Graft pre‐tension emerged as a crucial factor affecting nRMSE in both regions, with higher pre‐tension associated with reduced nRMSE, indicating potentially favourable biomechanical outcomes post‐ACLR. Deviation from optimal graft positioning increased nRMSE in the lateral tibial cartilage, highlighting the importance of precise graft placement during surgery. Knee phenotype also influenced nRMSE, with larger knees exhibiting slightly higher values of nRMSE, underscoring the complexity of the relationship between surgical parameters and knee biomechanics. Although graft type and size were excluded from the regression analysis, sensitivity analysis suggests they directly and/or indirectly affect kinematics and may subsequently alter tibial cartilage stresses. This discrepancy may be attributed to factors such as multi‐collinearity and overfitting, emphasizing the limitations of regression analysis in capturing the full complexity of these relationships. Results indicated ACLR can result in both higher and lower magnitude cartilage stresses compared to corresponding intact knees (Table , Figure ) depending on the specific surgical parameters used. The surgical combination associated with the highest nRMSE was accompanied by significantly larger maximum principal stress in both the medial and lateral tibial cartilage, as compared to the combination resulting in the lowest nRMSE (across diverse knee types). Abnormal cartilage stresses (e.g., under‐ or over‐loading), when chronic (e.g., during walking which is a most common motor task), are known precursors to degeneration and eventual OA onset . Previous studies have consistently highlighted elevated risk of early onset knee OA following ACLR and presence of abnormal contact loads in ACLR knees . The propensity for ACLR to disrupt normative cartilage stresses underscores the importance of patient‐specific surgical planning and graft selection to restore normal biomechanics (motion, loading, and tissue stress) to the post‐operative knee. With respect to the onset of knee OA, some studies indicate initial cartilage damage to the medial compartment post‐ACLR, a region more frequently affected by degeneration than its lateral counterpart . Conversely, other studies propose more instances of post‐ACLR knee OA in the lateral compartment . Our study identified greater deviation in stress patterns on lateral tibial cartilage across knee sizes (Table ). These contradictory findings from literature underscore the complexity of the relationship between ACLR and knee OA onset, suggesting the potential subject specificity in the predilection for either the medial or lateral compartment to develop OA . This study introduced a novel dimension of personalization to the field of ACLR, effectively bridging a gap in existing research methodologies and applied this novel approach to an understudied clinical population—paediatric ACLR. Diverging from conventional FE studies that have often applied fictive, generalized, and/or simplistic NMSK loading to FE models , this study employed an established NMSK pipeline that integrated subject‐ and task‐specific motion, loading, and muscle activation patterns, and applied this regime to create personalized FE models of the ACLR knee. This integration ensured heightened levels of personalization when modelling knee joint dynamics. Furthermore, we conducted comprehensive analysis by simulating knee mechanics across the stance phase of walking gait. 
This temporal scope was crucial to capturing fluctuations in knee motion and loading across the distinct phases of walking—an aspect often neglected by prior investigations. Collectively, personalization, application to paediatrics, and temporal scope of current research constitute substantial advances in study and contribution to the knowledge of ACLR knee biomechanics. This study is subject to several limitations that warrant consideration. First, four surgical parameters were analysed with predefined parameter spaces, but certain variables such as graft fixation angle, tibial tunnel location and graft length were excluded. These omitted variables could potentially influence knee kinematics and cartilage stresses. However, graft fixation angle was not included due to technical limitations in the FE implementation, which modelled fixation as ideal and non‐mobile. Tibial tunnel location, while minimally variant in surgical practice, was excluded because guiding articular geometry is typically unambiguous, and there are no significant limitations on tool posture or access during the extra‐articular approach. Second, the surgical parameter space was confined to three values per parameter to manage computational demands. Although this approach allowed for feasible analysis, it may have limited the resolution of findings. Expanding the parameter space or including additional surgical parameters could enhance the granularity of our analysis but would significantly increase computational demands to levels that are currently unfeasible. Third, bone deformation was not considered in the FE model due to its negligible effect on walking compared to soft tissues. Incorporating bone deformation would dramatically increase computational demands, particularly considering the extensive nature of our study, which involved 405 FE simulations across the stance phase of walking. Fourth, mechanical properties of knee tissues were not tailored to individual subjects but were instead sourced from literature. Although this approach provided a foundation for the sensitivity analyses, it may introduce variability in the results. Additionally, graft pre‐tensions were set to specific values of 0, 40 and 100 N, reflecting a clinically relevant range but potentially differing from the exact pre‐tension values achieved during surgical procedures. Last, despite efforts to mitigate uncertainties through rigorous validation procedures and statistical methods, the complexity of knee biomechanics and inherent limitations of FE modelling may still introduce some level of uncertainty in the results. Furthermore, the descriptive nature of the study limits the ability to establish causality, and future research endeavours should explore longitudinal or interventional designs to elucidate causal pathways further. From a clinical perspective, it has been shown patient variability presents a challenge to orthopaedic surgeons. Implementing a one‐size‐fits‐all approach, although efficient, will likely lead to elevated risk of re‐rupture and/or sub‐optimal loading for some patients. To mitigate this risk, results suggest personalized surgical parameter selection be informed by patient‐specific computational modelling. In the context of planning paediatric ACLR, orthopaedic surgeons may benefit from consulting orthopaedic engineers who can provide simulation services. These simulations can aid in predicting optimal surgical parameters, which can subsequently inform the surgical procedure. 
Future investigations are needed to determine whether such a personalized approach can reduce the incidence of poor outcomes in the paediatric ACLR population. In conclusion, graft type, size, location, and pre‐tension exerted complex effects on knee motion and tibial cartilage stresses in paediatric ACLR knees. Findings highlight the interplay between subject‐specific knee phenotype and NMSK factors, emphasizing the importance of tailored ACLR planning. Ayda Karimi Dastgerdi created the NMSK and FE models, extracted and analysed results, prepared figures and tables. Amir Esrafilian helped with the FE modelling and interpreted the results. Christopher P. Carty conceived the study, helped with the data gathering and the NMSK model development and interpreted the results. Azadeh Nasseri conceived the study, helped with the NMSK model development and interpreted the results. Martina Barzan helped with data gathering and interpreted the results. Ivan Astori, Wayne Hall and Rami K. Korhonen conceived the study and interpreted the results. David John Saxby conceived the study, helped with the NMSK model development and interpreted the results. All authors revised the manuscript for important intellectual content. The authors declare no conflict of interestWritten informed consent was obtained from the legal guardians of each participant before the commencement of any assessments. All experimental protocols adhered strictly to relevant guidelines and regulations, in accordance with the principles established by the Declaration of Helsinki. Supporting information. |
Investigating the causal relationship between the plasma lipidome and cholangiocarcinoma mediated by immune cells: a mediation Mendelian randomization study | b8b84a7c-ceb7-4134-be2a-df4bf94a3fa0 | 11832772 | Biochemistry[mh] | Cholangiocarcinoma (CCA) is a form of malignant neoplasm that arises from the epithelial cells within the biliary tract, categorized as intrahepatic, perihilar, and extrahepatic CCA based on anatomical location . The global incidence of CCA has been increasing in recent years, especially in certain regions of Asia such as Thailand, China, and South Korea . The incidence of CCA is highest among Hispanic and Asian populations and lowest among non-Hispanic white and black populations . Common risk factors for CCA include biliary stones, inflammatory bowel disease, liver cirrhosis, viral hepatitis (especially types B and C), chemical exposures such as aflatoxin, parasitic infections (e.g., Clonorchis sinensis), genetic factors, and polymorphisms in certain genes . The primary treatment for CCA is surgical resection, and for advanced cases that are not amenable to surgery, biliary drainage may be required to alleviate symptoms and prolong survival . Chemotherapy and radiotherapy exhibit restricted effectiveness in treating CCA. However, combination chemotherapy protocols, like the regimen of gemcitabine coupled with cisplatin, may occasionally be employed for patients who are ineligible for surgical intervention . Liver transplantation can be a primary treatment for perihilar CCA but is not suitable for intrahepatic and extrahepatic CCA. Therefore, exploring new therapeutic approaches for CCA is imperative. The plasma lipidome is an emerging field focused on the systematic study of lipids in biological systems, encompassing a spectrum of fatty acids and lipid subclasses . In current research on CCA, lipidome is primarily used to analyze changes in plasma lipids to identify biomarkers potentially associated with the onset of CCA. Studies have found that the levels of plasma glycochenodeoxycholic acid (GCA) in patients with hepatocellular carcinoma (HCC) are significantly higher compared to those with intrahepatic CCA, indicating that some bile acid components in the plasma exhibit distinct differences in diseases such as HCC and CCA, possessing the capability for clinical differential diagnosis . Metabolomics, the science of analyzing metabolic products in the body to study disease states, has found in CCA research that abnormal changes occur in plasma metabolites such as amino acids, sugars, and lipids, which may be related to the development of CCA . Additionally, research has found that the plasma lipid metabolism in CCA patients may undergo reprogramming to support the rapid growth and proliferation of cancer cells . During the development of CCA, cancer cells can promote their own growth and metastasis by altering lipid metabolism in the tumor microenvironment (TME) . Furthermore, changes in plasma lipids may affect the TME, including the behavior of immune cells, thereby influencing tumor immune evasion and progression. Also, some studies have found that specific lipids in the plasma may be associated with the recurrence of CCA, with certain lipid levels in the plasma of recurrent patients correspondingly increasing . The TME is populated with various immune cells, such as tumor-infiltrating lymphocytes, dendritic cells, and macrophages, which interact with tumor cells and influence tumor growth, invasion, and metastasis . 
Studies have found that in CCA, the expression level of PD-L1 is closely related to the tumor immune microenvironment and patient prognosis . Additionally, CCA can resist immune cell attacks by regulating the expression of immune inhibitory molecules or inducing the differentiation of immune cells into subpopulations with inhibitory functions . MR studies are analytical approaches that leverage the random assortment of Mendelian genetics to explore the causal links between exposure and outcome variables , . In MR studies, single nucleotide polymorphisms (SNPs) serve as instrumental variables (IV) to assess the causal link between exposure and outcome variables. To our best knowledge, this research represents the pioneering effort to employ MR to investigate the causal links between the lipidome and CCA. This study leverages extensive genome-wide association studies (GWAS) to perform a mediation MR analysis, integrating lipidome and CCA datasets. By incorporating immune cells as mediators, we aim to confirm the causal relationship between the lipidome and CCA.
The methods comprised the study design, data sources, SNP selection and statistical analysis. The statistical analysis for this study was executed utilizing R software (version 4.2.1). For the two-sample MR analysis, we employed the "TwoSampleMR", "VariantAnnotation", and "ieugwasr" packages. Among the five methodologies evaluated for assessing causality in MR analysis (IVW, MR Egger, Weighted Median, Simple Mode and Weighted Mode), the IVW method was chosen as the primary analytical approach for its superior precision and reliability. The significance of the connection between exposure and outcome was determined using a P value threshold of 0.05. All P values were adjusted using the FDR correction, with a threshold of significance set at less than 0.2. Heterogeneity among the studies was evaluated using Cochran's Q statistic, complemented by the IVW and MR Egger regression techniques. To detect pleiotropy, we utilized the MR-Egger intercept test and the MR pleiotropy residual sum and outlier method (MR-PRESSO), facilitated by the "MR-PRESSO" package. A leave-one-out analysis was conducted to assess the influence of any outlier genetic variants on the aggregate findings. Finally, the potential mediating effect of lipidome on the risk of CCA was investigated through the 'product of coefficients' method, which aims to measure the indirect impact of these lipid components on disease risk.
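The heavy lifting here was done by the TwoSampleMR R package, but the two quantities at the core of this paragraph are simple to state: the IVW estimate is an inverse-variance-weighted average of per-SNP Wald ratios, and the FDR step is Benjamini-Hochberg adjustment compared against 0.2. The numpy sketch below re-derives both on placeholder SNP effects; it is illustrative and is not the package implementation.

```python
"""Core of the IVW estimator and the Benjamini-Hochberg FDR step used above.
The study ran these analyses with the TwoSampleMR R package; this numpy
re-derivation is illustrative only, and the SNP effects are placeholders."""
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect IVW: inverse-variance-weighted mean of per-SNP Wald ratios
    (first-order weights, i.e. uncertainty in the exposure betas is ignored)."""
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    ratios = beta_out / beta_exp              # per-SNP causal estimates
    weights = (beta_exp / se_out) ** 2        # 1 / Var(ratio) to first order
    beta_ivw = np.sum(weights * ratios) / np.sum(weights)
    se_ivw = 1.0 / np.sqrt(np.sum(weights))
    return beta_ivw, se_ivw

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted P values (compared against 0.2 in the study)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    adjusted = np.empty_like(p)
    adjusted[order] = np.minimum(scaled, 1.0)
    return adjusted

# Placeholder per-SNP summary statistics for one exposure-outcome pair.
beta, se = ivw_estimate(beta_exp=[0.12, 0.09, 0.15],
                        beta_out=[0.05, 0.03, 0.07],
                        se_out=[0.02, 0.03, 0.03])
print(f"IVW beta = {beta:.3f} (SE {se:.3f}), OR = {np.exp(beta):.3f}")
print("FDR-adjusted P values:", bh_fdr([0.01, 0.04, 0.20, 0.55]).round(3))
```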
Figure illustrates the research methodology, designed to investigate the connection between plasma lipidome, genetic predisposition to CCA, and the role of immune cells in this relationship. In the initial phase, a two-sample MR approach was utilized to assess the impact of lipidome and immune cells on CCA risk, identifying those most closely linked to CCA susceptibility. The subsequent phase evaluated the causal influence of the selected lipidome on specific immune cell types and determined the mediation proportion for each immune cell mediator. Furthermore, it is essential to ensure that there is no overlap in the subjects under investigation, meaning that the exposure and outcome represented by SNPs should originate from separate research sources.
The summary statistics pertaining to lipidome were sourced from the GWAS Catalog website, which provides data on 179 lipid species, ranging from GCST90277238 to GCST90277416. The dataset for CCA was sourced from the finngen_R10_C3_BILIARY_GALLBLADDER_EXALLC, encompassing 1207 CCA cases and a comparison group of 314,193 individuals. Additionally, data on 731 immune cell types, labeled GCST90001391 through GCST90002121, were obtained from the GWAS Catalog website . The immunophenotypes under investigation were classified into seven principal groups: B cells, cDCs, mature stages of T cells, monocytes, myeloid cells, TBNK (T cells, B cells, natural killer cells), and Treg panels. It should be noted that the study exclusively included participants with European ancestry.
The P value threshold for selecting SNPs as IVs for lipidome and immune cells was set below 1 × 10⁻⁵, based on prior research. SNPs identified as IVs for CCA were subject to a more stringent P value threshold of under 5 × 10⁻⁶. Genetic variants were clumped so that retained SNPs had pairwise R² below 0.001 within a clumping window of 10,000 kb. Each SNP underwent an F statistic calculation to identify potential biases from weak IVs. SNPs with an F statistic lower than the threshold of 10 were flagged for possible weak IV bias and subsequently removed from the analysis to ensure the reliability of the research outcomes.
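A minimal sketch of this screening step is given below. The per-SNP F statistic is approximated as (beta/se)², which is a common approximation but an assumption here because the authors' exact formula is not stated in this excerpt; LD clumping itself needs an external reference panel, so it is only indicated in a comment. The SNP identifiers and summary statistics are placeholders.

```python
"""Instrument screening sketch: apply the stated P-value threshold and drop
weak instruments (F < 10). The per-SNP F statistic is approximated as
(beta/se)^2, an assumption for illustration; LD clumping would additionally
be applied against a reference panel."""
import pandas as pd

snps = pd.DataFrame({
    "rsid": ["rs0001", "rs0002", "rs0003", "rs0004"],   # placeholder identifiers
    "beta": [0.10, 0.04, 0.15, 0.02],
    "se":   [0.02, 0.02, 0.03, 0.015],
    "pval": [5e-7, 4e-2, 6e-7, 2e-6],
})

P_THRESHOLD = 1e-5          # exposure instruments (5e-6 was used for the CCA outcome)
F_THRESHOLD = 10            # weak-instrument cut-off

snps["F"] = (snps["beta"] / snps["se"]) ** 2
kept = snps[(snps["pval"] < P_THRESHOLD) & (snps["F"] >= F_THRESHOLD)]
# LD clumping (R^2 < 0.001 within 10,000 kb) would additionally be applied,
# e.g. via TwoSampleMR::clump_data(), which relies on an LD reference panel.
print(kept[["rsid", "pval", "F"]])
```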
MR analysis between lipidome and CCA
We employed MR to conduct an in-depth analysis of the association between the plasma lipidome and CCA. We identified 14 lipid species that exhibit a causal relationship with CCA (Fig. , Table ). Sterol ester (27:1/14:0) levels were negatively correlated with CCA (OR 0.6672, 95% CI 0.4643–0.9588, P < 0.05). Phosphatidylcholine (0–17:0, 17:1) levels were positively correlated with CCA (OR 1.3135, 95% CI 1.1050–1.5613, P < 0.05). Sphingomyelin (d34:1) levels and Triacylglycerol (54:7) levels showed negative correlations, with OR values of 0.8129 and 0.6858, respectively. In contrast, Phosphatidylethanolamine (18:1, 0:0) levels and Phosphatidylinositol (18:1, 18:1) levels both demonstrated positive correlations with CCA. Other significant lipid species, such as Sterol ester (27:1/16:0), Sterol ester (27:1/18:1), Phosphatidylcholine (17:0, 20:4), and Phosphatidylcholine (18:1, 20:3) levels, further delineate the intricate relationship between the plasma lipidome and CCA. After pleiotropy and sensitivity analyses, we identified 14 lipid species significantly associated with the risk of CCA (Tables – ). Additionally, MR-PRESSO analysis revealed no outliers or pleiotropy (Table ). Finally, we conducted a reverse MR analysis and found no evidence of reverse causality between CCA and 9 lipid species (Table ).
Mediator selection
During the screening for potential mediators, we identified and selected 731 distinct immune cell phenotypes to assess their influence on CCA. In subsequent analysis, we explored the causal relationships between these immune cells and CCA and identified 24 immune cells with a causal relationship to CCA based on pleiotropy and sensitivity results (Fig. , Tables – ). CD19 on CD20− showed a positive correlation with CCA (OR 1.2186, 95% CI 1.0123–1.4671, P = 0.0367). A total of 12 immune cells with similar relationships to CCA were identified, such as CD24 on IgD− CD38−, CD25 on IgD+ CD38dim, CD8br AC and HSC AC. In contrast, Naive CD4+ %CD4+ exhibited a protective effect against CCA (OR 0.7834, 95% CI 0.6899–0.8895, P < 0.001). Similarly, 12 other immune cells showed a protective association with CCA, such as BAFF-R on B cell, HLA DR on CD33dim HLA DR+ CD11b+, BAFF-R on IgD+ CD38dim and BAFF-R on naive-mature B cell.
Effect of lipidome on immune cells
Building on the preceding analyses, we identified 9 lipid species and 24 immune cells that have a critical impact on CCA. Subsequently, we investigated the causal effects of these 9 lipid species on the 24 immune cells. MR analysis revealed significant causal effects of 4 lipid species on 8 immune cells (Fig. ). For instance, strong correlations were observed between Sphingomyelin (d34:1) levels and Naive CD4+ %CD4+ (OR 1.099, 95% CI 1.002–1.204, P = 0.0444), IgD on IgD+ CD38− unswitched memory (unsw mem) (OR 0.843, 95% CI 0.731–0.973, P = 0.0191), and IgD on unsw mem (OR 0.884, 95% CI 0.805–0.970, P = 0.0095).
Mediation analysis
After identifying the key immune cell mediators influencing CCA and evaluating the impact of exposure factors on these mediators, we quantified the proportions of their mediating effects. As shown in Fig. , Sphingomyelin (d34:1) levels exert their highest mediating influence on CCA through Naive CD4+ %CD4+, with a mediating effect value of 11.1%. Similarly, Sphingomyelin (d34:1) impacts CCA via IgD on IgD+ CD38− unsw mem with a proportion of 9.71%, and through IgD on unsw mem with a mediating effect of 7.62%. Comparable effects were observed for Phosphatidylcholine (0–16:0, 22:5), which mediates its impact on CCA via CD24 on IgD− CD38− (10.00%) and CD25 on IgD+ CD38dim (8.57%), as well as for Sterol ester (27:1/16:0), which influences CCA through BAFF-R on naive-mature B cells (4.33%). These findings underscore the cumulative impact of specific plasma lipids on CCA, mediated through distinct immune cell populations.
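The mediated proportions above follow from the 'product of coefficients' method. As a hedged illustration (assuming the calculation was performed on the log-odds scale using the reported point estimates), the sketch below reproduces the ~11.1% figure for the Sphingomyelin (d34:1) → Naive CD4+ %CD4+ → CCA pathway; the function name is hypothetical and not from the paper.

```python
import numpy as np

def mediated_proportion(or_total, or_exposure_to_mediator, or_mediator_to_outcome):
    """Indirect effect (product of coefficients) divided by the total effect,
    with all odds ratios first converted to the log-odds scale."""
    indirect = np.log(or_exposure_to_mediator) * np.log(or_mediator_to_outcome)
    return indirect / np.log(or_total)

# Point estimates quoted in the text:
#   Sphingomyelin (d34:1) -> CCA              OR 0.8129 (total effect)
#   Sphingomyelin (d34:1) -> Naive CD4+ %CD4+ OR 1.099
#   Naive CD4+ %CD4+      -> CCA              OR 0.7834
print(f"{mediated_proportion(0.8129, 1.099, 0.7834):.1%}")  # ~11.1%
```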
In recent years, a growing body of research has delved into the intricate interplay between the plasma lipidome and diseases that are mediated by the immune system , . The lipidome has a complex interplay with the immune system, which has a profound impact on human health . Lipid metabolism is closely linked to the function of immune cells. Lipids in the cell membrane, such as sphingomyelin and phosphatidylcholine, can regulate the fluidity and permeability of the cell membrane, thereby affecting cell signaling and intercellular communication. An increase in the level of sphingomyelin can promote the formation of lipid rafts, which in turn regulate the activation and function of immune cells. Metabolites of phosphatidylcholine can activate the PI3K/AKT signaling pathway, which is crucial for cell survival and proliferation. Once the balance among these factors is disrupted, it may lead to immune dysregulation.
Utilizing MR, we have conducted in-depth research on the association between specific plasma lipid species and CCA, providing compelling insights into their complex interplay. Sphingomyelin (d34:1), Phosphatidylcholine (0–16:0, 22:5), and Sterol ester (27:1/16:0) levels are all negatively correlated with CCA risk, suggesting that these specific plasma lipid species may have a protective effect against CCA. Our mediation analysis has unveiled the pivotal role that certain immune cells play in the influence of the plasma lipidome on CCA. The plasma lipidome, comprising a diverse array of lipid species, is generated and released by various cell types, including tumor cells themselves, and can also be exposed on the membrane surface of different cells. These lipids serve not only as structural components but also as active participants in cellular signaling and communication.
Sphingolipids, including sphingomyelin, have been demonstrated to play crucial roles in the regulation of T cell responses . The sphingolipid metabolism pathway is a complex network centered on ceramide, which can be converted to sphingomyelin by sphingomyelin synthase (SMS) and degraded back to ceramide by sphingomyelinases (SMases) . This dynamic balance, known as the SM cycle, has critical regulatory functions in apoptosis, autophagy, and cell survival, all of which are pertinent to T cell maturation and function. Recent studies have also underscored the impact of T cell receptor (TCR) stimulation on the sphingolipid biosynthesis pathway. Activated T helper (Th) cells significantly elevate the levels of ceramide, a key hub in sphingolipid metabolism, leading to the generation of sphingomyelin, sphingosine, and glycosphingolipids . Specifically, TCR stimulation results in an almost twofold increase in the expression of CerS2 and CerS5 in activated Th cells, which synthesize specific ceramide species that are essential for glycosphingolipid biosynthesis. This upregulation of CerS2 expression, along with the subsequent increase in ceramide (d18:1/24:0) and related glycosphingolipids (hexosyl Cer and di-hexosyl Cer), highlights the significance of sphingolipids in T cell activation and differentiation. The level of Sphingomyelin (d34:1) participates in CCA tumorigenesis by affecting IgD on IgD+ CD38− unsw mem, IgD on unsw mem, and Naive CD4+ %CD4+. Sphingomyelin (d34:1), composed of a sphingosine molecule and a fatty acid, is a dominant sphingolipid molecule in mammalian cell membranes, found in cellular organelles such as the cell membrane, endoplasmic reticulum, Golgi apparatus, and lysosomes.
Studies have found that the abnormal accumulation of Sphingomyelin (d34:1) in the tumor cell membrane is associated with the occurrence, development, and immune evasion of cancer . It can form hydrogen bonds, altering the fluidity and permeability of the cell membrane, thereby affecting cell-to-cell contact inhibition and signal transduction. Additionally, Sphingomyelin (d34:1) can also play a role in the tumor microenvironment, mainly by affecting tumor angiogenesis, invasion, and metastasis, and may be involved in drug resistance through tumor-derived exosomes . Phosphatidylcholine (0–16:0, 22:5) levels can exert their influence on CCA through CD25 on IgD+ CD38dim and CD24 on IgD− CD38−. Phosphatidylcholine (0–16:0, 22:5) is composed of glycerol, two fatty acid chains, and a phosphorylcholine group. It is a major component of the cell membrane, affecting its fluidity and permeability, while also participating in cell signaling and intercellular communication. In recent years, an increasing number of studies have identified key enzymes in the Phosphatidylcholine (0–16:0, 22:5) metabolic pathway as significant factors in various biological processes and diseases. For instance, lysophosphatidylcholine acyltransferase 1 (LPCAT1) is upregulated in multiple gastrointestinal cancers, including liver cancer, colorectal cancer, and CCA. LPCAT1 promotes cancer cell proliferation, migration, and invasion, and is associated with the early recurrence of these tumors – . Cholesterol is another key lipid that significantly influences the fluidity of cellular membranes . It is an essential component of cell membranes and plays a critical role in modulating membrane fluidity, permeability, and the formation of lipid rafts – . In CCA, alterations in cholesterol levels can affect the activity of membrane-bound proteins, including those involved in cell signaling and immune recognition. Our results revealed that Sterol ester (27:1/16:0) levels can mediate their impact on CCA through BAFF-R on naive-mature B cells. Sterol esters (27:1/16:0) consist of a sterol molecule, such as cholesterol, and a fatty acid molecule, primarily existing within cells as a form of energy storage . However, they are also involved in cell signaling and the composition of the cell membrane. Sterol esters (27:1/16:0) can be hydrolyzed by esterases within cells, releasing sterols and fatty acids to participate in cellular energy metabolism or act as signaling molecules. Studies have found that tumor cells typically increase the synthesis and accumulation of sterol esters (27:1/16:0) to support their rapid proliferation needs. Additionally, sterol esters (27:1/16:0) may affect immune cells in the tumor microenvironment, such as B cells, regulatory T cells, and tumor-associated macrophages, thereby influencing tumor growth and metastasis . Our research has uncovered a significant negative correlation between specific lipid components, such as Sphingomyelin (d34:1), Phosphatidylcholine (0–16:0, 22:5), and Sterol ester (27:1/16:0), and the risk of CCA. In terms of clinical applications, changes in the levels of these lipid components may serve as potential biomarkers for early diagnosis or risk prediction. Specifically, the levels of Sphingomyelin (d34:1), Phosphatidylcholine (0–16:0, 22:5), and Sterol ester (27:1/16:0) influence CCA by affecting different subpopulations of immune cells.
It is possible to develop a biomarker panel for early CCA diagnosis by detecting the levels of these lipids in plasma and combining them with other clinical indicators. For instance, by establishing a predictive model based on lipid components and immune cell markers, the accuracy of early CCA diagnosis can be enhanced. However, there are several limitations in this study that need to be considered. First, our research was primarily conducted on the European population, hence further studies are required to extend our findings to other ethnic groups. Second, we used a more lenient threshold to assess the results, which may increase the number of false positives, while also allowing for a more comprehensive evaluation of the strong association between lipidome and CCA.
Our results showed that Sphingomyelin (d34:1), Phosphatidylcholine (0–16:0, 22:5), and Sterol ester (27:1/16:0) exhibit significant negative correlations with the risk of CCA, suggesting that they may have a protective effect against CCA. Moreover, the study revealed the mediating role of immune cells in the influence of the lipidome on CCA, particularly the impact of Sphingomyelin (d34:1) through different subsets of B cells and T cells. Mediation effect analysis further quantified the indirect influence of these lipid species on the risk of CCA mediated by immune cells. These findings not only provide a new perspective on the role of the lipidome and immune cells in the development of CCA but also offer a scientific basis for future research and the development of new therapeutic strategies.
|
Conjunctiva in strabismus surgery – to stitch or to stick? – A randomized clinical trial | 14c91f8e-2388-426c-b70d-5d04e3d9d4df | 10941947 | Suturing[mh] | This was a prospective interventional randomized double-blinded single-center study conducted at our eye institute between November 2020 and March 2022 to compare the post-operative outcome measures after using vicryl suture and fibrin glue for conjunctival closure in strabismus surgery. The institutional ethical committee approval was obtained before beginning the study. The study was registered in the Clinical Trial Registry, India (CTRI/2020/11/029338). Written informed consent according to the tenets of the Declaration of Helsinki was obtained from adult patients, and a parental consent form and an assent form were obtained for pediatric patients, prior to inclusion in this study. Patients aged 5–50 years with concomitant exotropia or esotropia undergoing unilateral horizontal strabismus surgery were included. Patients with a history of previous eye surgery (e.g., pterygium, scleral buckle for retinal detachment, previous strabismus surgery, trabeculectomy), trauma involving the conjunctiva, or any ocular surface disorder were excluded from the study.
Randomization and blinding
Patients were randomized 1:1 to either the vicryl suture group or the fibrin glue group using a computer-generated randomization list. Patients were not aware of their treatment group prior to surgery, and the outcome measures were noted by a masked assessor post-operatively (double-blind). The figure shows the patient disposition flowchart.
Conjunctival closure technique after strabismus surgery
All surgeries were performed by a single surgeon. After the surgery, in the fibrin glue group, the surgical field, including the location on the sclera where the glue was intended to be applied, was dried meticulously with the help of a sterile cotton bud. Fibrin glue (Tisseel VH Fibrin sealant; Baxter AG, Vienna, Austria) was applied sequentially: initially, one or two drops of thrombin were applied to the area of interest, followed by one or two drops of fibrinogen. After application, the tissue was pressed gently over the glue for 3 minutes with two forceps for firm adhesion due to the polymerization of the fibrin glue. In the vicryl suture group, the incised conjunctiva was apposed with four or more interrupted sutures with 8-0 polyglactin suture (Vicryl®; Johnson and Johnson, Livingston, UK); the loose ends of the suture were cut close to the knot.
Outcome measures
The outcome measures included post-operative conjunctival inflammation and wound apposition using slit-lamp examination, patient comfort with the help of a questionnaire, and conjunctival thickness using anterior segment optical coherence tomography (AS-OCT). A pre-operative horizontal AS-OCT using CIRRUS HD-OCT (device model 5000) was taken at the planned site of surgery. The conjunctival thickness was measured at 3 mm from the scleral spur and was compared with the conjunctival thickness at the same site post-operatively at 6 weeks. To scan the AS-OCT images of the medial aspect of the bulbar conjunctiva, the patient gazed in the temporal direction and vice versa for the lateral aspect while keeping the head straight. The conjunctival thickness was measured using central corneal thickness measurement software included in the OCT device. Three-layer structures in the conjunctiva, that is, the conjunctival epithelium, conjunctival stroma, and Tenon’s capsule, were identified. With the help of a slit-lamp biomicroscope (Topcon SL 1E, Topcon Corp, Japan), we graded the conjunctival inflammation in the operated quadrant on day 1, week 2, and week 6 post-operatively using a modified conjunctival inflammatory index in which hyperemia, chemosis, and discharge were rated on a 0–3 scale (0 none; 1 mild; 2 moderate; 3 severe) for a maximum possible inflammation score of 9. Patient comfort following surgery on day 1, week 2, and week 6 was obtained with the help of a questionnaire. The symptoms assessed were pain, redness, irritation, watering, and foreign body sensation. Patients were asked to grade the symptoms from 0 to 4 using a five-point scale adapted from Lim-Bon-Siong and associates. In patients below 12 years of age, the assistance of the parents or guardians was taken to answer the questionnaire.
Sample size calculation
The sample size was estimated using the formula N = [Z(1−α/2)]² × SD² / D², where N = required minimum sample size, Z(1−α/2) = standard normal variate corresponding to the 95% confidence level = 1.96, SD = standard deviation of the variable = 0.2, and D = margin of error of 5% (standard value, 0.05). This gives a sample size of 61 to achieve 80% power with a 5% level of significance to detect the differences between the two groups. After adding contingencies such as non-response rate or record error, the sample size was finalized at 64, distributed equally between the two arms of the study. Participants were recruited to the study by purposive sampling. Allocation to the two groups was made by systematic random sampling.
Statistical analysis
Data were entered in Microsoft Excel version 2016 and were analyzed using a trial version of Statistical Package for the Social Sciences (SPSS) version 25. Data were summarized by the mean and standard deviation (SD). The normality of the data was tested by the Kolmogorov–Smirnov test. An unpaired t-test was used for data with normality, and the Mann–Whitney U test was used for the data where normality was not shown to test the difference between the two groups. For the test of significance, a probability (P) value of less than 0.05 was considered statistically significant.
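As a quick numerical check of the sample-size formula above, a minimal sketch using the stated values Z = 1.96, SD = 0.2 and D = 0.05 (the function name is illustrative, not from the paper):

```python
def minimum_sample_size(z: float = 1.96, sd: float = 0.2, d: float = 0.05) -> float:
    """N = [Z(1-alpha/2)]^2 * SD^2 / D^2 for estimating a mean within margin D."""
    return (z ** 2) * (sd ** 2) / (d ** 2)

n = minimum_sample_size()
print(f"{n:.1f}")  # ~61.5 -> reported as 61, then padded to 64 (32 eyes per arm)
```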
Sixty-four eyes of 64 patients in the age group of 5–50 years of age having concomitant exotropia or esotropia undergoing unilateral horizontal muscle strabismus surgery were included in this study. The suture group contained 32 eyes of 32 patients (13 males and 18 females). The fibrin glue group also contained 32 eyes of 32 patients (17 males and 15 females). The mean age in the suture group and the glue group was 19.1 years and 19.3 years, respectively. The fibrin glue group performed better than vicryl sutures for most of the symptoms like redness, irritation, watering, and foreign body sensation at day 1 (P < 0.001), 2 weeks (P < 0.001), and 6 weeks post-operatively ( P = 0.023) . However, the pain was found to be significantly lesser in glued eyes on post-operative day 1 only, and no significant difference in pain was noticed in between the two groups at 2 weeks and 6 weeks. shows the comparison of mean post-operative sign scores between the two groups. Among the clinical signs, conjunctival hyperemia was significantly lesser in the fibrin glue group at 2 weeks post-operatively (P < 0.001). The remaining signs, like chemosis and discharge, showed no significant differences between the two groups. On the first post-operative day, 63 out of 64 eyes (98.43%) had well-apposed conjunctiva. However, 1 out of 64 eyes (1.56%) cases in which fibrin glue was applied had 2 mm retraction of conjunctiva which healed on its own without any intervention. The conjunctival thickness measured using AS-OCT at 6 weeks revealed that the thickness increased significantly post-operatively in the suture group as compared to that in the glue group ( P < 0.001 medial site, P = 0.004 lateral site) . Various parameters of healing were compared between the patients of <18 years and ≥18 years of age separately for two study groups. None of the mean estimates of the pediatric age group (<18 years) differ significantly from that of the adult age group (≥18 years).
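The group comparisons reported above follow the analysis plan in the Methods (Kolmogorov–Smirnov normality check, then an unpaired t-test or Mann–Whitney U test). A minimal sketch of that decision rule, using entirely hypothetical symptom scores rather than the study data:

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Return the test used and its P value: unpaired t-test if both samples pass
    a Kolmogorov-Smirnov normality check, otherwise the Mann-Whitney U test."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    normal = all(
        stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue > alpha
        for x in (a, b)
    )
    if normal:
        return "unpaired t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

# Hypothetical day-1 pain scores (0-4 scale), for illustration only
suture_scores = [2, 3, 2, 2, 3, 1, 2, 3]
glue_scores = [1, 1, 2, 0, 1, 1, 2, 1]
print(compare_groups(suture_scores, glue_scores))
```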
Fibrin glue usage is a better closure technique for most features including signs and symptoms. Fibrin glue is significantly better when it comes to redness, watering, irritation, and foreign body sensation, mostly on day 1 and 2 weeks post-operatively. Fibrin glue is a better conjunctival closure technique for these symptoms even at 6 weeks post-operatively, but the symptom score does not differ significantly from the vicryl sutures group at 6 weeks post-operatively. The pain was less on post-operative day 1 among the fibrin glue group significantly, and none of the study participants suffered from pain by the end of 2 weeks. Similarly, no signs were acknowledged in any study participants, irrespective of the closure techniques, by the end of 6 weeks. However, conjunctival hyperemia was present in both groups on day 1, but by the end of 2 weeks, the individuals in the group with the fibrin glue closure technique improved significantly. In the study conducted by Lee et al . in 2007, the pain and tearing were significantly lower in the fibrin glue group than in the suture group at day 1 and week 1 post-operatively. A study conducted by Biedner and Rosenthal showed signs of conjunctival injection or chemosis and subjective symptoms of redness and tearing were worse in the suture group compared with the fibrin glue group on post-operative days 1 and 5; however, by 14 days, there was no difference between the groups. It is to be noted that the suture material used in the study was 6-0 polyglactin. Dadeya and Ms conducted a similar study in 2001 in India and found that discomfort and excessive watering of the eyes were reported in 42% of the patients whose conjunctiva was sutured as opposed to none of the patients with fibrin glued eyes. This was evident only in the immediate post-operative period, and after 2 to 3 weeks, both the groups had similar symptoms and signs, which is similar to the present study. A study conducted by Basmak et al . showed that only pain and watering differed significantly between sutured eyes and glued eyes, only on post-operative day 1. The remaining symptoms and signs did not differ between the two groups in a noteworthy manner. This contrasts with the findings of the present study. None of the study participants in the present study had any wound gaping by the end of 6 weeks. There was only one patient (3.1%) in the fibrin glue group of age 8 years who developed a conjunctival wound gaping in the temporal quadrant of around 2 mm on post-operative day 1, which reduced to 1 mm on post-operative 2 weeks and healed completely without the need of any suturing by the end of 6 weeks. Thus, proper excision of excessive tenon’s fascia, application of gentle pressure over the reposited conjunctiva after application of the glue for a minimum of 3 minutes for firm adhesion, followed by application of an eye patch or bandage at least 1 day post-operatively, and careful removal of the eye patch on day 1 of surgery can help in good conjunctival wound apposition and lesser risk of conjunctival wound retraction post-operatively. Dadeya and Ms reported the need for intervention in the form of suturing for 4 out of 19 eyes (21% cases), in which limbal conjunctiva was primarily closed by fibrin glue. Basmak et al . reported that one eye in the suture group had to undergo repeat suturing because of a 3 mm large conjunctival opening. Lee et al . 
showed a conjunctival opening of around 1–2 mm by the end of 2 weeks in 1 case of the glued eyes (i.e., in 10.5%), which healed spontaneously at the end of 6 weeks, just like a case in the present study. With this, it may be inferred that age does not influence conjunctival wound healing or healing time after the surgery in either of the groups. Conjunctival thickness was used as a measure of inflammation and wound healing. The thicker the conjunctiva, the greater the underlying inflammation and the more delayed the wound healing. Both groups performed equally by the end of 6 weeks in relation to clinical measures of wound healing, which included all the signs and symptoms studied, where we noticed that these clinical features had reduced in both groups by the end of 6 weeks. The conjunctival thickness measured at 6 weeks using AS-OCT showed that the thickness increased significantly post-operatively in the vicryl suture group compared to that in the fibrin glue group. This indicates that even though conjunctival healing appears complete on clinical examination and the patient is asymptomatic, inflammation and healing continue to occur underneath the conjunctival surface in both groups but were found to be greater in the suture group. The strengths of our study include the use of AS-OCT for measuring differences in conjunctival thickness pre- and post-operatively at 6 weeks as a measure of wound healing. The study does have some limitations. The course of change in conjunctival thickness at day 1 and week 2 using AS-OCT was not studied. The cost difference between using fibrin glue and suture was not analyzed. We also had difficulty in obtaining the questionnaire from children less than 7 years of age, and the help of parents or guardians was taken in such scenarios. Further prospective studies with a larger sample size and the use of AS-OCT to observe the change in conjunctival thickness as a measure of conjunctival wound healing over a longer follow-up period may provide more significant results.
Fibrin glue can reduce early post-operative pain and inflammation and eventually provide a more comfortable course for conjunctival closure after strabismus surgery. Hence, limbal conjunctival closure with fibrin glue in strabismus surgery can be considered an alternative to vicryl sutures in the absence of cost constraints.
Microfocus computed tomography for fetal postmortem imaging: an overview | a92eba54-e41b-461b-92b6-b0422d06e996 | 10027643 | Forensic Medicine[mh] | Fetal autopsy is the modality of choice for diagnosing causes of stillbirth and intrauterine fetal demise (IUFD) and for confirming congenital anomalies. It offers clinically significant findings in approximately 40–70% of cases . Knowing the cause of their loss provides consolation for bereaved parents and offers important information for management of future pregnancies . Over the last years parental consent for autopsy has dropped because parents often view the procedure as too invasive. It is also technically difficult in fetuses at early gestation . To overcome these issues, new and less invasive modalities are being actively proposed. A promising alternative to autopsy is microfocus computed tomography (micro-CT) , a technique that has already made its mark in non-medical industries, e.g., non-destructive precision engineering, ecology and geosciences . Like conventional CT, it is an X-ray-based technology, but instead of a rotating gantry, micro-CT scanners have a fixed radiation source, while the samples are mounted on a rotating platform. The radiation-source-to-sample distance, as well as sample-to-detector distance, can be altered to achieve much higher resolutions, up to sub-micron (<µm) level. In comparison, conventional CT scanners typically have maximum resolutions of 500–1,000 μm. Scan time and radiation dose are typically much higher in micro-CT, clearly making it less suited to imaging live patients, but ideally suited to imaging postmortem fetuses and specimens. Before imaging, the fetus is treated with staining agents to allow visualization of soft tissues, which otherwise offer very little contrast. Staining is frequently done using iodine compounds as a contrast agent, usually by submerging the fetus in the solution. This technique is referred to as diffusible iodine-based contrast-enhanced CT (diceCT) . Because staining time is directly related to diffusion speed, it takes longer in larger fetuses, ranging from hours to several weeks . Because this method is non-destructive, the fetus can be returned to the parents as soon as scanning is complete or after the discoloration caused by the iodine has been reversed. Imaging is preferred by parents as an alternative to autopsy because imaging is less invasive and does not leave disfiguration . The opportunities for fetal imaging and research provided by micro-CT imaging have recently been reviewed . Advances have been made with extensive scan and staining protocols , such as the use of buffered Lugol solution (B-Lugol), which limits the extent of tissue shrinkage when staining fetuses . In this review we provide an overview of the latest research concerning micro-CT imaging of human fetuses, with a special emphasis on diagnostic accuracy, endovascular staining approaches, placental studies and the reversibility of staining. We discuss new methods that could prove suitable for larger fetuses as well as other relevant techniques that might help to forward fetal postmortem imaging. This review covers data that were previously published, so no ethical approval was required.
The current reference standard for diagnosis in a postmortem setting is invasive autopsy. Through dissection and microscopic analyses, organs can be visualized to study the anatomy and pathology and to establish the cause of death. Recent studies comparing fetal micro-CT imaging to autopsy found 93% concordance for overall diagnosis among more than 250 cases; in only 1% of these cases did fetal autopsy provide a diagnosis that could not be established using micro-CT scanning . These findings led to a workflow in which micro-CT scanning was used to triage before autopsy, potentially avoiding autopsy in 87% of cases. Concordance between autopsy and micro-CT was achieved in most cases, with concordance of 97% across all body systems . The largest discordance was found in evaluating the cardiovascular system, which had a sensitivity of 67%; however, this number might be subject to bias because autopsy was only performed in cases where it was expected to add value . In other studies where the heart was extracted before imaging, the sensitivity of micro-CT in cardiac diagnosis has been higher (85–100%) [ , , ]. In fetuses younger than 16 weeks of gestation and in small or macerated fetuses, autopsy can sometimes be difficult or even impossible because of the technical challenges. For example, the fragile and semifluid fetal brain is generally removed from the skull by submersion in a water bath but remains hard to evaluate. Micro-CT imaging, however, can even be used in embryos (Fig. ) before 8 weeks of gestation and in macerated fetuses . Although little is known about pathophysiology at this early stage of gestation because of the lack of data, micro-CT has been found to be an equally good, if not better, alternative to autopsy for postmortem imaging of fetuses at early gestation [ – , ]. To ascertain a diagnosis, histology can be of added value. However, staining with iodine or other solutions and micro-CT scanning might hamper further histological analysis because of the potential disturbances in tissue integrity. Lupariello et al. used cardiac samples to determine the effects of staining and scanning samples prior to histology. Samples were divided into two groups, either stained with a combination of Lugol solution, methanol and Tween (Thermo Scientific, Waltham, MA) or perfused with Microfil (Flow Tech Inc., S Windsor, CT), an endovascular casting agent. After micro-CT scanning, samples were stained with immunohistochemistry, hematoxylin or Masson trichrome. All samples demonstrated histological staining in all tissues, and the Microfil samples showed a brown substance covering the endothelial layer of the arteries . Although the authors did not find that staining hampered further microscopic evaluation of the tissue, the micro-CT scanning itself appeared to alter proteins in these cardiac samples, possibly the result of thermal damage caused by scanning . The extent to which other proteins are denatured or affected remains unknown. This raises the question of whether deoxyribonucleic acid (DNA) degradation occurs during micro-CT scanning, which needs to be evaluated in future studies. One might consider taking microbiopsies for microscopic evaluation or DNA analysis prior to scanning to prevent iatrogenic thermal alteration of the sample, the extent of which is unknown.
Whole-body staining by submersion
To enhance soft-tissue differentiation, staining is needed before scanning. For human fetuses, the most common method to enhance soft-tissue contrast is staining by submerging the fetus in a staining solution . Frequently used staining agents are based on iodine, phosphotungstic acid (PTA), phosphomolybdic acid (PMA) or osmium tetroxide [ , , ]. For human fetuses, the most frequently used staining solution is a water-based solution containing two parts potassium iodide (KI) for every one part iodine (I₂), or potassium triiodide (I₂KI), also called Lugol solution. This solution is known for its rapid and deep penetration, provision of excellent contrast in all tissues and for being non-toxic and relatively cheap, all of which make it a versatile and robust staining agent. Nonetheless, successful staining by submersion in I₂KI is dependent on several factors, including the size of the fetus, the concentration of the staining solution and the staining time. As fetuses become larger with increasing age, the triiodide must penetrate deeper, requiring longer incubation periods to reach adequate staining of all structures (Fig. ) . Moreover, with increasing gestational age the skin of the fetus becomes less penetrable to iodine, prolonging the staining time. Staining concentration and time are interdependent factors in the staining process. A shorter staining time can be reached with higher concentrations because faster diffusion occurs, although higher concentrations can result in overstaining, causing loss of tissue differentiation . Although some institutions have used unbuffered Lugol solution for fetal staining without encountering significant tissue shrinkage , users should be aware that Lugol solution can cause tissue shrinkage of up to 30% [ , , ]. This negative effect on the tissue was long thought to be the result of an osmotic imbalance between the staining solution and the fetus; however, the use of an isotonic staining solution did not prevent tissue shrinkage, thus demonstrating that osmotic imbalance is not the driving factor. Dawood et al. demonstrated that acidification of the solution is the key factor in soft-tissue shrinkage rather than the osmolarity of the staining solution . Moreover, the authors found that staining in Lugol solution prepared with a buffer (B-Lugol) stabilizes the pH and almost completely prevents soft-tissue shrinkage, without affecting the staining time .
Endovascular diffusion staining
Whole-body staining by submersion has proved effective for excised tissues and small whole-body fetuses (< 20 weeks of gestation). However, as mentioned, complete penetration of the staining agent takes longer when there is increased distance between surface and center of the fetus and requires large volumes of submersion fluid in bigger, i.e. older fetuses. Moreover, the increased skin density and ossified structures (e.g., the skull) may form barriers and hamper diffusion in older fetuses. This is particularly evident when aiming at complete staining of the brain in older fetuses in cases where the skull is formed. Depending on the size of the fetus, staining time ranges from a couple of days to several weeks. When providing a postmortem scanning service as an alternative for conventional autopsy, this delay is undesirable. Endovascular infusion of the contrast agent offers a possible alternative for staining by submersion to accelerate uptake and direct delivery to peripheral tissues. In theory, this would create a larger surface area and bypass hard-to-penetrate structures such as the skin and skull. To the best of our knowledge, this technique has not been used for whole-body staining of human fetuses to date. However, it has been tried in animal experiments using mice [ – ] and newborn piglets . In mice, contrast agents were either administered into the left ventricle of the heart or retrograde through the aorta . Before administration of the contrast agent, the mice were heparinized to prevent blood clotting . After infusing the staining agent, the fetuses were fixed . Staining agents PTA and Lugol were tested, and Lugol was found to have a faster uptake (15 min vs. 30 min stain time) and showed more homogeneous staining throughout the body compared to PTA. This may be because triiodide is a smaller molecule than PTA. Lugol also resulted in better contrast in the myocardium and bronchial walls than PTA. On the other hand, PTA appeared to have more uptake in the liver, renal parenchyma and vessel walls. So, if the aim of the staining is to visualize the vasculature, then PTA would be the preferred contrast agent . Schweitzer and coworkers recently presented the first results using diluted barium sulfate solution in combination with a high-pressure (max. 1.4 bar, max. 22 L/min) angiography pump for venous and arterial filling in postmortem newborn piglets. This procedure results in staining of the entire fetus without macroscopically visible discoloration of the tissues . In their studies, adequate staining could be achieved even several days after the animals’ demise while they were stored in a cool environment. These studies approach the needs of a clinical setting with delayed staining, for example if parents wanted some time with their child before diagnostics or there were logistical delays to staining. Time between demise and staining, however, should be kept short to prevent too much clotting of the blood and decay of the body. Only one study reported on the comparison and combined administration of Lugol through submersion and endovascular infusion. Fixed mice were either injected with Lugol in the right ventricle or submerged in Lugol after their skin was incised, in both cases using a similar concentration of Lugol solution; the endovascular approach failed to deliver contrast because the staining agent remained within the heart and was not distributed to the body . 
This is likely the consequence of specimen fixation prior to infusion of the agent and the lack of perfusion with heparin to prevent clotting. Endovascular delivery should be further explored because it might dramatically reduce staining time . However, it becomes increasingly technically difficult to administer a staining agent in smaller vessels, and for this reason submersion is likely to remain the staining method of choice for the smallest fetuses. Intravascular administration of various staining agents and concentrations might provide new ways of visualizing anatomy of large fetuses and thereby form a basis for future studies, although any interference with the body including needling or injection might be unacceptable to parents.
The placenta is the chief regulator of nutrient supply for the fetus, and placental disorders such as pre-eclampsia can impact the health of both mother and fetus. Pathogenesis of pre-eclampsia is not fully understood, and by visualizing the smallest vasculature of the placenta, new light could be shed on this problem and advance our understanding of such diseases. Two umbilical arteries and one umbilical vein provide the blood circulation of the fetus. They branch out in the placenta to form a complex network that is anatomically variable and is, even in healthy placentas, not fully understood. It has been investigated using micro-CT . For intravascular and placental research, Microfil and BriteVu (Scarlet Imaging, Murray, UT) are two frequently used casting agents . These agents are infused into capillaries while in liquid form and are allowed to solidify into a highly dense intravascular cast that can be visualized using various medical imaging techniques, micro-CT among them. The contrast agent does not leave the vessel and does not cause staining of soft tissue and therefore provides visualization exclusively of the vascular anatomy . In general, care must be taken that the tissue is not perfused with a pressure higher than the physiological blood pressure because this leads to damage of the vessels and overestimation of vascular volume from vessel expansion. Microfil is a lead-based substance designed for visualizing microcirculation of vessels larger than 100 microns . Before administration, the vessels must be flushed with sodium chloride and perfused with heparin to prevent clotting for maximal perfusion of the substance. Aughwane and coworkers infused Microfil into term placentas and found that only half of the vessels larger than 200 µm 2 were perfused adequately (more than 75% filling), which might indicate that Microfil is less suitable than anticipated to visualize the microcirculation. BriteVu is a barium sulfate–based substance that is less viscous than Microfil . Using BriteVu, James and coworkers were able to visualize capillary vessels down to 10 μm wide. From their study, it is not clear whether BriteVu reached the vasculature through the entire fetus because only specific areas are shown. In detailed images (Fig. ), the vessels appear to be well perfused, supporting their findings, and one might anticipate that the entire vasculature of the fetus can be reached . These casting agents have been used to study the microvasculature of the placenta, but no researchers have been able to scan a complete placenta at the resolution needed to image the 10-µm wide micro-vessels . For vascular research it remains questionable whether it is necessary to visualize the smallest vessels, considering their anatomical variability. Depending on the expected size of the studied entity, intravascular casting agents could be an asset in future studies. Note, however, that endovascular casts might interfere with (subsequent) soft-tissue staining, especially in endovascular approaches.
After iodine staining of fetuses, there is potential skin discoloration to a brown color, which might be undesirable to parents if they are not forewarned that this can happen, or if scanning is done on rare museum specimens as an alternative for the inevitable destructive effects of dissection. Lanzetti and coworkers found that in animal museum specimens that were preserved in 70% ethanol and stained using 1% iodine in 70% ethanol, the discoloration could be reversed using 3% sodium thiosulfate (STS) in 70% ethanol within hours to a few days (depending on fetal size) . Figure shows an example of a very rare humpback whale embryo that was stained and de-stained according to this protocol. Although specimens are de-stained on the outside, they might still be stained internally . When scanning postmortem human fetuses, superficial de-staining might be sufficient for parents if this accelerates the process of returning the body to the family. For rare museum specimens this might not be sufficient and requires future studies. Moreover, it should be noted that STS does not restore a fetus to its earlier chemical state because STS reacts with the brownish triiodide and reduces it to the colorless iodide. To remove the iodide from the iodine stained specimen, further incubation steps are required, though it should be considered that these additional incubation steps might alter the fetus because of an osmotic imbalance between the incubation solution and the tissue. This process, called leaching, may easily take several weeks or months, depending on the size and volume of the fetus, and requires regular handling to refresh the solution as it saturates with iodide . Longer de-staining times are, however, less of an issue in the setting of museum specimens. The notion that ethanol-preserved specimens can be stained, scanned and de-stained without causing damage might provide an opportunity to scan even rare pathological human fetal specimens and thus give more insight in human fetal development by utilizing rare collections.
Over the last decade a considerable number of studies have been published regarding micro-CT imaging of human fetuses. The use of micro-CT offers high diagnostic accuracy in the clinical setting comparable to that of a classic autopsy. It should be noted that even after autopsy, 55% of intrauterine fetal demise cases are still left unexplained , but this information can help counsel parents appropriately following pregnancy loss. It is to be expected that the diagnostic accuracy of fetal micro-CT imaging will increase in the coming years because of advancing techniques and protocols, improved scan quality and the growing experience of developmental biologists, radiologists and radiographers. With further research, we anticipate that contrast-enhanced micro-CT will become a suitable alternative to fetal autopsy at early gestation and for rare museum specimens. Moreover, the acquired micro-CT images can be made available worldwide to be studied and rendered into three-dimensional models for both research and wider teaching.
|
Changes in Protein Expression in Warmed Human Lens Epithelium Cells Using Shotgun Proteomics | b7accf40-6f9e-4655-94ff-b2e3d85ecac9 | 11857641 | Biochemistry[mh] | Cataracts are a leading cause of blindness worldwide. With the increasing lifespan worldwide, the number of individuals whose sight is threatened by this disease is expected to increase. There are four major types of cataracts: cortical, nuclear, posterior subcapsular, and mixed. Different risk factors were associated with each risk type. Epidemiological research has identified multiple factors that are linked to an elevated risk of developing nuclear cataracts (NUCs), including greater sunlight exposure, lower socioeconomic status, poorer nutrition, smoking, cortical cataracts due to diabetes, greater sunlight exposure, and female sex . NUCs have the greatest clinical significance because they are the most common type of cataracts and occur along the visual axis. Treatments that prevent the appearance or delay the progression of NUCs have significant therapeutic value. Previous research has shown that the prevalence of NUCs, graded at level ≥1 according to the World Health Organization (WHO), regardless of racial factors . ThereforeThus, we hypothesized that the occurrence of cataracts is associated with environmental temperature. The supporting evidence includes a study on ambient temperature effects, where the lens temperature of monkeys exposed to direct sunlight at 49 °C increased to 41 °C within 10 min . Similarly, in rabbits, the lens temperature decreased by 7 °C when maintained in an environment at 4 °C . Another rabbit-based experiment demonstrated significant correlations between ambient temperature under sunlight and the temperatures of the lens and posterior chamber aqueous humor . In this study, we investigated the relationship between environmental temperature and lens temperature through an in silico computer simulation. The lens temperature was estimated to range between 35 °C and 37.5 °C depending on the ambient temperature surrounding the eyeball. However, when the ambient temperature exceeded 30 °C, the estimated lens temperature varied with age, showing an increase in older individuals . Our study showed that, as environmental temperatures rise, the temperature of the eye lens increases to 35–37.5 °C or higher, which correlates with the development of NUCs. The temperature increase, particularly in the lens nucleus, coincides with the opacity area of the cataract. When the lens temperature exceeds 37.5 °C, cumulative heat exposure is positively correlated with NUC incidence . This suggests that prolonged exposure to elevated temperatures, especially with aging, may increase the risk of developing NUCs. In addition, we previously investigated the relationship between temperature and NUC incidence in the rat whole lens (including the epithelium, cortex, and nucleus) using a shotgun proteomic analysis approach and showed that the levels of actin, tubulin, vimentin, filensin, and fatty acid-binding protein 5 decreased under warming-temperatures (37.5 °C) . However, it remains unclear whether similar results can be obtained in the human lens, and the detailed mechanisms underlying these findings have yet to be elucidated. Based on this background, identifying the expression of proteins that fluctuate under warming conditions in human lens cells and discussing preventive measures could contribute to the clinical prevention of NUCs. 
In this study, we employed a shotgun proteomic analysis approach in iHLEC-NY2 (human lens epithelial cells) to investigate the cataractous factors that are relevant to normal and warming conditions.
2.1. Culture Cells
The immortalized human lens epithelial cell line iHLEC-NY2 was used as described by Yamamoto et al. . Briefly, the iHLEC-NY2 cell line (source of the cell line “Fujita Health University, Research Promotion Headquarters”), derived from human lens epithelial cells and transfected with modified SV40 large T antigen , was cultured in medium containing FBS, bFGF, GlutaMAX™ I, DMEM/F12, and penicillin–streptomycin. Cells were cultured at 35.0 °C (normal-temp) and 37.5 °C (warming-temp) in a 5% CO₂ incubator. The experiment using iHLEC-NY2 was approved by the Ethics Review Committee of Fujita Health University (No. 004, approval date 1 April 2021) and the Kanazawa Medical University Biosafety Committee for Recombinant DNA Research (Approval No. 2020-18, approval date 11 November 2020).
2.2. Tryptic Digestion of Proteins Extracted from iHLEC-NY2
iHLEC-NY2 cells were homogenized using the Minute™ total protein extraction kit for mass spectrometry (Invent Biotechnologies, Inc., Plymouth, MN, USA). Protein concentrations were determined using the Bio-Rad Protein Assay Kit (Bio-Rad Laboratories, Inc., Hercules, CA, USA). Gel-free trypsin digestion was performed as previously described . Briefly, 10 µg of protein extract from each sample was reduced at 37.5 °C for 30 min using 20 mM Tris(2-carboxyethyl)phosphine in 50 mM ammonium bicarbonate buffer and 45 mM dithiothreitol. Subsequently, the proteins were alkylated with 100 mM iodoacetamide in 50 mM ammonium bicarbonate buffer at 37.5 °C for 30 min. Following this alkylation, the samples were digested at 37.5 °C for 24 h using MS-grade trypsin gold (Promega Corp., Madison, WI, USA) at a trypsin-to-protein ratio of 1:100 (w:w) according to the manufacturer’s instructions.
2.3. Identification of Proteins
The analysis was performed following our previous study , using an HTC PAL autosampler (CTC Analytics, Zwingen, Switzerland). Peptide separation occurred on a Paradigm MS4 system (AMR Inc., Tokyo, Japan) with a reverse-phase C18 column (L-column, 3-µm gel particles, 120 Å pore size, and 0.2 mm × 150 mm) at a flow rate of 1 µL/min. The mobile phase consisted of 0.1% formic acid in water (solution A) and acetonitrile (solution B), with gradient elution from 5% to 40% solution B over 120 min. Peptides were analyzed using an LTQ ion-trap mass spectrometer (Thermo Fisher Scientific, Inc.) without sheath or auxiliary gas. MS scan sequences included full-scan MS followed by MS/MS of the two most intense peaks, with parameters optimized for fragmentation. MS/MS data were searched against the SwissProt database using Mascot version 2.4.01, enabling trypsin digestion, missed cleavages, and modifications such as cysteine carbamidomethylation and methionine oxidation.
2.4. Bioinformatics
In this study, the fold change in expression was determined as the log2-transformed ratio of protein abundance (Rsc) and assessed via spectral counting . Rsc was calculated using Equation (1) as follows:
(1) Rsc = log2[(ns + f)/(nn + f)] + log2[(tn − nn + f)/(ts − ns + f)]
In addition, the normalized spectral abundance factor (NSAF) was computed using Equation (2) as follows:
(2) NSAFn = (SpCn/Ln) / SUM(SpCn/Ln)
Here, nn and ns represent the spectral counts for a protein in iHLEC-NY2 cells cultured at normal-temp and warming-temp, respectively, whereas tn and ts indicate the total spectral counts for all proteins in each sample. The correction factor, denoted as f, was 1.25. SpCn refers to the spectral count of the protein in iHLEC-NY2 cells incubated at normal-temp or warming-temp, while Ln denotes the protein length in these conditions. Proteins were considered differentially expressed when the Rsc value was greater than 2 or less than −2, which corresponded to fold changes greater than 2 or less than 0.5, respectively. This study explored the roles of proteins that exhibited notable changes under normal and warming conditions. The sequences were annotated by assigning Gene Ontology (GO) terms corresponding to molecular functions, cellular components, and biological processes, along with Kyoto Encyclopedia of Genes and Genomes (KEGG) signaling pathways, using the DAVID database ( tools.jsp , accessed on 3 February 2025) . Additionally, p-values for the GO analysis were computed through this database tool.
Protein Expression in iHLEC-NY2 With or Without Warming
In total, 437 and 485 proteins were identified in iHLEC-NY2 cultured at normal-temp and warming-temp, respectively ( A). Moreover, 615 proteins were detected in iHLEC-NY2 overall, including 307 (49.9%) present in cells cultured at both normal-temp and warming-temp, 130 (21.1%) unique to cells cultured at normal-temp, and 178 (29.0%) unique to cells cultured at warming-temp ( A). Next, we investigated the proteins expressed in the iHLEC-NY2 cells. B shows the Rsc values for the proteins identified in the cells cultured at normal-temp and warming-temp. A positive Rsc value indicated enhanced expression of proteins in the iHLEC-NY2 cells cultured at the elevated temperature, while a negative value signified reduced expression. Additionally, the NSAF value was computed for each protein identified in iHLEC-NY2 cells cultured at both normal- and warming-temp. Proteins with Rsc values greater than 2 or less than −2 were identified as candidate proteins exhibiting differential regulation in response to the different culture conditions. At the different culture temperatures, the housekeeping protein level (GAPDH, glyceraldehyde-3-phosphate dehydrogenase) did not change. We performed a GO analysis on the candidate proteins regulated in the iHLEC-NY2 cells cultured at the elevated temperature. For this analysis, we queried GO terms using the DAVID database, and the results for “molecular function”, “cellular component”, “biological processes”, and “KEGG pathway” are shown in , , and , respectively. In these categories, the detected counts were 29, 44, 43, and 16, respectively. Among these, the most abundant terms in each category are indicated. In addition, we listed proteins with expression changes at warming-temp that showed Rsc > 2 or < −2 via the label-free semiquantitative method based on spectral counting ( and ). The proteins demonstrating Rsc > 2 or < −2 numbered 30 in total; at warming-temp, the expression levels of 19 proteins were upregulated, while the expression levels of the other 11 proteins were downregulated. In this study, our focus was on the proteins downregulated at warming-temp, since underexpression typically has a greater effect on cellular function than overexpression.
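As an illustrative sketch (not the authors' pipeline), Equations (1) and (2) can be computed directly from spectral counts; the numbers in the example are hypothetical, and the |Rsc| > 2 cut-off mirrors the criterion described above.

```python
import numpy as np

def rsc(ns, nn, ts, tn, f=1.25):
    """Eq. (1): log2 spectral-count ratio, warming-temp (s) versus normal-temp (n)."""
    return np.log2((ns + f) / (nn + f)) + np.log2((tn - nn + f) / (ts - ns + f))

def nsaf(spectral_counts, lengths):
    """Eq. (2): normalized spectral abundance factor for each identified protein."""
    spc_over_l = np.asarray(spectral_counts, dtype=float) / np.asarray(lengths, dtype=float)
    return spc_over_l / spc_over_l.sum()

# Hypothetical protein: 2 spectra at warming-temp, 12 at normal-temp,
# out of 5,000 total spectra identified in each condition.
value = rsc(ns=2, nn=12, ts=5000, tn=5000)
print(f"Rsc = {value:.2f}")  # about -2.03, below the -2 cut-off (downregulated under warming)
```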
The factors in this study were actin, alpha cardiac muscle 1, actin-related protein 2, putative tubulin-like protein alpha-4B, ubiquitin carboxyl-terminal hydrolase 17-like protein 1, ubiquitin-ribosomal protein eL40 fusion protein, ribosome biogenesis protein BMS1 homolog, histone H2B type 1-M, and histone H2A.J. Keratin was also detected via proteomic analysis. However, because keratin is not present in the lens, the possibility of contamination during lens extraction has been suggested. Previous research has shown that the prevalence of NUCs, graded at level ≥1 according to the WHO cataract grading system, was notably higher in tropical and subtropical regions than in temperate and subarctic regions regardless of racial factors . In addition, it was reported that cumulative heat exposure is positively corelated with NUC incidence when the lens temperature exceeds 37.5 °C . Thus, elevated lens temperatures resulting from higher environmental temperatures may contribute to an increased risk of NUC formation. However, the exact connection between NUCs and temperature is yet to be fully understood. We demonstrated the types of proteins expressed under normal and warming conditions by using shotgun proteomic analysis and found a decrease in the specific proteins involved in actin, tubulin, ubiquitin, ribosome, and histone under warming conditions in this study. First, we determined the incubation temperature at normal-temp and warming-temp following a previous computer simulation in silico study and identified 30 proteins exhibiting > 2-fold changes in expression between iHLEC-NY2 under normal-temp and warming-temp. Furthermore, the effect on the expression system is typically more significant when a protein is underexpressed compared to when it is overexpressed. Therefore, we have focused on variations in the expression of 11 factors (the specific proteins concerned were actin, tubulin, ubiquitin, ribosome, and histone), as described in . Decreased actin and tubulin expression was observed under warming conditions . The cytoskeleton of the human eye, comprising actin microfilaments, intermediate filaments, microtubules, and their associated proteins, is essential for cellular growth, maturation, differentiation, integrity, and function. Actin microfilaments are composed of F-actin helices, which are built from G-actin subunits (47 kD) . These filaments are distributed throughout the cytoplasm, form a fine mesh under the plasma membrane, or organize into stress fibers. The processes of actin polymerization and depolymerization are modulated by actin-regulatory proteins such as gelsolin. Additionally, various associated proteins bind actin filaments to the plasma membrane, supporting the cellular architecture . Therefore, a decrease in actin levels may weaken cell membrane protein binding, resulting in lens opacity. The putative tubulin-like protein alpha-4B is a cytoskeletal protein that constitutes a part of a structure known as microtubules. Microtubules play a crucial role in maintaining cell shape, cell division, and intracellular transport. Tubulin forms microtubules by dimerizing α-tubulin and β-tubulin, thereby providing structural stability within cells. The lens cells rely on microtubules to maintain their morphology . Tubulin dysfunction can compromise microtubule stability, thus leading to alterations in cell shape and function, which may result in the loss of lens transparency. 
Furthermore, microtubules are essential for the proper transport of proteins within cells, including lens cells, where their functions are critical . Abnormalities in putative tubulin-like protein alpha-4B may disrupt protein transport, potentially causing protein aggregation within the lens. This aggregation contributes to lens opacification. Additionally, because microtubules are involved in the proliferation and maintenance of lens cells, tubulin defects can lead to cellular dysfunction, which may contribute to lens opacity. Therefore, the putative tubulin-like protein alpha-4B plays a vital role in maintaining the structural integrity of lens cells and protein transport. A reduction in putative tubulin-like protein alpha-4B under high-temperature conditions may be one of the factors that contribute to lens opacification. In addition, the expression of the proteins related to ubiquitin and ribosome in warming-temp-incubated iHLEC-NY2 was also lower than that in normal-temp-incubated iHLEC-NY2. Many of the signals that maintain lens epithelia appear to be substrates of the ubiquitin–proteasome pathway . Ubiquitin C-terminal hydrolase L17-like protein 1 is an enzyme that is responsible for protein degradation and is particularly involved in the ubiquitin–proteasome system, a key protein quality control mechanism . This system is essential for preserving cellular homeostasis by facilitating the elimination of damaged or misfolded proteins. The eL40 fusion protein consists of ubiquitin, which tags damaged or unnecessary proteins for degradation, and the ribosomal protein eL40, which is involved in protein synthesis . The BMS1 homolog is crucial for ribosome assembly, particularly ribosomal RNA (rRNA) processing and ribosomal subunit assembly . Ribosomes are essential for protein synthesis within cells, and proteins such as BMS1 are indispensable for the proper formation of functional ribosomes . Impairment of ribosome biogenesis can lead to increased production of misfolded proteins, especially in long-lived cells such as lens cells, which can contribute to cataract formation. Therefore, dysfunctions or mutations in BMS1 may increase the risk of cataract development. Histones are key proteins involved in DNA packaging within the nuclei of eukaryotic cells. They wrap DNA to form chromatin, thus enabling it to be compactly stored and to regulate gene expression . If histone modifications or structural changes adversely affect the expression of genes that are critical for maintaining lens transparency, improper protein folding and aggregation within the lens may occur, leading to loss of lens transparency. Moreover, we examined their functions by analyzing the four GO terms ( , , and ). The GO analysis indicated that the most common factors identified in the molecular function, cellular component, biological processes, and KEGG pathway categories were “protein binding”, “extracellular exosome”, “nucleosome assembly”, and “neutrophil extracellular trap formation”, respectively ( , , and ). The proteins involved in protein binding were actin and alpha cardiac muscle 1. The proteins associated with extracellular exosomes included actin, alpha cardiac muscle 1, actin-related protein 2, ribosome biogenesis protein BMS1 homolog, histone H2B type 1-M, and histone H2B type 1-M. Additionally, the protein involved in nucleosome assembly was histone H2B type 1-M. 
Taken together, it is possible that factors associated with actin, ribosomes, and histones are specifically involved in the onset of cataracts due to temperature changes. However, the present results also show that the expression of other proteins related to tubulin and histones, such as tubulin alpha-1C chain and histone-H2B type 1-C/E/F/G/I, -H2B type F-S, -H2B type 1-D, -H2A type 1-H, and -H3.1, increases at warming-temp . Therefore, changes in the tubulin and histone levels may be associated with homeostatic maintenance. Further investigations are required in order to determine whether the decrease or increase in these proteins at higher ambient temperatures plays a dominant role. It is crucial to explore whether the overexpression of certain proteins and the reduction in others at elevated temperatures are associated with lens dysfunction. In our previous study utilizing a similar shotgun proteomic analysis, we demonstrated that heating the rat whole lens (including the epithelium, cortex, and nucleus) at warming-temp resulted in reductions in actin, tubulin, vimentin, filensin, and fatty acid-binding protein 5 . Among these, both actin and tubulin were found to decrease upon heating in both the rat lens and iHLEC-NY2. These findings suggest that the observed reductions in actin and tubulin may at least be attributable to epithelial cells. Thus, this study has successfully screened lens proteins that change in response to elevated temperature, which were previously unidentified as potential causes of NUCs. As a result, it is now possible to consider temperature-related factors in NUC development, contributing to future research advancements. However, this study does not fully reflect the changes occurring in the nuclear or cortical regions of the lens since human epithelial cells were used. Moreover, additional research is required to assess the relationship between the onset of NUCs and changes in the proteins involved in actin, tubulin, ubiquitin, ribosomes, and histones. Therefore, we are planning to measure the localization and expression of the specific proteins concerning actin, tubulin, ubiquitin, ribosomes, and histones under warming-temp by using Western blotting and an immunostaining method. The conducted shotgun proteomic analysis revealed that warming decreased the expression of specific proteins involved in actin, tubulin, ubiquitin, ribosomes, and histones in iHLEC-NY2. This study could provide a valuable framework for understanding the relationship between temperature and the onset of NUCs. However, additional research is necessary to fully comprehend the mechanisms that link these factors. In addition, regarding the clinical correlation of shotgun proteomics and future directions, it is desirable to investigate whether similar protein fluctuations occur using postoperative samples from human NUC patients. Furthermore, establishing prevention or treatment strategies for nuclear cataracts by suppressing these protein fluctuations is anticipated. |
Brain tissue iron neurophysiology and its relationship with the cognitive effects of dopaminergic modulation in children with and without ADHD | c04c1e84-838c-47ef-aa0c-f45ac918afc1 | 10372187 | Physiology[mh] | Introduction Attention-deficit/hyperactivity disorder (ADHD) is one of the most common neurodevelopmental disorders of childhood, affecting 5–10% of children worldwide . ADHD is characterized by developmentally inappropriate levels of inattention, hyperactivity, and impulsivity . These core symptoms are often accompanied by deficits in cognitive control, a set of goal-directed processes involved in regulating thoughts and behaviors . One domain of cognitive control that is typically impaired in ADHD is response inhibition ; individuals with ADHD exhibit difficulty suppressing actions that may interfere with goal-directed behaviors . These response inhibition deficits put individuals with ADHD at risk for negative long-term outcomes, including substance use disorders and criminal behavior . Studies of the neural etiology of response inhibition deficits in ADHD have focused on the neurotransmitter dopamine, given its established role in modulating frontostriatal circuits important for response inhibition . It is thought that dopamine’s actions in distinct cortical-basal ganglia loops redirect information from ventromedial frontostriatal networks involved in reward processing to dorsolateral frontostriatal networks involved in cognitive control . Previous neuroimaging research suggests that dopamine neurotransmission is dysfunctional in individuals with ADHD . Positron emission tomography (PET) studies have shown that dopamine metabolism, receptor availability, and transporter function is disrupted in adults and children with ADHD . Magnetic resonance imaging (MRI) studies have further shown that children and adults with ADHD exhibit reduced activation and functional connectivity in frontostriatal regions and networks during tasks that probe attention and response inhibition . Previous research has demonstrated that modulation of the dopaminergic system improves the symptoms and cognitive deficits related to ADHD, including response inhibition . In fact, the receipt of rewards during laboratory tasks has been shown to improve response inhibition performance in individuals with ADHD and in age- and sex-matched typically developing (TD) controls . This reward-related reinforcement increases synaptic availability of dopamine in the striatum . Further, the psychostimulant methylphenidate (MPH), the current first-line treatment for ADHD, is an indirect dopamine and norepinephrine agonist. Due to MPH’s dopamine agonism via blockage of dopamine transporters, extracellular levels of striatal dopamine increase following MPH administration . Examining how response inhibition performance changes following the receipt of rewards and MPH administration, and how these performance changes relate to indirect measures of dopamine availability, will therefore shed light on the neurobiological mechanisms through which dopaminergic modulation improves response inhibition in both individuals with ADHD and TD children. Research assessing dopaminergic functioning and dopamine availability in humans in vivo is limited, especially in children, because techniques such as PET involve the use of radiation . One way to circumvent this limitation and indirectly assess dopamine availability in the brain is with magnetic resonance-based measurements of brain tissue iron. 
Iron is a cofactor of the rate-limiting enzyme tyrosine hydroxylase and of monoamine oxidase, both of which are critical for dopamine synthesis . In the human brain, iron is preferentially sequestered in regions that make up the brain’s dopaminergic reward pathway, including the basal ganglia and thalamus . These regions are also critical components of the aforementioned frontostriatal circuitry involved in response inhibition and reward-related reinforcement . Since the presence of iron increases the rate of T2* relaxation, quantifying the T2* relaxation rate (i.e., R2*) of functional MRI (fMRI) data can be used to measure basal ganglia tissue iron levels . Indeed, previous work has employed this approach to investigate basal ganglia iron content by estimating the relative T2* relaxation rate across the brain using existing fMRI data . Recent neuroimaging work using PET has confirmed that midbrain tissue iron measurements derived by quantifying the T2* relaxation rate are correlated with dopamine availability in the striatum, specifically with presynaptic vesicular storage of dopamine . Few studies to date have leveraged brain tissue iron measurements to probe dopaminergic function in individuals with ADHD. These studies have found that individuals with ADHD exhibit reduced brain tissue iron levels in the basal ganglia and thalamus relative to their age- and sex-matched TD peers , which is in line with other neuroimaging work finding reduced midbrain dopamine activity in ADHD . None of these studies have examined the relationship between dopamine-related brain tissue iron neurophysiology and response inhibition performance in individuals with ADHD. Even so, research suggests that greater levels of brain tissue iron are associated with better cognitive ability in TD children , adolescents, and young adults , as well as with responsivity to the receipt of rewards during a response inhibition task in TD adolescents and adults . However, the question of whether these relationships are consistent in individuals with ADHD, and whether dopamine-related physiology modulates the response to dopaminergic modulation in ADHD, remains. The overarching goal of this pre-registered project, therefore, is to investigate brain tissue iron content in the basal ganglia and thalamus using time-averaged normalized T2*-weighted (nT2*w) signal and to assess whether variability in the nT2*w measurement is related to responsivity to dopaminergic modulation in children with ADHD and TD children. Here, ‘dopaminergic modulation’ refers to reward reinforcement or administration of MPH, and responsivity to this modulation will be operationalized as improvement on tasks probing response inhibition. As prior work has found that individuals with ADHD have lower basal ganglia and thalamic tissue iron levels relative to their TD peers , we predict that individuals with ADHD will have higher nT2*w signal, reflecting reduced brain tissue iron levels, in these regions. Based on previous work in TD individuals , we hypothesize that individuals with lower nT2*w signal, reflecting greater tissue iron levels, will exhibit better response inhibition, as well as greater improvements in response inhibition following the administration of rewards and MPH. Materials and methods Analyses Before conducting each of the below analyses, power analyses were performed to determine statistical power to detect the expected effects. See , Expected Power for details. 
Group comparisons of demographic variables : To ensure that the ADHD and TD groups did not differ on demographic variables, including sex, race, family income, and parental education, we used two-sample chi-squared tests to compare groups on each of these variables. We additionally conducted Welch’s t-tests for unequal variance to compare groups on age, FSIQ, and Word Reading scores. Tests were FDR-corrected for seven comparisons at p < .05. Replication analysis – comparing nT2*w signal in children with ADHD and TD children : We first replicated previous work and examined whether there were group differences between children with ADHD and TD children in nT2*w signal in the whole basal ganglia and thalamus . We used separate linear regression models covarying for age and sex for each of the two ROIs, as follows: nT2*w signal ∼ group + age + sex Models were FDR-corrected for two comparisons at p < .05 . We also determined whether there were group differences in nT2*w signal in specific basal ganglia subregions (i.e., caudate, putamen, globus pallidus, and accumbens). In this secondary analysis, we again used separate linear regression models covarying for age and sex for each of the four ROIs. Here, results were FDR-corrected for four comparisons at p < .05 . One participant with ADHD was not included in this analysis as their fMRI data following placebo administration had fewer than 170 volumes included after excluding high-motion timepoints (See Motion-related quality assurance ). This left 64 participants (n ADHD = 35, n TD = 29) in this analysis. Relationship between nT2*w signal and response inhibition : To determine the relationship between nT2*w signal and response inhibition in children with ADHD and TD children, we used linear regression models covarying for age and sex using the following equation: response inhibition performance ∼ nT2*w signal + age + sex For the primary analysis, we used separate linear regression models for each response inhibition performance measure (commission errors, tau) and ROI (whole basal ganglia and thalamus) for a total of four models (two response inhibition measures x two ROIs). Statistical tests were FDR-corrected for two comparisons separately for each response inhibition measure at p < .05, as we included two ROIs in the primary analysis . In a secondary analysis that assessed regional specificity of the relationship between nT2*w signal and response inhibition performance, we extracted nT2*w signal from each basal ganglia ROI separately. We used separate linear regression models for each response inhibition measure and basal ganglia ROI (caudate, putamen, globus pallidus, and accumbens) for a total of eight models (two response inhibition measures x four ROIs). Here, statistical tests were FDR-corrected for four comparisons separately for each response inhibition measure at p < .05, as we included four basal ganglia ROIs . 
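As an illustration of the regression-plus-FDR approach just described, the sketch below fits one ordinary least squares model per ROI and applies Benjamini-Hochberg correction across the ROIs tested for a given outcome. It is a schematic example, not the study's analysis code: the data frame, file name, and column names (commission_errors_log, nt2w_basal_ganglia, nt2w_thalamus, age, sex) are hypothetical placeholders, and continuous variables would need to be z-scored first to obtain the standardized betas reported in the Results.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("participants.csv")  # hypothetical file; one row per participant

rois = ["nt2w_basal_ganglia", "nt2w_thalamus"]  # primary-analysis ROIs
betas, pvals = [], []

for roi in rois:
    # Log-transformed commission errors regressed on nT2*w signal, age, and sex.
    model = smf.ols(f"commission_errors_log ~ {roi} + age + C(sex)", data=df).fit()
    betas.append(model.params[roi])
    pvals.append(model.pvalues[roi])

# Benjamini-Hochberg FDR correction across the ROIs tested for this outcome.
reject, p_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for roi, beta, p, sig in zip(rois, betas, p_corrected, reject):
    print(f"{roi}: beta = {beta:.3f}, corrected p = {p:.3f}, significant = {sig}")
```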
Given literature indicating that the relationship between brain tissue iron and response inhibition performance is strongest in individuals with high levels of brain tissue iron , and that individuals with ADHD have reduced brain tissue iron relative to TD individuals , we performed an additional analysis in which we examined whether the relationship between nT2*w signal and response inhibition performance differed as a function of diagnostic group. In this analysis, response inhibition performance was predicted by nT2*w signal, diagnostic group, and the interaction between nT2*w signal and diagnostic group, covarying for age and sex as follows: response inhibition performance ∼ nT2*w signal + group + nT2*w signal*group + age + sex Separate linear regression models for each response inhibition measure and ROI were used, and groups of models were FDR-corrected separately at p < .05 as above (i.e., two corrections per response inhibition measure for whole basal ganglia and thalamus in the primary analysis; four corrections per response inhibition measure for caudate, putamen, globus pallidus, and accumbens in the secondary analysis). One additional participant was not considered for inclusion in this analysis due to inconsistent presentation of no-go stimuli (n = 1, TD). This left 63 participants (n ADHD = 35, n TD = 28) in this analysis. Relationship between nT2*w signal and responsivity to reward : To examine whether variability in nT2*w signal predicts responsivity to reward in children with ADHD and TD children, change in performance from the standard go/no-go task to the rewarded go/no-go task was calculated for all participants by subtracting performance measures on the rewarded go/no-go task from those on the standard go/no-go task, such that higher values (i.e., more positive) reflected greater improvements in task performance. Linear regression models covarying for age and sex were used to relate nT2*w signal to change in response inhibition performance separately for each response inhibition measure and ROI using the following equation: ∆ response inhibition performance ∼ nT2*w signal + age + sex As in the analyses above, separate linear regression models for each response inhibition measure and ROI were used. Again, groups of models were FDR-corrected separately at p < .05 . Specifically, for the primary analysis, two corrections per response inhibition measure (commission errors, tau) were made for the whole basal ganglia and thalamus. In the secondary analysis, four corrections per response inhibition measure were made for the basal ganglia subregions (i.e., caudate, putamen, globus pallidus, and accumbens). We also performed an additional analysis that examined whether the relationship between nT2*w signal and responsivity to reward differed as a function of diagnostic group. As such, we implemented linear regression models wherein change in response inhibition performance was predicted by nT2*w signal, diagnostic group, and their interaction, covarying for age and sex: ∆response inhibition performance ∼ nT2*w signal + group + nT2*w signal*group + age + sex Separate linear regression models covarying for age and sex for each response inhibition measure and ROI were used. Models were FDR-corrected in the same way as described in the previous analyses at p < .05 . That is, for each response inhibition measure (commission errors, tau) two corrections were made in the primary analysis that examined nT2*w signal in the basal ganglia and thalamus, and four corrections were made in the secondary analysis that examined nT2*w signal in the four basal ganglia subregions (i.e., caudate, putamen, globus pallidus, and accumbens). 
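The group-interaction and reward-responsivity models described above can be sketched in the same framework. The example below computes the reward change score and fits the interaction model; again, the file and column names are hypothetical placeholders, and the `*` operator in the formula expands to the two main effects plus their interaction, matching the model specified above.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")  # hypothetical file and column names

# Responsivity to reward: standard-task minus rewarded-task performance,
# so more positive values indicate greater improvement with reward.
df["delta_commission"] = df["commission_standard_log"] - df["commission_rewarded_log"]

# Does the relationship between nT2*w signal and reward responsivity differ by group?
model = smf.ols(
    "delta_commission ~ nt2w_basal_ganglia * C(group) + age + C(sex)",
    data=df,
).fit()
print(model.summary())  # the nt2w_basal_ganglia:C(group) row is the interaction of interest
```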
Two additional participants were not considered for inclusion in this analysis for missing rewarded go/no-go data (n = 1, TD) and incorrect button presses during the rewarded go/no-go task (n = 1, ADHD), leaving 61 participants (n ADHD = 34, n TD = 27) in this analysis. Relationship between nT2*w signal and responsivity to MPH : To investigate whether variability in nT2*w signal predicts responsivity to MPH in children with ADHD, change in standard go/no-go performance from placebo to MPH was calculated by subtracting performance measures on MPH from those on placebo. Here, higher values (i.e., more positive) indicate greater improvement of performance following MPH administration. Linear regression models covarying for age and sex were used to relate nT2*w signal to this change in performance separately for each response inhibition measure and ROI. As in the previous analyses, statistical tests were FDR-corrected separately for each response inhibition measure at p < .05 . In the primary analysis, statistical tests were corrected for two comparisons per response inhibition measure (commission errors, tau), as there were two ROIs (whole basal ganglia and thalamus). In the secondary analysis, statistical tests were corrected for four comparisons per response inhibition measure, as there were four basal ganglia subregion ROIs (caudate, putamen, globus pallidus, and accumbens). Three participants with ADHD were not considered for inclusion in this analysis for missing standard go/no-go data on placebo (n = 1) and on MPH (n = 2), leaving 33 participants in this analysis. For all linear regression models, standardized betas are reported in the Results section. Participants The dataset for the present study is a subset of 65 participants with ADHD and TD participants between the ages of 8–12 years who participated in a larger study assessing the effects of MPH administration on functional brain network organization (ADHD: n = 36, 17 F, mean age = 9.70 y; TD: n = 29, 12 F, mean age = 10.23 y). Participants were selected for inclusion in the current study based on fMRI and behavioral data quality. See Motion-related quality assurance and Go/no-go tasks and measures for details about data quality criteria for fMRI and behavioral data, respectively. General exclusion criteria for the sample included full scale intelligence quotient (FSIQ) less than 85 as determined using the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V; ), Word Reading subtest score less than 85 from the Wechsler Individual Achievement Test, Third Edition (WIAT-III; ), or if any of the following conditions were met: (a) diagnosis of intellectual disability, developmental speech/language disorder, reading disability, autism spectrum disorder, or a pervasive developmental disorder; (b) visual or hearing impairment; (c) neurologic disorder (e.g., epilepsy, cerebral palsy, traumatic brain injury, Tourette syndrome); (d) medical contraindication to MRI (e.g., implanted electrical devices, dental braces). Diagnostic status was assessed using the Diagnostic Interview Schedule for Children Version IV (DISC-IV; ) and the Conners 3rd Edition Parent and Teacher Rating Scales (Conners-3; ). Participants were included in the ADHD group if they met: (a) full diagnostic criteria for ADHD on the DISC-IV, or (b) intermediate diagnostic criteria (i.e., subthreshold with impairment) on the DISC-IV and full diagnostic criteria for ADHD on the Conners-3 Parent or Teacher Rating Scales. They were additionally required to be psychostimulant medication naïve. Participants were included in the TD group if they met the above criteria and did not meet diagnostic criteria for any psychiatric disorders, including ADHD, on the DISC-IV. 
Additionally, TD participants were required to have three or fewer symptoms of inattention and of hyperactivity/impulsivity on the DISC-IV. Finally, TD participants were required to have no history or presence of developmental disorders, and no history or presence of ADHD in first-degree relatives. A demographic summary including age, FSIQ as determined using the WISC-V, Word Reading subtest score from the WIAT-III, sex, race, family income, and parental education, is provided in . Procedures All procedures for the parent study were reviewed and approved by the Institutional Review Board at the University of North Carolina at Chapel Hill. Written parental consent and participant (child) assent were obtained for each participant included in the larger study. Only procedures relevant to the proposed analyses will be described here. Participants underwent two fMRI sessions approximately one week apart (mean = 10.4 days; standard deviation = 7.4 days; range = 3–42 days). Since attention fluctuates throughout the day in school-aged children , most sessions were scheduled at approximately the same time of day (i.e., between 8 am – 12 pm). Seven TD sessions (12.5%) and 16 ADHD sessions (23%) were scheduled in the afternoon. There was not a significant difference across groups in terms of when sessions were scheduled (χ2(1) = 2.4, p = .13). Two participants with ADHD and two TD participants only participated in a single session. Each MRI session included an MPRAGE anatomical T1-weighted scan and the following T2*-weighted echo-planar imaging (EPI) functional scans: two resting-state scans (five minutes each), two standard go/no-go task scans (6.5 min each), and four rewarded go/no-go task scans (six minutes each), administered in that order. During the resting-state scans, participants viewed a white fixation cross on a gray background and were instructed to lie quietly, but awake, in the MRI scanner. For the go/no-go task scans, stimuli were projected onto a screen visible to the participant through a mirror mounted to the head coil and an MRI-safe handheld button box was used to record task responses. Both tasks were presented using PsychoPy v1.85.1 . Both fMRI sessions were identical for participants in the TD group. In the case that a TD child had usable fMRI and behavioral data from both sessions, data from the session with the highest percentage of fMRI volumes retained after motion scrubbing were used, based on fMRI data exclusion criteria (See Motion-related quality assurance ). Participants in the ADHD group participated in a double-blind, randomized, placebo-controlled crossover MPH challenge. On each day of testing, participants in the ADHD group received 0.30 mg/kg MPH or placebo, rounded up to the nearest 5 mg, orally approximately one hour before scanning. Aside from the MPH challenge, fMRI sessions for participants in the ADHD group were identical. Since we were interested in how intrinsic, baseline brain tissue iron levels are related to improvements in response inhibition that follow dopaminergic modulation, such as the receipt of rewards during cognitive tasks or the administration of MPH, fMRI data collected following placebo administration were used to assess brain tissue iron in all analyses for children with ADHD. 
Furthermore, it has been proposed that brain tissue iron estimates derived from fMRI are reflective of stable properties of brain tissue , and we confirmed that this was the case in our data, both across sessions in TD children (i.e., across a weeks-long period) and between placebo and MPH sessions in children with ADHD (i.e., after a single dose of MPH; see , Supplementary analyses and Supplementary results, , ). In the analyses that examined responsivity to rewards only, behavioral data collected following placebo administration were used. In the analyses that examined the effects of MPH administration, behavioral data collected following both placebo and MPH administration were used. All analyses here were pre-registered and the protocol was submitted to Open Science Framework prior to data analysis. Go/no-go tasks and measures Two versions of a go/no-go task, a standard and a rewarded version, were administered. The versions of the go/no-go task we used were adapted from one that was initially designed to have a high proportion of errors . In both versions, eight sports balls were used as the stimuli. Two of the eight sports balls were randomly selected for each participant as ‘no-go’ stimuli. The other six sports balls were ‘go’ stimuli. Participants were instructed to respond as quickly as possible with a button press using their right index finger following the presentation of ‘go’ sports balls (73.4% of trials) and to withhold responding when ‘no-go’ sports balls were presented (26.6% of trials). The standard go/no-go task ( a ) consisted of two runs of 128 trials each, for a total of 256 trials (188 go trials and 68 no-go trials). Stimulus order was pseudorandom such that between two and four go trials preceded each no-go trial. There were 16 instances of two consecutive go trials, 10 instances of three consecutive go trials, and eight instances of four consecutive go trials, randomized for each run. Each stimulus was presented for 600 ms, with a jittered interstimulus interval (ISI) of 1250–3250 ms selected from a uniform distribution. In the rewarded go/no-go task ( b ), stimuli and timing were identical to the standard go/no-go task, with the addition of feedback after each response. Feedback (coins for correct trials and empty circles for incorrect trials) was presented for 600 ms after a brief delay that was jittered identically to the ISI between trials. Due to the longer trial length, the rewarded go/no-go task consisted of four runs of 64 trials each, again for a total of 256 trials. The instructions for the rewarded go/no-go task were identical to the standard version, but participants were also told that they would be rewarded for correct, fast responses on go trials (≤650 ms) and for correct non-responses on no-go trials. Participants received one penny per correct/fast go trial and five pennies per correct no-go trial. Participants received the money they accumulated over the four runs of the rewarded go/no-go task at the end of each visit. Individual runs of the standard and rewarded go/no-go tasks were excluded for an omission rate on go trials that was greater than three standard deviations from the mean omission error rate, separately for each task, as determined using standard and rewarded go/no-go task data collected from participants with ADHD following placebo administration and TD participants. 
Specifically, individual runs of the standard go/no-go task were excluded if the proportion of omission errors exceeded 0.44 and individual runs of the rewarded go/no-go task were excluded if the proportion of omission errors exceeded 0.39. This was to ensure that participants were awake and actively engaging with the task. Additionally, individual go trials with response times faster than 200 ms were excluded from analyses, as exceptionally fast response times are indicative of anticipatory responses . Behavioral performance was indexed using the proportion of commission errors and response time variability. The proportion of commission errors was calculated as the proportion of no-go trials on which a response was made. Response time variability was quantified using tau, which is derived from the exponential-Gaussian distributional model of response times and assesses infrequent, extremely slow response times that are indicative of attention lapses . Tau quantifies the mean and standard deviation of the exponential component of the response time distribution. To calculate tau, the timefit function from the ‘retimes’ package in R was used to bootstrap the response times associated with correct go trials 5000 times, and the mean and standard deviation of the exponential distribution of response times was calculated. For summary statistics of behavioral performance, see in . The distributions of the proportion of commission errors and tau were assessed for normality using the Shapiro-Wilk test of normality ( α = 0.05) . Tau was not normally distributed, and the proportion of commission errors on the standard go/no-go task in all participants and following MPH administration in participants with ADHD was not normally distributed (all p-values < 0.04; see , ). Therefore, log-transformed values of both the proportion of commission errors and tau were used in all analyses. MRI data acquisition All neuroimaging data were collected at the University of North Carolina at Chapel Hill Biomedical Research Imaging Center. Data were acquired with a 32-channel head coil on a 3-Tesla Siemens MAGNETOM Prisma-fit whole-body MRI machine. High resolution T1-weighted anatomical scans were acquired using a magnetization prepared rapid acquisition gradient echo (MPRAGE) sequence (TR = 2400 ms, TE = 2.22 ms, FA = 8°, field of view 256 × 256 mm, 208 slices, resolution = 0.8 mm × 0.8 mm × 0.8 mm). Whole-brain T2*-weighted fMRI data were acquired using an echo-planar imaging (EPI) sequence (39 axial slices parallel to the AC–PC line, slice thickness 3 mm, interslice distance = 3.3 mm, TR = 2000 ms, TE = 25 ms, FA = 77°, echo spacing = 0.54 ms, field of view 230 mm × 230 mm, voxel dimensions: 2.9 mm × 2.9 mm × 3.0 mm). For the resting-state scan, 300 timepoints were collected (150 timepoints and five minutes per each of two runs). A total of 390 timepoints were collected during the standard go/no-go task (195 timepoints and 6.5 min per each of two runs), and 740 timepoints were collected during the rewarded go/no-go task (185 timepoints and 6.17 min per each of four runs). fMRIPrep anatomical and functional data preprocessing The following text has been adapted from the fMRIPrep boilerplate text that is automatically generated with the express intention that it is used in manuscripts. It is released under the CC0 license. All T1w images were corrected for intensity non-uniformity (INU) with N4BiasFieldCorrection and distributed with ANTs 2.2.0 ( , RRID:SCR_004757). 
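For the response time variability measure described above, the study fit the ex-Gaussian distribution with the timefit function from the R 'retimes' package (5000 bootstrap samples). Purely as an illustration of the same idea in Python, the sketch below bootstraps an ex-Gaussian fit using scipy's exponnorm distribution, in which tau equals K multiplied by the scale parameter; the response times shown are made up, and fits on very small samples can be unstable.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)

def bootstrap_tau(rts_ms, n_boot=5000):
    """Bootstrap estimate of tau, the exponential component of the ex-Gaussian.

    scipy parameterizes the ex-Gaussian as exponnorm(K, loc, scale), where
    mu = loc, sigma = scale, and tau = K * scale.
    """
    taus = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(rts_ms, size=len(rts_ms), replace=True)
        k, loc, scale = exponnorm.fit(sample)
        taus[i] = k * scale
    return taus.mean(), taus.std()

# Hypothetical correct-go response times in ms (anticipatory responses < 200 ms already removed).
rts = np.array([412, 455, 389, 510, 620, 700, 433, 398, 477, 540, 830, 460])
tau_mean, tau_sd = bootstrap_tau(rts, n_boot=200)  # small n_boot to keep the example fast
print(f"tau = {tau_mean:.1f} ms (bootstrap SD = {tau_sd:.1f} ms)")
```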
The T1w reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as the target template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) was performed on the brain-extracted T1w using FAST (FSL 5.0.9, RRID:SCR_002823, ). A T1w-reference map was computed after registration and INU-correction of the T1w images used mri_robust_template (FreeSurfer 6.0.1, ). Brain surfaces were reconstructed using recon-all (FreeSurfer 6.0.1, RRID:SCR_001847, ), and the brain mask estimated previously was refined with a custom variation of the method to reconcile ANTs- and FreeSurfer-derived segmentations of the cortical gray matter of Mindboggle (RRID:SCR_002438, ). Volume-based spatial normalization to standard MNI152NLin2009cAsym space was performed via nonlinear registration with antsRegistration (ANTs 2.2.0), using brain-extracted versions of both the T1w reference and the T1w template. ICBM 152 Nonlinear Asymmetrical template version 2009c was selected for spatial normalization ( , RRID:SCR_008796; TemplateFlow ID: MNI152NLin2009cAsym). For each subject’s BOLD runs (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version was generated using a custom methodology of fMRIPrep. A deformation field to correct for susceptibility distortions was estimated based on fMRIPrep’s fieldmap-less approach. The deformation field resulted from coregistering the BOLD reference to the same-subject T1w reference with its intensity inverted . Registration was performed with antsRegistration (ANTs 2.2.0), and the process was regularized by constraining deformation to be nonzero only along the phase-encoding direction and modulated with an average fieldmap template . Based on the estimated susceptibility distortion, an unwarped BOLD reference was calculated for a more accurate coregistration with the anatomical reference. The BOLD reference was then coregistered to the T1w reference using bbregister (FreeSurfer), which implements boundary-based registration . Coregistration was configured with six degrees of freedom. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) were estimated using MCFLIRT (FSL 5.0.9, ). BOLD runs were slice-time corrected using 3dTshift from AFNI 20160207 ( , RRID:SCR_005927). The BOLD timeseries (including slice-timing correction) were resampled onto their original, native space by applying a single, composite transform to correct for head-motion and susceptibility distortions. These resampled BOLD timeseries will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The preprocessed BOLD timeseries were then resampled into standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space. A reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. Following the processing and resampling steps, confounding framewise displacement (FD) timeseries were calculated based on the preprocessed BOLD for each functional run, using implementations in Nipype (following the definition by ). Many internal operations of fMRIPrep use Nilearn 0.5.2 ( , RRID:SCR_001362), mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep’s documentation. 
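Framewise displacement in this pipeline is computed by fMRIPrep/Nipype following the Power et al. definition: the sum of absolute frame-to-frame changes in the six rigid-body motion parameters, with rotations converted to arc length on a 50 mm sphere. The function below is a minimal restatement of that definition for illustration; the motion-parameter array is a hypothetical placeholder rather than output from the study.

```python
import numpy as np

def framewise_displacement(motion_params, radius_mm=50.0):
    """Power-style FD from a (T, 6) array: [trans_x, trans_y, trans_z, rot_x, rot_y, rot_z].

    Translations are assumed to be in mm and rotations in radians; rotations are
    converted to mm of arc on a sphere of the given radius before summation.
    """
    params = np.asarray(motion_params, dtype=float).copy()
    params[:, 3:] *= radius_mm                         # radians -> mm of arc
    diffs = np.abs(np.diff(params, axis=0))            # backward differences
    return np.concatenate([[0.0], diffs.sum(axis=1)])  # first volume gets FD = 0

# Example: flag volumes for censoring at the 0.3 mm threshold used in this study.
motion_params = np.zeros((200, 6))                     # hypothetical motion trace
fd = framewise_displacement(motion_params)
keep = fd <= 0.3
print(f"{int(keep.sum())} of {len(fd)} volumes retained")
```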
Normalization and time-averaging of T2*‐weighted data In this set of analyses, BOLD signal was not used since we did not perform a timeseries analysis. Instead, we were interested in iron levels, which are a time-invariant property of brain tissue quantified from T2*‐weighted data . To quantify normalized T2*‐weighted (nT2*w) signal, high-motion timepoints were first removed and excluded from analyses. High-motion timepoints were defined as those that exceeded 0.3 mm FD . Then, to correct for scanner drift and potential differences between participants and MRI runs, each volume was normalized to the whole-brain mean. The nT2*w signal from each voxel was then aggregated across all remaining volumes of the resting-state and task runs using the median, separately for each participant. The median was used to reduce the impact of outlier volumes . This resulted in a voxel-wise map of median nT2*w signal for each participant. As the presence of iron is inversely related to nT2*w signal, reduced nT2*w signal indicates increased brain tissue iron and therefore increased intrinsic DA availability. ROI selection Bilateral caudate, putamen, globus pallidus, accumbens, and thalamus were selected as regions of interest. These regions were selected for two reasons. First, they are dopamine rich, and iron is colocalized with dopamine in the brain . Second, reduced brain tissue iron has been observed in each of these regions in children with ADHD relative to age- and sex-matched TD peers . ROIs were defined using the Harvard-Oxford subcortical atlas . ROIs only included voxels with at least 50% probability of belonging to each specific brain region. nT2*w signal was averaged across all voxels in each ROI, resulting in a single value per ROI. We first analyzed nT2*w signal across a whole basal ganglia ROI and a thalamus ROI. The basal ganglia ROI combined Harvard-Oxford atlas masks of bilateral caudate, putamen, globus pallidus, and accumbens into a single basal ganglia ROI. Next, to assess the regional specificity of basal ganglia nT2*w signal and its relationships with performance, we extracted nT2*w signal from bilateral caudate, putamen, globus pallidus, and accumbens separately. Motion-related quality assurance Any participant with at least 170 volumes remaining across all fMRI runs after excluding high-motion timepoints was included in the current study . One participant with ADHD was excluded as their fMRI data following placebo administration did not meet this criterion. To ensure that nT2*w signal was not significantly impacted by motion, we correlated mean FD across all runs with the nT2*w signal for each region of interest (ROI) using Pearson correlations ( α = 0.05). There were no significant relationships between mean FD and nT2*w signal in any of the ROIs we examined (all corrected p-values > 0.48; see , ). Results Group comparisons of demographic variables There were no significant differences between the ADHD and TD groups on age, FSIQ, word reading scores, sex, race, family income, or parental education (all corrected p-values > 0.44; ). Replication analysis – comparing nT2*w signal in children with ADHD and TD children There were no significant group differences in nT2*w signal in the whole basal ganglia and thalamus ROIs or in the individual basal ganglia ROIs (i.e., caudate, putamen, globus pallidus, and accumbens) (all corrected p-values > 0.52; see , ). 
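As a concrete illustration of the nT2*w quantification and ROI extraction described in the Methods above (motion censoring at FD > 0.3 mm, per-volume normalization to the whole-brain mean, voxel-wise median, and averaging within thresholded Harvard-Oxford masks), the sketch below shows one possible implementation with nibabel, numpy, and pandas. The file names are hypothetical placeholders, and the snippet handles a single run; in the study, the median was taken across the retained volumes of all concatenated runs.

```python
import numpy as np
import nibabel as nib
import pandas as pd

# Hypothetical inputs: preprocessed T2*-weighted run in MNI space, brain mask,
# fMRIPrep confounds file, and a binary ROI mask (Harvard-Oxford, >= 50% threshold).
bold = nib.load("sub-01_task-rest_space-MNI_desc-preproc_bold.nii.gz").get_fdata()
brain_mask = nib.load("sub-01_space-MNI_desc-brain_mask.nii.gz").get_fdata() > 0
roi_mask = nib.load("basal_ganglia_thr50_mask.nii.gz").get_fdata() > 0
confounds = pd.read_csv("sub-01_task-rest_desc-confounds_timeseries.tsv", sep="\t")
fd = confounds["framewise_displacement"].fillna(0).to_numpy()

# 1) Censor high-motion volumes (FD > 0.3 mm).
bold = bold[..., fd <= 0.3]

# 2) Normalize each retained volume to its whole-brain mean signal.
whole_brain_mean = bold[brain_mask].mean(axis=0)   # one value per retained volume
nt2w = bold / whole_brain_mean                     # broadcast over the time axis

# 3) Voxel-wise median across retained volumes.
nt2w_map = np.median(nt2w, axis=-1)

# 4) Average within the ROI; lower values indicate greater tissue iron.
roi_value = nt2w_map[roi_mask].mean()
print(f"ROI nT2*w = {roi_value:.4f}")
```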
Relationship between nT2*w signal and response inhibition First, we examined the relationship between nT2*w signal and response inhibition performance on the standard go/no-go task in all participants. There were no significant relationships between nT2*w signal in the whole basal ganglia or thalamus ROIs and the proportion of commission errors (both corrected p-values > 0.15; a-b ). When assessing the regional specificity of the relationship between nT2*w signal and the proportion of commission errors, lower nT2*w signal (i.e., higher brain tissue iron) in the putamen was significantly related to higher proportion of commission errors ( β = − 0.13 , corrected p-value = .04; c ). There were no significant relationships between nT2*w signal of the caudate, globus pallidus, or accumbens and the proportion of commission errors (all corrected p-values > 0.24; d , , ). When relating nT2*w signal to tau, no relationships were significant (all corrected p-values > 0.867; both corrected p-values for tau > 0.1762; all corrected p-values for tau > 0.10). For parameter estimates, see , . Relationship between nT2*w signal and responsivity to reward Next, we examined whether there was a relationship between nT2*w signal and responsivity to reward in all participants. We operationalized responsivity to reward as the change in performance (i.e., the proportion of commission errors or tau) between the standard and rewarded go/no-go task (standard go/no-go – rewarded go/no-go). There were no significant relationships between nT2*w signal in the basal ganglia or thalamus and responsivity to reward as measured by change in proportion of commission errors (both corrected p-values > 0.20; -b ). Further, there were no significant relationships between nT2*w signal in any of the subregions of the basal ganglia (i.e., caudate, putamen, globus pallidus, and accumbens) and change in proportion of commission errors (all corrected p-values > 0.25; -d ,43). For parameter estimates and plots of relationships with tau and of additional ROIs, see , , . In additional analyses that examined group differences in the relationship between nT2*w signal in the whole basal ganglia and thalamus ROIs and the change in proportion of commission errors and tau, we did not observe significant interaction effects (both corrected p-values for proportion of commission errors > 0.07; both corrected p-values for tau > 0.96). We similarly did not observe significant interaction effects when examining each basal ganglia ROI separately (all corrected p-values for proportion of commission errors > 0.06; all corrected p-values for tau > 0.38). For parameter estimates, see , . Relationship between nT2*w signal and responsivity to MPH We then examined whether there was a relationship between nT2*w signal and responsivity to MPH in children with ADHD. Responsivity to MPH was defined as a change in the proportion of commission errors or tau on the standard go/no-go task from placebo to MPH (placebo – drug). There was a significant relationship between nT2*w signal in the basal ganglia and change in the proportion of commission errors ( β = − 0.47 , corrected p-value = .01; a ). That is, lower basal ganglia nT2*w signal (i.e., higher brain tissue iron) was significantly related to greater improvements in the proportion of commission errors on MPH. There was not a significant relationship between nT2*w signal in the thalamus and change in proportion of commission errors (corrected p-value > 0.26; b ). 
In secondary analyses examining each basal ganglia subregion separately, there were significant relationships between nT2*w signal in the caudate and in the putamen and change in proportion of commission errors (caudate: β = −0.56, corrected p-value = .005; putamen: β = −0.47, corrected p-value = .01; c-d ). There were no significant relationships between nT2*w signal in the globus pallidus or accumbens and change in proportion of commission errors (both corrected p-values > 0.26; , ). When relating nT2*w signal to change in tau, no relationships were significant (all corrected p-values > 0.06 ) . For parameter estimates and plots of relationships with tau and of additional ROIs, see , , -8 . Discussion The main goal of this study was to examine how brain tissue iron levels in the basal ganglia and thalamus related to the cognitive effects of dopaminergic modulation in children with ADHD and TD children. While we did not find significant group differences in basal ganglia or thalamic brain tissue iron, we did observe that tissue iron levels in the putamen related to proportion of commission errors on the standard go/no-go task. Critically, tissue iron levels in the whole basal ganglia, and specifically in the putamen and caudate, were significantly related to improvements in proportion of commission errors on the standard go/no-go task following MPH administration in children with ADHD. First, in both registered and unregistered supplementary validation analyses, we confirmed that brain tissue iron measurements are stable over a weeks-long period in TD children and following a one-time MPH challenge in children with ADHD (see ). Prior work has demonstrated that brain tissue iron measurements are stable over a months-long period in children and minutes- and days-long periods in adults . We are the first to show that brain tissue iron measurements in children are indeed stable when assessed approximately one week apart. Additionally, previous work has shown that brain tissue iron levels normalize as a function of chronic psychostimulant treatment . We have now confirmed that there is no change in brain tissue iron following a single administration of MPH. As such, our work represents a crucial contribution to the literature. We did not find significant differences in brain tissue iron in the basal ganglia or thalamus between children with ADHD and TD children. Existing literature is inconsistent in this regard, likely due to the methods used to quantify brain tissue iron. For example, Adisetiyo and colleagues leveraged both MRI relaxation rates, as implemented here, and magnetic field correlation (MFC). They only observed group differences using MFC-derived measures of brain tissue iron. Further, the ages of our study participants (i.e., 8–12 y) cover a narrower range than other studies, whose ages cover at a minimum 8–14 years . Adolescence is a period of significant change within cortical and subcortical dopamine systems , and these changes can be observed using T2*‐weighted imaging . It is therefore possible that differences in brain tissue iron levels between children with ADHD and TD children may not emerge until adolescence. In addition to examining whether there were group differences in brain tissue iron, we examined relationships between brain tissue iron and response inhibition performance. 
Previous research examining the relationships between brain tissue iron and cognition have shown that greater levels of brain tissue iron were related to faster processing speed and higher general intelligence , as well as greater verbal reasoning, nonverbal reasoning, and spatial processing . Though previous work has not focused specifically on response inhibition, we hypothesized that we would similarly find that greater levels of brain tissue iron would be related to better cognitive performance in our study. Instead, we found that higher brain tissue iron in the putamen was related to more commission errors (i.e., worse response inhibition performance) on the standard go/no-go task. Though this significant relationship was specific to the putamen, the general direction of these relationships was largely consistent across brain regions examined. This finding is consistent with literature that has examined the interplay between the dopamine system and response inhibition using other indices of dopamine functioning . For example, reduced D2/D3 receptor availability in the caudate and putamen and increased spontaneous eye-blink rate, both of which indicate increased extracellular dopamine levels, have been related to poorer response inhibition performance on the stop-signal task (i.e., increased stop-signal reaction times) . The present study is therefore in line with previous work, as our results suggest that increased dopamine levels as indexed by increased putamen tissue iron are related to more commission errors (i.e., poorer performance) on the standard go/no-go task. Our findings and others’ are also in line with models of striatal behavioral control that characterize ‘go’ (direct) and ‘no-go’ (indirect) pathways of the basal ganglia . Specifically, increased dopamine levels are hypothesized to bias the balance toward the ‘go’ pathway and suppress the ‘no-go’ pathway , which would result in increased commission errors, as we observed in our study. In children with ADHD, we found that higher basal ganglia tissue iron levels were associated with greater responsivity to MPH, as indexed by a greater reduction in commission errors on the standard go/no-go task. When focusing on specific basal ganglia subregions, this result was driven by significant relationships in the caudate and putamen. The caudate and putamen are key regions in the cortical-basal ganglia loops associated with cognitive control broadly and response inhibition specifically . Dopaminergic activity in these regions has also been related to the ADHD phenotype (i.e., inattentive symptoms) and to performance improvements following reward in healthy individuals . The observations that participants with higher brain tissue iron levels exhibited more commission errors, as well as that children with ADHD with higher brain tissue iron levels exhibited the greatest reduction in commission errors following MPH, suggest that medication effects on response inhibition might be related to response inhibition performance at baseline (i.e., on placebo). Recent work in children with ADHD shows that inhibitory control improvements following MPH administration were greatest in those with the poorest baseline inhibitory control . We confirmed this was the case in our data via an unregistered exploratory analysis in which we conducted a linear regression model predicting change in commission errors (placebo – drug) from baseline commission errors (placebo), controlling for age and sex. 
We found that children with ADHD with the most commission errors on the standard go/no-go task on placebo improved the most following MPH ( β =.95, p = .01). Notably, these findings are consistent with prior literature observing that individuals with higher intrinsic DA are more responsive to MPH , including reduction of symptoms and improvement of cognitive functioning . Even so, it has been shown that the relationships between intrinsic DA and cognition improvements following MPH administration depend on the specific domains of cognitions examined . As such, additional investigations of the precise dopaminergic mechanisms through which MPH improves response inhibition across individuals with varying levels of intrinsic, baseline DA are needed. The administration of MPH and the receipt of rewards are both known to improve cognition via their impact on the dopamine system . While recent work has shown that individuals with high levels of brain tissue iron display greater responsivity to dopaminergic modulation via the receipt of rewards , we did not observe this in our data. Crucially, however, we did observe that the relationships between brain tissue iron and the reduction in commission errors from the standard to rewarded go/no-go task were in the same direction as those observed in analyses that examined responsivity to MPH. Thus, this is generally consistent with prior work suggesting that the receipt of rewards and MPH modulate dopamine similarly by increasing its synaptic availability in the striatum . Notably, prior literature has reported greater striatal activation in the presence of both MPH and rewards relative to the presence of reward alone and that MPH administration results in greater improvements in cognitive performance relative to reward-related reinforcement , corresponding to a greater effect size (d = 1.54 for MPH and d = 0.60 for rewards . Thus, a larger sample may have been needed to detect the smaller effect of rewards on response inhibition. This highlights the need to replicate these results in a larger sample of children with and without ADHD. We did not observe significant relationships between brain tissue iron and response time variability as indexed by tau on the standard go/no-go task, nor between brain tissue iron and responsivity to reward or MPH. Response time variability is thought to index a range of cognitive processes, including attention and working memory . Larsen and colleagues did not find relationships between brain tissue iron and the ‘executive control’ cognitive domain of the Penn computerized neurocognitive battery, which is defined as abstraction and mental flexibility, attention, and working memory . Our results are therefore consistent with this work and suggest that the relationships between brain tissue iron and response inhibition performance might be specific to the ability to withhold responding (i.e., stopping), which is better captured via commission error quantification. Though our findings contribute to the growing body of evidence that brain tissue iron neurophysiology is linked to cognition in children with ADHD and TD children, there are certain limitations that must be acknowledged. First, it is important to recognize that iron is not a direct measure of all aspects of dopamine function but is most associated with presynaptic dopamine availability . 
Larsen and colleagues showed that brain tissue iron measurements derived from tissue relaxation rates were significantly associated with PET-derived presynaptic vesicular dopamine storage although not in a 1:1 manner. Iron is important in several biological processes that are not limited to the dopaminergic system, including myelination and production of other catecholamines . While the basal ganglia is unique in its predominance of DA, it is not a direct measure and results should be interpreted with this limitation in mind. Regardless, leveraging tissue relaxation to quantify dopamine indirectly in children with ADHD and TD children is a promising avenue of research, given the radiation exposure associated with PET imaging and subsequent challenges of assessing dopamine levels in vivo in children. Additionally, we did not collect daily serum iron data from participants in the present study. Given the dynamic nature of peripheral iron levels , the relationship between serum iron and brain tissue iron is difficult to quantify. Even so, it has been shown that serum iron levels do not differ between individuals with and without ADHD , so it is unlikely that our results are driven by differences in serum iron level. However, future studies should investigate whether serum iron level relates to cognitive performance and responsivity to dopaminergic manipulation as we have here with brain tissue iron. We also did not collect sleep data from our participants. We therefore cannot determine whether variability in response inhibition performance in this sample is due to variability in sleep duration or quality. To combat the possibility of sleep differences impacting our results to the best of our ability, we excluded individual runs of the standard and rewarded go/no-go tasks based on omission error rates to ensure that subjects were awake and responding to the task, as described in Go/no-go tasks and measures. Finally, MPH is an indirect dopamine and norepinephrine agonist . We were therefore not able to determine whether modulation of the norepinephrine system impacted improvements in response inhibition following the administration of MPH. Additional investigations into the precise neural mechanisms through which MPH improves cognition are needed to answer this question. In conclusion, while we did not observe significant differences in basal ganglia or thalamic tissue iron in children with ADHD and TD children aged 8–12 y, we did validate the assumption that tissue iron is stable across a weeks-long period in TD children and following a one-time MPH challenge in children with ADHD. We additionally demonstrated that increased tissue iron in the putamen was significantly related to increased commission errors on the standard go/no-go task, and that increased tissue iron in the caudate and putamen, as well as generally in the whole basal ganglia, was significantly related to improvements in the proportion of commission errors following MPH administration. These relationships were not observed when response inhibition performance was indexed using tau (i.e., response time variability). Together, these findings augment existing literature that examines brain tissue iron and its relationships with cognition in children with ADHD and TD children and is one of the first to clarify the role of brain tissue iron in the cognitive effects of dopaminergic modulation. This work is a crucial step toward understanding the mechanisms of both behavioral and medication treatment for ADHD. 
Further, the present findings suggest that noninvasive brain tissue iron measurements may represent a biomarker for response to dopaminergic treatment in ADHD.
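To put the effect-size argument above in concrete terms, the back-of-the-envelope sketch below estimates how many participants a within-subject contrast would need at 80% power and two-sided α = 0.05 for the two effect sizes quoted from prior literature (d = 1.54 for MPH, d = 0.60 for reward). This is our own illustrative normal-approximation calculation, not an analysis reported by the authors, and the exact numbers depend on the design and test actually used.

```python
# Back-of-the-envelope power arithmetic for the effect sizes quoted above
# (d = 1.54 for MPH, d = 0.60 for reward). Assumes a within-subject contrast,
# two-sided alpha = 0.05 and 80% power; normal approximation only.
from scipy.stats import norm

def n_for_effect(d, alpha=0.05, power=0.80):
    """Approximate n for a one-sample/paired contrast of standardized size d."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
    z_beta = norm.ppf(power)            # ~0.84
    return ((z_alpha + z_beta) / d) ** 2

for d in (1.54, 0.60):
    print(f"d = {d:.2f}: ~{n_for_effect(d):.0f} participants")
# d = 1.54 needs only a few participants, whereas d = 0.60 needs roughly 20-25,
# which is why the weaker reward effect is harder to detect in a small sample.
```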
The influence of orthodontist change on treatment duration and outcomes in patients treated with Clark's twin block appliance followed by non-extraction fixed mechanotherapy – a retrospective cohort study

Orthodontic treatment aims to achieve a permanent occlusion that is both aesthetically pleasing and functionally sound. Considering the fact that orthodontic treatment falls under the category of long-term therapy, accomplishing this objective presents numerous challenges such as the complexity of the cases and patient compliance. Although orthodontists undergo training at the same institution, their clinical experience, skills and knowledge differ, substantially impacting the treatment outcomes and duration. There is a higher occurrence of post-graduate resident turnover in orthodontic clinics associated with public and teaching hospitals. When a care provider changes during orthodontic treatment, it frequently becomes challenging for the replacement to swiftly perform at the same level of efficiency. Typically, this process necessitates time before the new clinician is cognizant of the objectives and treatment plan, which is needed to facilitate the continuation of care. In instances where orthodontists approve the transfer of patients, they may need to adjust the treatment plan to accommodate the patients' preferences, given the diverse array of available treatment options. In addition to the practice and biomechanics of orthodontics, approaches to registration procedures, managing finances and scheduling of appointments vary among orthodontists. Treatment duration and outcomes in patients treated by multiple orthodontic residents can be influenced by several factors, including the variability in clinical experience among residents, reduced motivation of the patient when transferred to another resident against their own volition, and poorly managed transitions of patients. Variations in expertise among different residents may impact the quality and efficiency of care. More experienced residents might handle cases more effectively and make better decisions, potentially shortening treatment duration and improving outcomes. Moreover, smooth transitions between resident teams or healthcare settings are crucial. Poorly managed transitions might cause delays, discontinuity in care, or information loss, affecting both treatment duration and outcomes. Without a valid and reliable measuring standard, it is challenging to critically evaluate the treatment outcomes. The ability to objectively evaluate treatment outcomes has always piqued the interest of researchers. Several indices and measurements, such as the Peer Assessment Rating (PAR) Index, the Index of Orthodontic Treatment Need (IOTN) and the Index of Complexity, Outcome and Need (ICON), have been developed for the purpose of these evaluations. In 1998, the American Board of Orthodontics Objective Grading System (ABO-OGS) came into effect; the system underwent revision in 2007 to evolve into the Model Grading System. Currently, the American Board of Orthodontics Cast-Radiograph Evaluation (ABO-CRE) is the most recent iteration of the system. It evaluates the quality of finished orthodontic cases based on eight parameters, using plaster casts and orthopantomograms. The general consensus is that orthodontist transition during orthodontic treatment negatively impacts treatment quality and prolongs treatment duration. However, few empirical studies have investigated the validity of this perception.
In addition, only a handful of studies have implemented the ABO-CRE to assess the treatment efficacy. The critical knowledge vacuum in the field of orthodontics concerning the treatment efficacy of patients who receive care from multiple residents provides the impetus for undertaking this investigation. Thus, this study aimed to analyze the influence of orthodontist transition on treatment duration and outcomes in patients treated with Clark’s twin block appliance (CTB) followed by non-extraction fixed orthodontic treatment using the ABO-CRE and PAR indices. Our null hypothesis stated that there was no statistically significant difference in orthodontic treatment duration and outcomes between transfer and control groups.
This study received an institutional review board exemption (number 2023-9127-26255) from the Aga Khan University, Pakistan, before initiating this retrospective cohort study. The sample size was calculated with OpenEpi ® software v. 3.01 using the findings of Aktas et al. , who reported the mean and standard deviation values of total ABO-CRE score of 35.74 ± 10.12 and 29.88 ± 6.28 for the non-extraction transfer and control groups, respectively. Keeping α at 0.05, the power of study at 80%, and the confidence level at 95%, a sample size of 33 was required. Since there were two groups, a total sample size of 66 (N) was required to conduct this study. We included patients treated as non-extraction fixed mechanotherapy after growth modification with CTB and subjects with complete routine orthodontic records. Our study included both transfer and non-transfer patients who underwent treatment at the same outpatient orthodontic clinic, with a minimum treatment duration of six months before and after patient transfer. Patients with any craniofacial/dental anomaly, syndrome, trauma or previous history of orthodontic and orthopedic treatment, orthognathic cases, and only single-arch treatment cases were excluded from the study. This research employed two distinct groups: the transfer group and the control group. Patients in the transfer group received orthodontic care from second and, if needed, third resident after the first resident completed their specialty training. In essence, the treatment pertaining to the transfer patients was administered by not more than three clinicians, and the entire treatment procedure was carried out at the very same university clinic. Control group patients were those whose treatment had been completed by a single resident. Each group was treated by CTB followed by non-extraction fixed mechanotherapy. A 0.022 × 0.025-in Roth prescription preadjusted edgewise appliance was employed in all patients and their treatment appointments were scheduled every 4–5 weeks. All cases were managed under the same consultant orthodontist’s supervision, which emphasizes the mentorship aspect of the residents’ training. Treatment duration and post-treatment orthodontic outcomes in all patients were collected from orthodontic records which included all pre- and post-treatment data in the form of files and orthodontic models. Treatment duration and post-treatment orthodontic outcomes between the two groups were assessed and compared using ABO-CRE and PAR indices. The percentage reduction in PAR scores was computed by dividing the difference in the PAR score between pretreatment and posttreatment by the pretreatment PAR score, and then multiplying the result by 100. Statistical analysis The analysis of the data was performed using SPSS (version 26.0) and STATA (version 12.0). Shapiro-Wilk test was performed to examine the normality of data which showed a non-normal distribution. Frequencies were reported for categorical variables such as gender. Descriptive statistics such as mean and standard deviations were reported for the age of male and female patients. Mann-Whitney U test was utilized for comparison of treatment outcomes between the transfer and control groups, as well as between the two subgroups within the transfer group. Wilcoxon signed rank test was employed to analyze the changes in the ANB and Z angle, WITS values, and overjet from pre-treatment to mid-treatment. Linear regression analysis was used to determine the association among variables. 
A p -value < 0.05 was considered statistically significant.
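As a worked illustration of the sample-size and PAR-reduction arithmetic described in the Methods above, the sketch below reproduces the calculation with the standard two-independent-means formula using the values quoted from Aktas et al.; OpenEpi v3.01 may apply minor corrections, so treat this as an approximation. The PAR helper and its example input are hypothetical and simply restate the percentage-reduction definition.

```python
# Sketch of the sample-size and PAR-reduction arithmetic from the Methods above.
# Two-independent-means formula with a pooled SD; OpenEpi may differ slightly.
from math import ceil
from scipy.stats import norm

def n_per_group_two_means(m1, sd1, m2, sd2, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    pooled_var = (sd1**2 + sd2**2) / 2
    return ceil(2 * pooled_var * z**2 / (m1 - m2) ** 2)

# ABO-CRE scores reported by Aktas et al. (transfer vs. control groups)
print(n_per_group_two_means(35.74, 10.12, 29.88, 6.28))  # -> 33 per group

def percent_par_reduction(pre_par, post_par):
    """Percentage reduction in PAR score, as defined in the Methods."""
    return (pre_par - post_par) / pre_par * 100

# Hypothetical scores: a pre-treatment PAR of 30 finishing at 4
print(round(percent_par_reduction(30, 4), 1))  # 86.7 -> 'great improvement' band
```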
This study comprised a sample size of 66 individuals, of which 27 were males and 39 were females. Descriptive information regarding the mean age of male and female patients and their cervical vertebral maturation (CVM) stages recruited in the transfer and control groups is shown in (Table ). There were insignificant differences regarding pre- and post-treatment PAR and percent reduction in PAR scores between the two groups (Table ). We also found insignificant differences regarding pre- and post-treatment ABO-CRE composite and component scores between the groups (Table ). We found no significant difference in the percent reduction of PAR scores and post-treatment ABO-CRE composite scores between the two subgroups within the transfer group, i.e., patients treated by two residents compared to those treated by three residents (Table ). We did not find any statistically significant differences in CTB treatment duration and treatment outcomes between patients at different growth phases (Table ). The Mann-Whitney U test revealed significant differences in CTB phase as well as fixed appliance phase treatment durations and number of visits between the transfer and control groups (Table ). There were significant differences between pre-treatment and mid-treatment ANB and Z angle, WITS values, and overjet in both transfer and control groups (Table ). Linear regression analysis revealed that treatment duration in the transfer group was prolonged by 21.03 months compared to the control group. There were around thirteen more visits in the transfer group than in the control group (Table ).
In the field of orthodontics, assessing treatment outcomes is essential for upholding standards and gauging the effectiveness of orthodontic procedures. PAR and ABO-CRE are valid and most widely accepted orthodontic tools for assessing treatment outcomes . When assessing treatment efficacy through the utilization of post-treatment models and panoramic radiographs, ABO-CRE is among the most comprehensive indices employed . In our study, we utilized both tools to increase the validity of the assessment of results. When considering orthodontic treatment time at dental institutions, one may find it challenging to initiate and complete the procedure for a variety of reasons, in contrast to the duration of treatment observed in other areas of dentistry . The findings of this study report noteworthy variance in treatment durations between the two groups of patients, hence refuting one of the null hypotheses but there was no statistically significant difference in the outcomes of orthodontic treatment. Aktas et al. examined the effect of resident changes on the duration and outcomes of orthodontic treatment by using ABO-CRE. Their study revealed that the transfer of the patient not only resulted in a protracted treatment duration but also had detrimental effects on the efficacy of the treatment outcomes. Their findings regarding treatment outcomes were in contrast with our results as we found no significant difference in post-treatment outcomes between the transfer and the control groups. In the current study to make sure of a fair group comparison, we focused on a single treatment modality which was CTB followed by non-extraction fixed mechanotherapy. McGuinness and McDonald utilized the PAR index in their study to investigate the effect of resident alterations on the duration and outcomes of orthodontic treatment. They included patients who were treated with non-extraction fixed mechanotherapy. They found insignificant differences between the transfer and control groups which was in agreement with our study outcomes. In a study conducted by Alsaeed et al. , they explored how the treatment outcomes and duration of orthodontic patients were influenced when they were treated by a single consultant orthodontist versus a collaborative approach involving two consultant orthodontists. They found insignificant differences in treatment duration and outcomes between the groups. Their findings contradict the findings of our study regarding the duration of treatment as they found no significant difference in the length of treatment between the groups. This may be due to their collaborative approach and their experience. Toh also investigated the effects of operator changes and found that the group of patients treated by a single operator manifested a more prolonged mean treatment duration. The reasons for the prolonged mean treatment time observed for solitary operators in their study remain unclear. The results of their study were inconsistent with the outcomes of our study. This phenomenon could be attributed to the inclusion of both fixed and removable functional appliances, extraction cases and orthognathic surgical cases in their study. In our study, the transfer and control groups had post-treatment ABO-CRE scores of 29.33 ± 12.76 and 31.18 ± 14.19, respectively. Yang-Powers et al. assessed the efficacy of treatment outcomes for patients treated at the university hospital by utilizing ABO-CRE. 
The investigators found that the mean scores of the patients in the university group were 45.54 ± 18.33, while those of the control group were 33.88 ± 9.69, which were very high in comparison to the scores of the current study. This may be ascribed to the lack of strict supervision. Santiago and Martinez also analyzed the outcomes of orthodontic therapy using the ABO-CRE, which revealed that a failing score of > 30 was obtained by 46.9% of the patients who underwent evaluation. The present study revealed that 62.1% of patients received a passing ABO-CRE score of ≤ 30, which showed good results that can be attributed to the skills of residents and strict supervision. When the percent reduction in PAR score was used to assess treatment outcomes, 3.03% (2/66) of our patients showed "improvement" (50–69%) and 96.97% showed "great improvement" (70–100%). In comparison, in the treatment outcomes reported by Peppers et al., 1.0% (2/191) of the subjects showed "no improvement," 4.2% (8/191) showed "little improvement," 8.4% (16/191) showed "improvement," and 86.4% showed "great improvement." The pre-treatment difficulty index of the cases was not computed using any method in prior investigations. The pre-treatment difficulty index is a metric used to assess the intricacy and difficulties involved in cases prior to the initiation of orthodontic treatment. It takes into account various factors that may affect the treatment process and outcomes. In the current study, we computed the pre-treatment difficulty index of the cases by using the PAR and ABO-CRE indices. The ABO-CRE index was not specifically developed to serve as an index for pre-treatment malocclusion. In contrast, the ABO employs the use of the discrepancy index as a means to quantify the complexity of each case during pre-treatment analysis. However, the discrepancy index falls short in its ability to determine treatment outcomes, as certain elements rely on cephalometric data. Additionally, the quantification of treatment progress can only be achieved by using the same quantitative index for pre- and post-treatment evaluation. Therefore, we employed the ABO-CRE index for both pre-treatment and post-treatment evaluations and also to enable comparison of our findings with other studies. We found insignificant differences between the transfer and control groups. These findings were important because they established a baseline understanding that the cases in our study were comparable in terms of their initial complexity. Therefore, discrepancies in treatment duration or outcomes that may be discernible between the transfer group and the control group were less likely to be attributed to variations in the initial complexity of the cases. Patient compliance was measured directly by the total number of treatment visits. We found that patients were compliant, so the prolonged treatment duration observed in transfer group patients could be attributed to two possible explanations. Treatment progress constitutes the initial reason. It is indeed challenging for the successor resident to become familiar with the initially prescribed treatment plan and to continue treatment when the transfer takes place in the middle of the ongoing treatment. The subsequent clinician may opt to deviate marginally from the prior orthodontic strategy and implement certain adjustments, thereby inherently prolonging the duration of treatment. It is hypothesized that the second cause pertains to psychological variables.
The patient’s willingness to cooperate may be compromised when they are transferred to another resident against their own volition. The aforementioned factors may reduce the motivation of the patient, eventually leading to disappointing outcomes . While most post-treatment variables did not show significant differences between the control and transfer groups, “occlusal contacts” approached significance with a p -value of 0.054. Although not statistically significant, this result may suggest a trend indicating that patients treated by a single orthodontist achieved slightly better occlusal contact scores (mean 1.96) compared to those treated by multiple residents (mean 2.93). This could reflect the potential benefits of continuity in care for achieving finer occlusal adjustments, as treatment consistency may facilitate more precise tooth movements. However, given the lack of statistical significance, this observation should be interpreted cautiously. Future studies with larger sample sizes may help clarify whether this trend represents a meaningful clinical difference. This study possesses certain limitations due to its retrospective design, absence of blinding during the analysis, having large potential for residual confounding and was conducted at a single center. It is worth noting that several other variables, such as patient adherence, treatment plan implementation and surveillance level could have impacted the treatment outcome, and protracted duration of therapy in the transfer group. Further research is needed to explore these variables and their specific impacts on treatment quality and duration in greater detail. This study outlines the intricate relationship between orthodontist transitions, treatment duration and treatment outcomes. Nonetheless, a deeper understanding of the factors at play, both clinical and psychological, is essential to provide comprehensive care to patients undergoing orthodontic treatment in situations involving resident transitions.
There was no difference observed in the post-treatment outcomes between the transfer and control groups. The transferred patients had longer treatment duration and more follow-up visits compared to the control group patients. Treatment duration may have lengthened due to resident changes, but this did not impact the quality of orthodontic treatment.
|
Trauma-informed care in the emergency department: concepts and recommendations for integrating practices into emergency medicine

In the United States and worldwide, exposure to one or more traumatic events is a highly prevalent experience among the general population and is associated with both psychological and physical sequelae. Based on a survey study of 24 nations, the global prevalence of exposure to one or more traumatic experiences is estimated to be 70.4%, with the majority of those having multiple traumatic exposures. The prevalence of a history of traumatic experiences and the type of experience vary across and within populations, with increased prevalence among Black, Indigenous, and other communities of color as well as those in poor and urban areas. It is estimated that two-thirds of all individuals have experienced at least one traumatic event before the age of 18. Experiences of traumatic events can have varied and lasting physical and mental health effects on patients, which can be further exacerbated by interaction with the health care system. Medical events in themselves can be a cause of traumatic stress, both for children and adults. One study found that 10–20% of adult physical trauma patients admitted for care developed post-traumatic stress disorder (PTSD) post-incident and an additional 14–28% went on to develop acute stress disorder (ASD). Another study found that admitted injured pediatric trauma patients showed rates of 10–69% for PTSD and 14–28% for ASD. With this prevalence of trauma, we can recognize that many of the patients we meet in the emergency department (ED) have been impacted by traumatic events at some point in their lifetime, and it is likely that many of these patients continue to experience negative impacts of these past events. Even before the COVID-19 pandemic, individuals with a more substantial history of psychological or physical trauma utilized emergency care at higher rates than the general population. We must also acknowledge that, in the case of the acute traumas we care for every day in our clinical encounters, the trauma experienced by the patient does not end when they are dispositioned out of the ED. As such, it is imperative that emergency medicine recognizes the growing body of literature on the benefits of trauma-informed care (TIC) to better understand the context in which a patient presents to the ED, methods to provide TIC, and how to mitigate retraumatization. There have been longstanding calls to bring a trauma-informed approach to other aspects of medical care, and it is time we hear and respond to this call for emergency medicine. As we recognize the prevalence of exposure to traumatic events in our current society, we recommend that TIC be instituted as universal precautions in all emergency care, just as we consider hand hygiene and appropriate personal protective equipment.
The Substance Abuse and Mental Health Services Administration (SAMHSA) defines individual trauma in the context of the ‘Three E’s’ as ‘the result of an event , series of events, or set of circumstances that is experienced by an individual as physically or emotionally harmful or life threatening and that has lasting adverse effects on the individual’s functioning and mental, physical, social, emotional, or spiritual well-being.’ A traumatic exposure may be experienced directly, witnessed by a loved one, or can be vicarious as in those witnessed by first responders responding to traumatic events .
Traumatic exposures encompass a diverse group of experiences at the individual, interpersonal, collective/community, and historical level. Traumatic events can include both impersonal trauma (ecological events such as hurricanes, tsunamis, and landslides), or interpersonal traumas (events between people, such as child neglect and abuse, intimate partner violence, sexual assault, and human trafficking) . We know that those who experience interpersonal violence, especially those who experience violence over time (rather than a one-time acute event), tend to show worse long-term outcomes . The literature also shows that for survivors of interpersonal violence, the closer the person who inflicts the violence is to the survivor, the more impactful the event (for instance, stranger assault vs. assault by a family member) . We treat survivors of both types of traumatic events in emergency medicine, whether in the acute aftermath or years later in related or unrelated health or mental health emergencies. Community violence involves the ‘exposure to intentional acts of interpersonal violence committed in public areas by individuals who are not immediately related to the victim’ and include gang violence, public shootings, war, and terrorist attacks . Historical trauma refers to the ‘complex and collective trauma that is experienced over time and across generations by a group of people who share an identity, affiliation, or circumstance’ . Historical trauma pertains to populations whose relatives and ancestors were impacted by mass traumatic events, generally violent in nature, such as enslavement in the US, the Holocaust, Japanese internment camps, and racism and racist policies towards Black, Indigenous, and other People of Color .
The 1998 Adverse Childhood Experiences (ACE) study elucidated the ways in which a history of traumatic experiences during childhood predisposes people to the development of chronic physical and mental health conditions. The study utilized a ten-question ‘yes’/‘no’ questionnaire which sought to quantify and identify the traumatic experiences participants were exposed to in childhood. The study found that approximately two-thirds of respondents reported one ACE, and that the likelihood of additional exposure increased by 87% if the person had one exposure of any type. 16.67% of respondents reported 4 or more ACEs. The study also revealed a positive correlation between the number of ACEs a person has with poor health outcomes, including higher rates of depression, suicide attempts, substance use including cigarette smoking, liver disease, heart disease, and chronic obstructive pulmonary disease . The physiological response to chronic stress and traumatic exposures is complex and is dependent on a variety of factors including age, genetics, history, and available support and resources . Dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis inhibits the return from a stressed/activated state to a homeostatic baseline and may occur due to chronic activation of the HPA axis . This chronic exposure to endocrine and neural responses to stress is related to the development of chronic health problems such as those identified in the ACEs study [ , , ]. It is critical that emergency health providers understand the prevalence of experience of traumatic events, as well as the potential long term negative healthcare impacts.
There is often increased utilization of emergency and same-day services by those with a more significant history of trauma and decreased utilization of primary care and mental health services . However, it is important for clinicians to remember that interaction with the health care system can be a highly stressful and potentially triggering experience for survivors of trauma, and these experiences may lead to further medical trauma and/or re-traumatization. Medical trauma pertains to the psychological and physiological response of patients and their families to pain, injury, serious illness, medical procedures, and invasive or frightening treatment experiences . Re-traumatization can occur due to a previous history of trauma in healthcare, fear and confusion, lack of privacy, stress associated with undergoing procedures, physical touch and removal of clothing, vulnerable physical positions, triggering during interviews, and financial stress .
Trauma-informed care (TIC) is defined by SAMHSA as 'an organizational structure and treatment framework that involves understanding, recognizing, and responding to the effects of all types of trauma'. The framework of TIC emphasizes physical, psychological and emotional safety for patients and providers, and it helps survivors rebuild a sense of control and empowerment. These principles can be applied universally to all clinical interactions, and trauma-informed training and policies have been implemented in other health care fields as well as in medical education. The development and implementation of a TIC model requires the acknowledgment of the '4 R's' as defined by SAMHSA:
- Realization of the widespread impact of trauma and understanding of potential paths for recovery
- Recognition of the signs and symptoms of trauma in patients, families, staff, and others involved in the system
- Response by fully integrating knowledge about trauma into policies, procedures, and practices
- Active resistance against re-traumatization
Furthermore, SAMHSA advises that the development of a TIC model requires sensitivity and prioritization toward the following key principles:
- Safety
- Trustworthiness and transparency
- Peer support
- Collaboration and mutuality
- Empowerment of voice and choice
- Cultural, historical, and gender issues
SAMHSA's principles assume an underlying goal of TIC is to create an environment of physical and psychological safety. In the ED, this means access to healthcare with mitigation of potentially retraumatizing experiences. Trust and transparency should be maintained throughout the system, with patients able to understand what is happening at each stage of the encounter and why. Peer support should be utilized among providers and patients alike, allowing for an environment of supportive engagement throughout the ED. Patient care should utilize practices of interprofessional team-based care and patient-centered collaboration and mutuality that empower the patient to participate in the plan of care, with specific focus on patient voice and choice within that plan. Lastly, a TIC approach also recognizes the intergenerational and systemic violence and traumatization that continues to occur within our institutions, with specific focus on cultural, historical, and gender-based violence. The systemic implementation of TIC requires the cooperation of multiple domains within the health care system, including focus on the physical environment, direct care, and administrative practices. A 2022 systematic review evaluated the current data regarding TIC interventions specifically in the emergency department and found that educational interventions, collaboration between patients, health professionals, and community resources, and patient and clinician safety were the prevailing themes. Here we seek to expand upon existing literature with detailed recommendations for TIC interventions in the emergency department setting.
The general background for defining trauma and trauma-informed care is based on SAMHSA’s structure and definitions as they are a leading entity in the field . The recommendations provided here are based on results from SCOPUS and PubMed searches performed using search terms including ‘trauma-informed care’, ‘trauma-informed care AND emergency medicine’, and ‘trauma-informed care AND acute care’. A narrative review approach was utilized. Articles which pertained only to physical trauma or were markedly outside of the scope of emergency medicine, such as long-term outpatient management, were excluded. While some of these recommendations may be thought of as general best practices in patient care or fastidious, we include them here with the intention that they be considered in the context of treating patients with history of traumatic exposures.
TIC acknowledges pertinent patient history related to past or current traumatization in assessment and plan of care, while trauma-denied care ignores such pertinent history. Trauma-informed practices should be considered universal precautions and can be modified to work within the unique environment of the ED. In the individual physician-patient encounter, TIC skills can be implemented using the following strategies, linked to the specific principles as outlined by SAMHSA: Safety, trustworthiness and transparency, collaboration and mutuality, empowerment of voice and choice, and consideration of cultural, historical, and gender issues. These recommendations are described in .
The emergency department, as a critical point of access to care, is often seen as an opportunity to screen patients who may otherwise be less likely to have recommended screenings for individual and population health measures due to poor access to primary care . In order for screening in the emergency department to be beneficial and efficient, several factors must be considered, including but not limited to minimizing burden on the department and health care system, appropriate follow-up for results, adequate resources for addressing positive screens, and overall patient benefit . Routine and universal screening for traumatic experiences should be performed if the personnel and resources are available to appropriately manage a positive screen . However, even if formal universal screening is not to be performed, the provider can ask the patient if there is a portion of the encounter or exam they are most concerned about and can address those concerns informally. These queries may be accompanied by phrasing which makes it clear that these are questions asked of all patients . Some language for screening could include: ‘Have you ever experienced trauma in your previous medical or other life encounters?’ or ‘Have you experienced anything that makes seeing a doctor difficult or scary for you?’
Trauma-informed care is also critical in fostering a workplace which minimizes the trauma of those who work there. People who work in the helping professions are more likely to have experienced both personal and workplace trauma than the general population. Additionally, by the nature of the work in the ED, clinicians, nurses, and all others who work in the environment are susceptible to workplace trauma, vicarious trauma, and re-traumatization of their own histories of traumatic experiences. This susceptibility to vicarious trauma is further increased by the chronic work stress, burnout, and emotional fatigue that is experienced by clinicians. Furthermore, in such emotional states, clinicians are less able to provide care which is empathetic and sensitive to the psychological needs of the patient, and as such, quality of care declines and there is increased risk of error. This is likely to be exacerbated by the COVID-19 pandemic, as supported by studies which report increased anxiety, posttraumatic stress disorder, and other mental health disorders in health care workers on the front lines of the pandemic. To improve the capacity of clinicians to provide TIC, attention to their own psychological wellbeing is also of utmost importance. Workplaces can become trauma-informed by participating in increased knowledge sharing and training in TIC, instituting practices such as appropriate paid time off and leave policies, implementing peer support groups to facilitate peer discussion to prevent secondary traumatization, and providing reasonable spaces for rest.
While some aspects of TIC can be more readily implemented in the ED, there are challenges which may arise due to the limitations of this environment. Notable challenges include the potential urgency of a presentation, limited duration of the encounter, and the single-visit nature of the ED. The treatment of the psychological effects of trauma often requires multiple appointments over extended periods of time. As such, the treatment of trauma-associated mental health disorders and the processing of trauma is often addressed in a longitudinal relationship with a behavioral health provider or in the setting of other long-term relationships, such as with a primary care provider. While this type of relationship and treatment is not applicable in most ED visits, there is the opportunity for the emergency clinician to utilize TIC practices as outlined above to mitigate the possibility of retraumatization, and to minimize medical trauma experienced while the patient is in the clinician’s care. It is also an opportunity to provide referrals to services through which the patient may develop long-term relationships to address their trauma.
Exposure to traumatic experiences, with or without the development of lasting physical and mental health sequelae, is extremely common in the general population, and even more so in the population with chronic physical and mental health conditions and substance use disorders. Trauma affects all of us- patients, clinicians, and our communities. The utilization of TIC practices in the ED is critical to mitigate active traumatic exposures and to prevent traumatization and retraumatization during interaction with the health care system. While the urgency of some encounters in the ED presents challenges to the practice of TIC at all points of care, the trauma-informed framework should be utilized throughout the encounter. By applying TIC practices as universal precautions- applying them to all patient encounters, regardless of knowledge of a patient’s trauma history, we can prevent re-traumatization and protect survivors.
|
Commentary: Viewing Alzheimer's disease from an ophthalmologist's eyes | 14218fe9-e097-4bd6-9bdb-38fb10def449 | 7210853 | Ophthalmology[mh] | In 2011, the clinical diagnostic criteria for Alzheimer's disease dementia were revised, and research guidelines for initial stages of the disease were characterized to depict a deeper understanding of the disorder. Development of the new guidelines was led by the National Institute on Aging-Alzheimer's Association (NIA-AA) in the United States. In their guidelines AD is defined by its underlying pathologic processes that can be documented by postmortem examination or in vivo by biomarkers. It recognizes three general groups of biomarkers based on the nature of the pathologic process that each measures. Biomarkers of Aβ plaques, biomarkers of fibrillar tau and biomarkers of neurodegeneration or neuronal injury. These biomarkers can be fluid biomarkers or imaging biomarkers. Cerebrospinal fluid (CSF) biomarkers are the only variety of fluid biomarkers utilized in the early diagnosis of AD. The imaging biomarkers utilize modalities like structural magnetic resonance imaging (MRI), functional MRI, 18F-2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET), and amyloid-PET to identify AD.
Ocular fluid biomarkers Noninvasive ocular imaging biomarkers Pupillometry is a low-cost, noninvasive technique that may be useful for monitoring cholinergic deficits which generally precedes the memory and cognitive disorders. Kerbage et al . have described noninvasive methods to identify AD signatures in the human crystalline lens using in vivo technique of laser scanning along with a fluorescent ligand. Retinal optical coherence tomography (OCT) has been used to measure the retinal thickness and retinal vascular measurements as the noninvasive screening biomarkers. A review article published in ophthalmology this year confirmed the associations between retinal measurements of spectral domain SD OCT and AD, highlighting the potential usefulness of SD OCT measurements as biomarkers of AD. They assessed the associations between AD and measurements of ganglion cell-inner plexiform layer (GC-IPL), ganglion cell complex (GCC), macular volume, and choroidal thickness, in addition to retinal nerve fiber layer and macular thickness using SD OCT. Whereas another study published in the Alzheimers Dement (Amst), representing the largest optical coherence tomography cohort with amyloid-proven AD cases questions these claims. The authors show that retinal thickness does not discriminate AD from controls, despite evident changes on clinical, neuroimaging, and CSF measures, querying the use of retinal thickness measurements as an AD biomarker. They recommended future studies, including longitudinal measurements of retinal layer thickness and specific molecular biomarkers such as amyloid, tau, and neuroinflammation, to assess the retina as a potential source of noninvasive AD biomarkers. The same group has shown in another study that the retinal vasculature does not discriminate AD from control participants, despite evident changes on clinical, neuroimaging, and cerebrospinal fluid measures, querying the use of retinal vasculature measurements as AD biomarker. Ongoing clinical trials using noninvasive ocular biomarkers like retinal thickness and vasculature measurements using OCT in proven AD cases can only answer these queries in the forthcoming years. We as ophthalmologists can study and present the indicators/biomarkers of the disease in the eye, but their validation, clinical application, or utilization as a research tool is up to the neurologists.
CSF biomarkers cannot be recommended for preventive screening due to their limited accessibility and the invasive nature of CSF collection. On the contrary, the biomarkers obtained from the blood oral, ocular, and olfactory fluids/tissues are easily accessible and less invasive to procure. Anterior ocular fluid and CSF share many features, such as the similarity between blood–aqueous and blood–CSF barriers. Despite that till date, aqueous biomarkers in the AD patients have not been studied as acquisition of the sample is an invasive procedure. Aqueous humor of individuals with AD may reveal higher levels of Ab42, the main biomarker found in neocortical deposits. One study has been done to evaluate the vitreous for the AD-related biomarkers utilizing the vitreous samples from the patients undergoing planned vitrectomy for other pathologies. These patients were evaluated for cognitive impairment using the mini mental status examination scores. The authors found that the patients with poor cognitive function have significantly lower vitreous humor levels of AD-related biomarkers Aβ40, Aβ42, and tTau. These biomarkers do not correlate with underlying eye conditions, suggesting their specificity in association with cognitive change. At this point the correlations of AD with alternative biomarkers obtained from ocular fluids is uncertain and considered invasive as a screening tool.
|
Validation of the new pathology staging system for progressive supranuclear palsy
Cardiac

Introduction

Atrial fibrillation (AF) is the most common type of tachyarrhythmia, and it shows increasing incidence and prevalence with the aging population. Based on the Global Health Data Exchange (GHDE), there were 37.574 million AF patients worldwide in 2017 (0.51% of the global population), representing a prevalence increase of 33% compared with 20 years ago. In a community cohort study of 50,684 patients, AF showed an annual 3% increase in its incidence from 2006 to 2018. AF can lead to cardiac dysfunction on the one hand, and a 5-fold risk of stroke compared with the normal population on the other. The corresponding consequences are disability and high mortality, posing a serious burden on human health and the social economy. It is currently believed that AF occurs in the presence of trigger factors and that its maintenance depends on the atrial matrix. On the one hand, an ectopic excitatory focus is only a trigger factor for AF. On the other hand, atrial dilation, fibrosis and ultrastructural changes caused by various pathophysiological processes, such as myolysis, glycogen accumulation, and changes in nuclear chromatin, mitochondria and the sarcoplasmic reticulum (SR), are crucial for AF. These two factors jointly alter the electrical and structural remodelling of the atrium, thereby promoting the occurrence and maintenance of AF. Recent years have seen great advances in the minimally invasive or interventional treatment of AF. However, a proportion of AF cases exhibit a high recurrence rate and lack effective treatment approaches, which can be attributed to the indistinct pathophysiological mechanism underlying the occurrence and maintenance of AF. The discovery of key molecules involved in atrial remodelling is of great significance for revealing the pathogenesis, early diagnosis and targeted intervention of AF. FGF23, a newly discovered member of the endocrine FGF family, is predominantly secreted by osteocytes. Numerous cohort studies have suggested that an increased level of FGF23 is correlated with the occurrence of AF. A study quantitatively measured more than 40 cardiovascular biomarkers in 638 patients and found that FGF23 was second only to the classical biomarker brain natriuretic peptide (BNP), establishing it as a predictive biological marker for AF. Several researchers have proposed that FGF23 may be a new key indicator that can induce AF. Understanding the role of the key molecule FGF23 may reveal additional causes of AF, which holds significant implications for the early prediction and elucidation of AF mechanisms. The generation of AF requires rapid ectopic triggering involving foci and the AF matrix. Abnormal intracellular calcium handling figures prominently in the formation of ectopic rhythm during AF. Intracellular calcium overload during the diastolic period is the main factor that facilitates the occurrence of delayed afterdepolarizations (DADs) and causes ectopic rhythm. Increased calcium inflow and sarco/endoplasmic reticulum Ca2+-ATPase (SERCA) activity can increase the calcium load of the SR. Calcium ion leakage during diastole may be accounted for by an increased frequency of RyR2 opening caused by a higher SR load, elevated expression and activity of RyR2 protein, as well as excessive activation of RyR2 channels due to phosphorylation induced by protein kinase A (PKA) and calmodulin-dependent protein kinase II (CaMKII).
In addition, the reduced interaction between RyR2 and its stable subunits also modulates the likelihood of RyR2 opening . All of the above factors can lead to intracellular calcium overload and activate instantaneous transient inward current (I ti ), thereby promoting the occurrence of DAD. TA will be induced at a certain threshold, generating ectopic rhythm and promoting the occurrence of AF. FGF23 can increase intracellular calcium levels in isolated cardiomyocytes, as well as the contractility of primary mouse cardiomyocytes and ventricular muscle bundles . FGF23 may cause intracellular calcium overload by directly or indirectly regulating the expression and function of calcium‐regulated proteins, leading to increased ectopic rhythm. From this point, FGF23 exerts an important role in the occurrence and maintenance of AF. Traditionally, it is believed that external stimulation causes the bone to release FGF23 into the circulation, thus resulting in cardiotoxic effects. Recent studies have indicated that the expression of FGF23 is not limited to bone, but can also be expressed in cardiomyocytes and up‐regulated in pathological cardiac remodelling and other environments , suggesting that the cardiotoxicity of FGF23 may be partially due to the paracrine or autocrine effects of cardiac‐derived FGF23. The regulatory effect of cardiac FGF23 on intracellular calcium remains unclear. In the present study, we generated a mouse model with cardiac‐specific knockout of FGF23 to elucidate the effect of the cardiac FGF23 itself on the progression of AF. Meanwhile, we isolated NMAMs overexpressing FGF23 via lentivirus gene transfer to investigate the electrophysiological effects of cardiac FGF23 on intracellular calcium in atrial cells and identify its underlying mechanism.
Materials and Methods

Statistical Analysis

Continuous variables were presented as mean ± standard error (SE). One-way ANOVA with Bonferroni post hoc analysis or Student's t-test was used for comparison between groups. Fisher's exact test was used to compare the incidences of atrial tachyarrhythmia and oscillations in membrane potentials (DADs) between the groups. SPSS software (version 20.0) was used for data analysis. Statistical significance was set at p < 0.05.
Animal Model Generation Mice were housed and cared for at PLA General Hospital Laboratory (Beijing, China). FGF23 floxed mice were generated based on CRISPR/Cas9 technology by Saiye Biotechnology Co., LTD (Suzhou, China). To obtain mice with cardiac myocyte‐specific FGF23 deletion (FGF23 f/f MyHC Cre/+ , FGF23‐CKO), FGF23 floxed C57BL/6J mice were crossed with transgenic MyHC‐Cre C57BL/6J mice. Transgenic MyHC‐Cre C57BL/6 J mice were used as controls. The study showed that the level of FGF23 protein from cardiomyocytes in the heart of healthy adult mice increased in an age‐dependent manner . Furthermore, oestrogen can affect the expression and function of ion channels in arrhythmias, thereby affecting the calcium current of myocardial cells. Consequently, male mice aged 3–6 months with stable cardiac FGF23 protein levels were selected for the experiment.
Echocardiography Echocardiography was performed using the Vevo 2100 Imaging System (FUJIFILM Visual Sonics Inc.) as previously described . Anaesthesia was induced with 3% isoflurane and maintained with 1% isoflurane over a breathing mask. Echocardiographic views were acquired in the parasternal long axis in B‐ and M‐modes, and the short axis in M‐mode. All echocardiographic images were analysed using the Vevo Lab 3.2.0 software.
Intraesophageally Burst Pacing and Induction of AF All experiments were performed as previously described . Briefly, mice were anaesthetised with an intraperitoneal injection of 1% pentobarbital sodium, and their ECG recordings were instrumented with subcutaneous electrodes (Power Lab 16/35, ad Instruments, Castle Hill, NSW, Australia). Intraesophageal burst pacing was used to assess susceptibility to AF. The electrode was inserted through the oesophagus and placed at the site with the lowest threshold for atrial capture. Atrial pacing was performed at twice the diastolic threshold value using two poles on the pacing catheter. First, the pulse was paced for 10 s with a circle length of 100 ms, followed by burst stimulation for 10 s with a circle length of 30 ms until 10 consecutive burst stimulations or AF were induced. AF was defined as irregular, rapid atrial activation, with varying electrogram morphology lasting ≥ 10 s. All mice were allowed a 3 min of recovery in sinus rhythm between stimulations for respiratory and circulatory recovery. Given the difficulty in inducing AF in healthy mice using only tachypacing, an intraperitoneal injection of 1.5 mg/kg isoproterenol (Iso) (Sigma, USA) was performed to increase susceptibility to AF. The occurrence and duration of AF in each group were observed and recorded.
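The burst-pacing timing described above can be summarized as a small parameter set, shown in the sketch below. This is our own restatement of the protocol (a 10-s drive train at a 100 ms cycle length, 10-s bursts at a 30 ms cycle length, up to 10 bursts, 3-min recovery, and AF defined as irregular rapid atrial activation lasting at least 10 s); it is illustrative only and is not the authors' acquisition or analysis code.

```python
# Illustrative restatement of the transesophageal burst-pacing protocol
# described above (not the authors' acquisition code).
PROTOCOL = {
    "drive_train": {"cycle_length_ms": 100, "duration_s": 10},  # 10 Hz pre-pacing
    "burst":       {"cycle_length_ms": 30,  "duration_s": 10},  # ~33 Hz burst
    "max_bursts": 10,
    "recovery_between_bursts_s": 180,   # 3 min in sinus rhythm
    "af_min_duration_s": 10,            # irregular rapid atrial activation >= 10 s
}

def pacing_rate_hz(cycle_length_ms: float) -> float:
    """Convert a pacing cycle length to stimulation frequency."""
    return 1000.0 / cycle_length_ms

def is_af_episode(duration_s: float, irregular: bool) -> bool:
    """Apply the AF definition used in the study."""
    return irregular and duration_s >= PROTOCOL["af_min_duration_s"]

print(pacing_rate_hz(100), pacing_rate_hz(30))  # 10.0 Hz and ~33.3 Hz
print(is_af_episode(41.4, irregular=True))      # True -> counted as AF
```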
Adult Atrial Myocytes Isolation As previously described, atrial myocytes were isolated using enzymatic digestion . Briefly, the heart was quickly removed from the thorax. Subsequently, aortic cannulation was performed to flush the residual blood. The well‐perfused heart tissues were digested using trypsin(Gibco, USA) and collagenase(Worthington, USA). The Langendorff system facilitated the procedures described above. After digestion, the atrial myocytes were centrifuged and resuspended. Ca 2+ was gradually reintroduced into the 10 mL cell suspension in a stepwise manner to avoid calcium paradox and calcium overload. Then, a total of 50 μL 100 mM CaCl 2 (at every 5 min interval of 5, 10, 15 and 20 μL, respectively) was gradually added to the cell suspension. Atrial myocytes were transferred into a 1.8 mM Ca 2+ ‐containing Tyrode's solution for measuring intracellular Ca 2+ and membrane currents/potentials. The atrial myocytes were enzymatically isolated. After calcium reintroduction, there are about 70%–80% of myocytes that remain in the survival state. Only myocytes with a rod shape, clear striations, and stable contractions were selected for the experiment. All experimental procedures were limited to 6 h. The isolation solution used in this study contains (in mmol/L): 113 NaCl, 4.7 KCl, 0.6 KH 2 PO 4 , 0.6 Na 2 HPO 4 , 1.2 MgSO 4 , 12 NaHCO 3 , 10 KHCO 3 , 10 HEPES, 15 taurine, 5 glucoses and 10 2,3‐butanedione monoxime.
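The stepwise calcium reintroduction described above amounts to a gentle ramp in bath Ca2+. The sketch below works through that arithmetic for the stated volumes (5, 10, 15 and 20 μL of 100 mM CaCl2 into roughly 10 mL of suspension), neglecting any Ca2+ already present; it is our own illustration of why the additions approach the final ~0.5 mM gradually rather than in one jump.

```python
# Cumulative added Ca2+ after each reintroduction step described above.
# Assumes a 10 mL starting volume and neglects Ca2+ already in the suspension.
stock_mM = 100.0
start_volume_mL = 10.0
additions_uL = [5, 10, 15, 20]          # added at ~5-min intervals

added_umol = 0.0
volume_mL = start_volume_mL
for step, vol_uL in enumerate(additions_uL, start=1):
    added_umol += stock_mM * vol_uL / 1000.0   # 100 umol/mL * mL added
    volume_mL += vol_uL / 1000.0
    conc_mM = added_umol / volume_mL           # umol per mL = mM
    print(f"after step {step}: ~{conc_mM:.2f} mM added Ca2+")
# -> roughly 0.05, 0.15, 0.30 and 0.50 mM, i.e. a gentle ramp toward ~0.5 mM
```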
Isolation, Culture and Gene Transfer of NMAMs Neonatal C57/BL6J mice (1–2 days old) were sacrificed by decapitation. Hearts were quickly excised, and the atria were dissected and transferred into ice‐cold Hank's balanced salt solution (HBSS) without Ca 2+ and Mg 2+ (Gibco, USA), then rapidly minced by three to five cuts with a fresh scalpel blade and placed in a 10 mL Penicillin vial containing 5 mL of enzymatic dissociation medium. The trypsin dissociation medium contained 0.08% trypsin (Gibco, USA) and 0.001 mg/mL collagenase II (Worthington, USA) dissolved in CMF HBSS. The collagenase dissociation medium contained 2 mg/mL collagenase (type 2; Worthington, USA) and 1 mg/mL BSA(Sigma, USA). The supernatant from each 5 min dissociation cycle was filtered through a 100‐μm cell strainer (Falcon, USA) in 5 mL cell culture media supplemented with 10% FBS (Gibco, USA)and 100 U/mL penicillin–streptomycin (Invitrogen, USA). After the fourth dissociation cycle at 37°C, the remaining tissue was triturated by gentle pipetting and strained, and the cell suspension was pelleted by centrifugation at 500 g for 10 min. The pellet was resuspended in 10 mL of culture media, and myocytes were enriched by 90 min of differential cell adhesion at 37°C in a 100‐mm culture dish. The supernatant was pelleted by centrifugation and then resuspended. Then 3 × 10 5 NMAMs were seeded on 35‐mm cell culture plates coated with 0.5% gelatin (Sigma, USA)and cultured in DMEM containing 1 g/L glucose plus 15% FBS and 1% penicillin/streptomycin (Gibco, USA)for follow‐up experiments 48 h later. FGF23 overexpression lentivirus vector was constructed by Saiye Biotechnology Co., LTD (Suzhou, China). FGF23 sequence was inserted into EGFP/T2A/Puro lentivirus vector and driven by EFS promoter. NMVMs were infected at a multiplicity of infection (MOI) of 50. Before transfection, the original medium was replaced with fresh and complete culture containing 10 μg/mL Polybrene (Sigma, USA)co‐transfection reagent, and the cells were pretreated for 30 min. After pretreatment, an appropriate amount of virus suspension was added according to the infection complex MOI determined in the previous pre‐experiment, and then put back into the cell incubator for further culture. 16 h after infection, fresh complete culture medium was replaced and continued to culture at 37°C. Two days after transfection, the cells were used for follow‐up experiments.
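For readers estimating virus volumes for the MOI-50 infection described above, the usual relationship is volume = cells × MOI / functional titer. The sketch below is generic: the titer value is an assumption introduced purely for illustration, since the titer of the vector used here is not reported in this section.

```python
# Generic MOI arithmetic for the lentiviral infection described above.
# The titer below is hypothetical; substitute the measured functional titer.
cells_per_well = 3e5          # NMAMs seeded per 35-mm dish (from the Methods)
moi = 50                      # multiplicity of infection used in the study
titer_TU_per_mL = 1e8         # ASSUMED transducing units/mL, illustration only

virus_volume_uL = cells_per_well * moi / titer_TU_per_mL * 1000
print(f"~{virus_volume_uL:.0f} uL of virus stock per well at this titer")
# -> 150 uL for the assumed 1e8 TU/mL; a 1e9 TU/mL stock would need ~15 uL
```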
Patch Clamp Experiments

Patch-clamp experiments were used to record action potentials and I Ca,L in atrial myocytes. Membrane potential and membrane currents were recorded at physiological temperature (35°C–37°C) using the patch-clamp technique in whole-cell recording mode. Temperature was controlled with a TC-344C heater controller (Warner Instruments). A P70 horizontal puller (Sutter Instruments) was used to pull borosilicate glass pipettes (World Precision Instruments) to a resistance of 2–4 MΩ. Membrane potentials were sampled at 10 kHz using a Digidata 1440A (Axon Instruments) controlled by pCLAMP programs (version 10.2). Cell capacitance, series resistance (compensation, 70%–80%), and junction potentials were compensated using the circuitry of the Axon Multiclamp 700B amplifier (Molecular Devices, USA), and signals were low-pass filtered at 5 kHz. Protocols, data acquisition, storage, analysis, current fitting, and offline subtraction were performed using Clampfit 10.4 (Molecular Devices), and all curves were fitted with Origin (Microcal Software). Action potentials were recorded after applying a 2.5 ms/1 nA depolarizing stimulus in the current-clamp mode. Delayed afterdepolarizations (DADs) were defined as spontaneous depolarizations of the impulse occurring after full repolarisation. Micropipettes were filled with a solution containing (in mM): KCl 20, K aspartate 110, MgCl2 1, Mg2ATP 5, HEPES 10, EGTA 0.5, NaGTP 0.1 and Na2 phosphocreatine 5, titrated to a pH of 7.2 with KOH. Cells were bathed in a solution that contained NaCl 140 mM, CaCl2 1 mM, MgCl2 1 mM, HEPES 10 mM, KCl 4 mM and glucose 5 mM; its pH was 7.36, adjusted with CsOH. I Ca,L was recorded in the voltage-clamp mode from a holding potential of −80 mV. Depolarizing pulses from −60 mV to +50 mV were applied in 10 mV steps, with each depolarization pulse lasting 200 ms. These recordings were used to plot the current–voltage relationship (I–V curve), steady-state activation (SSA), or steady-state inactivation (SSI) curves. To record I Ca,L, the pipette solution contained CsCl 120 mM, CaCl2 1 mM, MgCl2 5 mM, EGTA 11 mM, HEPES 10 mM, and Na2ATP 5 mM, and was adjusted to pH 7.2 with CsOH. The external solution contained NaCl 140 mM, CaCl2 2 mM, MgCl2 1 mM, KCl 4 mM, HEPES 10 mM and glucose 10 mM and was adjusted to pH 7.4 with CsOH. Tetrodotoxin (5 μM) was added to block sodium current when recording calcium currents. Current amplitude data of each cell were normalised to its cell capacitance (current density, pA/pF), and the I–V curve was plotted. Voltage-dependent activation and steady-state inactivation profiles were fitted to a Boltzmann equation, a = 1/{1 + exp[−(Vm − V1/2)/k]}, where a is the normalised conductance, Vm is the test potential, V1/2 is the potential at which the current is half activated/inactivated, and k is the slope factor. Electrophysiological data were analysed with Clampfit 10.4 (Axon Instruments) and Origin (Microcal Software).
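The Boltzmann fit mentioned above is easy to reproduce with standard curve-fitting tools. The sketch below shows the activation form of the equation fitted with scipy to synthetic data; it is our own illustration (the inactivation curve uses the opposite sign in the exponent), not the Origin workflow used by the authors.

```python
# Generic Boltzmann fit of the form used above: a = 1 / (1 + exp(-(Vm - V_half)/k)).
# Synthetic data for illustration; replace with normalised conductance vs. voltage.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(vm, v_half, k):
    return 1.0 / (1.0 + np.exp(-(vm - v_half) / k))

vm = np.arange(-60, 51, 10, dtype=float)                             # test potentials (mV)
a = boltzmann(vm, -15.0, 6.0) + np.random.normal(0, 0.02, vm.size)   # fake data

(v_half_fit, k_fit), _ = curve_fit(boltzmann, vm, a, p0=(-10.0, 5.0))
print(f"V1/2 = {v_half_fit:.1f} mV, slope factor k = {k_fit:.1f} mV")
```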
Cardiomyocyte Ca 2+ Imaging Isolated atrial myocytes were incubated with 2 μmol/L Fluo-4 AM (Invitrogen, USA) for 20 min in laminin-coated dishes and then equilibrated in fresh Tyrode's solution with 1.8 mM Ca 2+ containing 250 μmol/L probenecid (Sigma, USA) for 20 min to allow dye de-esterification. The Myocyte Calcium & Contractility Recording System (IonOptix, Westwood, MA, USA) was used to record transient changes in Ca 2+ concentrations. Atrial myocytes were preconditioned with 1 Hz field stimulation for at least 20 beats to achieve a steady state, and Ca 2+ sparks were acquired over a 10-s rest period. For determination of SR Ca 2+ load, rapid delivery of 10 mM caffeine was used. Diastolic events (sparks) were recorded using confocal microscopy (SP5, Leica Microsystems, Germany). Only recordings that showed no spontaneous Ca 2+ waves were included in the analysis.
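Calcium transients of this kind are typically summarized as the peak F/F0 amplitude and a mono-exponential decay time constant. The sketch below shows one way such metrics could be computed from a fluorescence trace; the sampling rate, trace and function name are invented for illustration and this is not the IonOptix analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def transient_metrics(trace, fs_hz, baseline_s=0.1):
    """Peak amplitude (delta F/F0) and mono-exponential decay constant (tau, s) of one transient."""
    n_base = int(baseline_s * fs_hz)
    norm = trace / trace[:n_base].mean()          # F/F0, using the pre-stimulus baseline as F0
    peak = int(np.argmax(norm))
    amplitude = norm[peak] - 1.0
    t_after = np.arange(norm.size - peak) / fs_hz
    popt, _ = curve_fit(lambda t, tau: amplitude * np.exp(-t / tau),
                        t_after, norm[peak:] - 1.0, p0=(0.2,))
    return amplitude, float(popt[0])

# Example with a synthetic 1-s trace sampled at 1 kHz (true amplitude 0.8, true tau 0.25 s)
t = np.arange(0.0, 1.0, 0.001)
trace = 1.0 + 0.8 * (t > 0.1) * np.exp(-(t - 0.1) / 0.25)
print(transient_metrics(trace, fs_hz=1000))
```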
Western Blotting Western blotting was used to investigate the expression of calcium transport-related proteins. Left atrial tissue was retrieved from liquid nitrogen storage and lysed in RIPA buffer (Solarbio, China). Protein content was quantified using a BCA reagent kit (Solarbio, China). Protein samples from each group were separated by 4%–12% SDS-PAGE and then transferred to PVDF membranes (Millipore, USA). The membranes were blocked with 5% non-fat milk (Sigma, USA) for 1 h at room temperature and then incubated with the specified primary antibody overnight at 4°C (anti-RyR2, 1:1000 dilution, Abcam, UK; anti-Cav1.2, 1:1000 dilution, Abcam, UK; anti-NCX1.1, 1:1000 dilution, Abcam, UK; anti-SERCA2a, 1:1000 dilution, Abcam, UK; anti-FGF23, 1:2000 dilution, R&D, USA; anti-GAPDH, 1:5000 dilution, Abcam, UK). After being washed three times with TBST, the membranes were incubated with the secondary antibody (1:20000 dilution, ZSBIO, China) at room temperature for 1 h. Specific signals were visualized with the chemiluminescence detection reagent Western Lightning Plus-ECL. GAPDH (1:5000 dilution, Proteintech, USA) was used to normalise protein sample loading. Finally, ImageJ software was used to analyse the gel images.
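Band intensities quantified in ImageJ are commonly normalized to the loading control and then expressed relative to the control group. The sketch below shows only that normalization arithmetic; the densitometry values are made up and do not correspond to any figure in the study.

```python
def relative_expression(target, gapdh, control_mean_ratio):
    """Target/GAPDH band-intensity ratio expressed relative to the control-group mean ratio."""
    return (target / gapdh) / control_mean_ratio

# Hypothetical ImageJ densitometry values (arbitrary units) for the Cre (control) group
cre_ratios = [1250 / 1400, 1180 / 1350, 1330 / 1420]
cre_mean = sum(cre_ratios) / len(cre_ratios)

# One hypothetical FGF23-CKO sample, normalized to GAPDH and then to the Cre mean
print(relative_expression(target=640, gapdh=1390, control_mean_ratio=cre_mean))
```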
RNA Isolation and qRT-PCR Analysis A Cell Total RNA Isolation Kit (FOREGENE, China) was used to prepare total RNA from NMAMs. cDNA was generated using a reverse-transcription system (Thermo Fisher, USA). The obtained cDNA was amplified using a SYBR Premix Kit (Thermo Fisher, USA) on a BIO-RAD CFX96 Real-Time PCR System (Bio-Rad Laboratories, USA). Relative gene expression values were calculated with the 2^−ΔΔCt method using GAPDH as a housekeeping gene.
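The 2^−ΔΔCt calculation can be written out explicitly. The sketch below assumes one target gene normalized to GAPDH, with invented Ct values used purely to show the arithmetic.

```python
def fold_change_ddct(ct_target_sample, ct_gapdh_sample, ct_target_control, ct_gapdh_control):
    """Relative expression by the 2^-ddCt method (sample vs. control, normalized to GAPDH)."""
    d_ct_sample = ct_target_sample - ct_gapdh_sample
    d_ct_control = ct_target_control - ct_gapdh_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: hypothetical FGF23 Ct values in FGF23-OE vs. vector-transfected NMAMs
print(fold_change_ddct(ct_target_sample=22.1, ct_gapdh_sample=17.9,
                       ct_target_control=26.3, ct_gapdh_control=18.0))  # ~17-fold
```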
Results Mice With Cardiac Myocyte-Specific Knockout of FGF23 Reveal Normal Cardiac Function Whole-gene FGF23 knockout mice survived no more than 13 weeks of age. A cardiac myocyte-specific FGF23 knockout mouse model was therefore established, allowing the role of cardiac-derived FGF23 in the development of AF to be examined in more detail (Figure ). Two LoxP sites were inserted upstream and downstream of exon 2, respectively, which was deleted after Cre recombinase activation. Compared with Cre mice, western blotting showed lower FGF23 levels in cardiomyocytes of the conditional knockout mice (Figure ). The parameters of cardiac structure were measured by echocardiography (Figure ); there was no difference in cardiac function after FGF23 conditional knockout (Figure ). Following infection of C57BL/6J mouse atrial myocytes with an FGF23-overexpressing lentiviral vector (FGF23-OE) or control vector, qRT-PCR and western blot analysis showed that, upon overexpression of FGF23, the expression of FGF23 mRNA and FGF23 protein in atrial myocytes was significantly increased (Figure ). Susceptibility to AF was decreased in FGF23-CKO mice, as shown in Figure . AF was observed after rapid transesophageal atrial pacing and could spontaneously convert to a normal sinus rhythm shortly thereafter. After acute intraperitoneal injection of Iso (1.5 mg/kg) for 15 min, the incidence of AF in the FGF23-CKO group (4.34%, n = 1/23) was significantly decreased compared with the Cre group (33.3%, n = 5/15) (p < 0.05) (Figure ). Figure shows a magnified view of the recovery of sinus rhythm from AF in Figure . No difference was found in the duration of AF between the two groups (FGF23-CKO 41.45 ± 0 s vs. Cre 40.08 ± 7.73 s, p > 0.05) (Figure ).
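Group differences in AF incidence of this size (1/23 vs. 5/15) are typically compared with an exact test for a 2x2 table. The snippet below illustrates such a comparison with SciPy; it is offered as a generic illustration of how these counts can be tested, not as the authors' exact statistical procedure.

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = FGF23-CKO vs. Cre mice, columns = developed AF vs. no AF after Iso challenge
table = [[1, 22],   # FGF23-CKO: 1 of 23 mice developed AF
         [5, 10]]   # Cre:       5 of 15 mice developed AF
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```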
Increased TA in FGF23-Overexpressing NMAMs An FGF23-overexpression model of neonatal cardiomyocytes was established. Atrial myocytes in each group were electrically stimulated in current-clamp mode, and a higher incidence of triggered activity (TA) was observed in FGF23-overexpressing NMAMs than in vector-transfected cells, which may promote the occurrence of AF (FGF23-OE 65% (13/20) vs. Vector 30% (6/20); n = 20 cells from 6–10 neonatal mice per group, p < 0.05) (Figure ).
Increased I Ca,L in FGF23-Overexpressing NMAMs After overexpression of FGF23, I Ca,L was significantly increased in neonatal mouse atrial myocytes (FGF23-OE −14.17 ± 1.09 pA/pF vs. Vector −9.96 ± 0.53 pA/pF, n = 10, p < 0.05) (Figure ). Analysis of the gating properties of I Ca,L showed that overexpression of FGF23 had no effect on the activation curve of the calcium current, whereas the SSI half-inactivation voltage of the calcium current was shifted to the right, with a statistically significant difference (p < 0.05) (Figure ). In studies of the recovery kinetics of I Ca,L after inactivation, recovery from inactivation was accelerated when FGF23 was overexpressed (Figure ).
Increased Ca 2+ Transients in FGF23-Overexpressing NMAMs Calcium release and uptake were recorded in the two groups of NMAMs (Figure ). The amplitude of calcium release was significantly increased after overexpression of FGF23 in neonatal atrial myocytes (FGF23-OE 1.29 ± 0.02 vs. Vector 0.83 ± 0.01, n = 20, p < 0.05) (Figure ). The time to half-peak calcium release, the half-time of calcium decay, and the calcium decay time constant did not differ significantly (Figure ).
Decreased DADs in Atrial Myocytes of FGF23-CKO Mice To verify whether FGF23-CKO mice show a phenotype corresponding to that of the neonatal cardiomyocytes, atrial myocytes were isolated from adult mice and the parameters described above were examined. The incidence of DAD/TA was significantly decreased in the FGF23-CKO group after Iso treatment of atrial myocytes (FGF23-CKO + Iso 24% (6/25) vs. Cre + Iso 56% (14/25), p < 0.05) (Figure ).
Decreased I Ca,L in Atrial Myocytes of FGF23-CKO Mice I Ca,L was recorded in the two groups of atrial myocytes in voltage-clamp mode (Figure ). The current density was obtained as current amplitude divided by membrane capacitance (pA/pF), and the I-V curve of I Ca,L was obtained by plotting the current density against the stimulation voltage. As shown in Figure (right), the I-V curve of the FGF23-CKO group was shifted significantly upward compared with the control group, and the I Ca,L current densities decreased significantly between −30 mV and +30 mV. The shape of the I-V curves did not differ between the two groups, showing a typical inverted bell shape. In both groups the current density reached its maximum at a clamping voltage of 0 mV, and the peak I Ca,L current density decreased from −9.42 ± 0.78 pA/pF in Cre controls to −5.75 ± 0.39 pA/pF in FGF23-CKO mice (n = 10, p < 0.05). Analysis of the I Ca,L gating mechanism showed an increased slope factor k of the steady-state activation curve in FGF23-CKO mice (FGF23-CKO 3.30 ± 0.34 vs. Cre 5.97 ± 0.91, n = 15, p < 0.05) (Figure (middle)). There was no significant difference in the half-inactivation voltage or slope factor of the steady-state inactivation curve (Figure (right)). In studies of the recovery kinetics of I Ca,L, the recovery of I Ca,L in FGF23-CKO mice was decelerated after inactivation, and the recovery time constant was increased (FGF23-CKO −169.21 ± 6.26 ms vs. Cre −114.33 ± 2.50 ms, n = 10, p < 0.05) (Figure ).
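The construction of current density and the I-V relationship follows directly from dividing the peak current at each test potential by the cell capacitance. A minimal sketch of that step is shown below, with invented current amplitudes and capacitance used only to illustrate the calculation.

```python
import numpy as np

test_potentials_mv = np.arange(-60, 51, 10)
# Hypothetical peak ICa,L amplitudes (pA) at each test potential for one cell
peak_current_pa = np.array([-20, -60, -180, -420, -700, -850, -900, -780, -560, -320, -120, -20])
cell_capacitance_pf = 95.0

current_density = peak_current_pa / cell_capacitance_pf  # pA/pF, plotted vs. voltage for the I-V curve
v_at_peak = test_potentials_mv[np.argmin(current_density)]
print(f"peak density {current_density.min():.1f} pA/pF at {v_at_peak} mV")
```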
Reduced Systolic Ca 2+ Transients and Diastolic Spontaneous Ca 2+ Leak in Atrial Myocytes of FGF23-CKO Mice The calcium release amplitude of FGF23-CKO mice was decreased compared with that of Cre mice (FGF23-CKO 1.48 ± 0.06, n = 15 vs. Cre 2.88 ± 0.14, n = 22, p < 0.05) (Figure ). There was no significant difference in the time to half-peak calcium release, the half-time of calcium decay, or the decay time constant (Figure ). In addition, the effect of cardiac FGF23 on diastolic calcium handling was examined. The calcium spark frequency in the FGF23-CKO group was reduced (Figure (left)). In the analysis of calcium spark kinetics, the full duration at half maximum (FDHM) remained unchanged (Figure (right)) and the full width at half maximum (FWHM) was decreased (Figure (middle)). The effect of cardiac FGF23 on SR calcium content was also studied. After field stimulation of atrial cardiomyocytes to reach a steady state, caffeine was applied to deplete the SR of calcium (Figure ). There was no significant difference between the two groups in the amplitude of caffeine-evoked calcium transients, an estimate of SR calcium load (Figure ). The time constant of calcium decay during continuous caffeine perfusion was increased in the FGF23-CKO group (Figure (middle)).
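FWHM (spatial spread) and FDHM (temporal spread) of a calcium spark can be read off a line-scan profile as the extent over which fluorescence exceeds half of its peak amplitude. The sketch below illustrates that calculation on an invented one-dimensional profile; it is a simplified stand-in for, not a reproduction of, the confocal spark analysis used in the study.

```python
import numpy as np

def width_at_half_max(profile, step):
    """Extent (in units of `step`) over which a spark profile exceeds half of its peak amplitude."""
    baseline = profile.min()
    half_max = baseline + 0.5 * (profile.max() - baseline)
    return np.count_nonzero(profile >= half_max) * step

# Hypothetical spatial profile of a spark, sampled every 0.2 µm along the scan line
space_profile = 1.0 + np.exp(-np.linspace(-3, 3, 61) ** 2)
print(f"FWHM ~ {width_at_half_max(space_profile, step=0.2):.1f} µm")
```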
Changes in Expression of Ca 2+ Handling Proteins in Adult Atrial Myocytes of FGF23-CKO Mice The effects of FGF23-CKO on calcium handling proteins were assessed in atrial tissue from the two groups of mice. FGF23-CKO and Cre mice showed similar expression of SERCA2a protein, which is responsible for SR calcium uptake (Figure ). However, compared with Cre mice, FGF23-CKO mice showed lower expression of RyR2, the protein responsible for calcium release from the SR (Figure ). Additionally, a significant 80% decrease in the expression of Cav1.2, a cell membrane protein responsible for calcium handling, was observed in the FGF23-CKO group, while there was no significant change in the expression of NCX1.1 (Figure ). Consistently, the expression of RyR2 protein was up-regulated, but the protein level of Cav1.2 remained unchanged, in FGF23-overexpressing NMAMs (Figure ).
Discussion Previous studies have demonstrated an extremely low rate of AF induced by burst stimulation in healthy young mice, making AF induction challenging. In this study, Iso was used to increase the susceptibility of mice to AF. The incidence of AF was increased to 33.3% in the control Cre group, whereas it was only 4.34% in the FGF23-CKO group, suggesting that FGF23-CKO mice were less susceptible to AF under the condition of acute Iso-induced β-adrenergic stress. Electrophysiological recordings from the two groups of mice revealed that the incidence of Iso-induced DAD or TA was significantly reduced in FGF23-CKO mice compared with the Cre group. Isoproterenol, by binding to cardiomyocyte β receptors, activates calcium channels through the intracellular AC-cAMP-PKA signalling pathway, thus increasing calcium influx and enhancing the automaticity, conduction and contractility of cardiomyocytes. Our previous studies have demonstrated that Iso can increase calcium influx through L-type calcium channels and enhance the sensitivity of RyR2 channels in the SR, which further promotes calcium leakage during diastole. As a result, intracellular calcium rises and activates the sodium-calcium exchanger in the cell membrane, thereby generating a transient inward current (Iti) and increasing the occurrence of DADs. Relevant studies have shown that DAD-mediated TA contributes to the occurrence of AF, suggesting that the decreased incidence of AF in the FGF23-CKO group is due to the reduced incidence of Iso-induced DADs. Conversely, the opposite results were found after overexpression of FGF23 in neonatal atrial myocytes in the present study: the incidence of TA was increased when FGF23 was overexpressed in atrial myocytes. Unlike adult cardiomyocytes, neonatal mouse cardiomyocytes undergo a more rapid dedifferentiation-redifferentiation cycle, typically resulting in spontaneous beating of the cells 20 h after plating, whereas adult cardiomyocytes typically require pacing to induce contraction. Therefore, newborn mouse cardiomyocytes do not need to be induced by drugs to generate triggered activity. DADs result from the disturbance of intracellular calcium cycling. Physiologically, when cardiomyocytes depolarise, a small amount of extracellular calcium rapidly flows into the cell through the L-type calcium channel in the cell membrane. Calcium ions bind to the RyR2 receptor in the SR to open the calcium channel, resulting in the release of a large amount of calcium from the SR into the cytoplasm, which causes the contraction of cardiomyocytes. This process is called calcium-induced calcium release (CICR). During myocardial diastole, excess intracellular calcium is removed through two main mechanisms: reuptake into the SR through the sarcoplasmic reticulum calcium ATPase pump (SERCA), or extrusion of intracellular calcium into the extracellular space through the membrane sodium-calcium exchanger (NCX). Dysfunction at any of these steps can disturb intracellular calcium cycling. Consequently, further investigation of intracellular calcium handling revealed that the peak calcium current and I Ca,L influx were significantly reduced in the FGF23-CKO group. The gating mechanism of I Ca,L in the FGF23-CKO group exhibited an increased slope factor of the activation curve, indicating that I Ca,L activation was slowed and that the number of channels activated at the same voltage was decreased.
Additionally, it was observed that the recovery time after inactivation of L-type calcium channels was prolonged, which represents a gating mechanism for the reduction of I Ca,L. Immunoblotting showed that the protein expression of Cav1.2 in the atrial muscle tissue of the FGF23-CKO group was nearly 40% lower than that of the Cre group. Together, these changes mediated a reduction in I Ca,L, thereby reducing calcium influx and maintaining a low intracellular calcium environment. After overexpression of FGF23, the peak current density of I Ca,L was significantly increased. Analysis of the I Ca,L gating mechanism showed that the inactivation curve moved significantly towards depolarization and that the inactivation of I Ca,L was slowed, suggesting an increase in the number of effective I Ca,L channels at the same voltage. Despite no difference in the recovery time constant, the recovery curve after inactivation was significantly shifted to the left after overexpression of FGF23. These results indicate that calcium influx through L-type calcium channels, as well as the intracellular calcium level, can be increased by cardiac FGF23. However, there was no significant difference observed in the protein level of Cav1.2 in vitro, which was inconsistent with the results in vivo. Furthermore, the results demonstrated clear effects on the gating mechanism of I Ca,L in both the in vivo and in vitro groups, suggesting that cardiac FGF23 mainly increases intracellular calcium influx by regulating the gating mechanism of L-type calcium channels. In 2013, Touchberry et al. found that intracellular calcium levels could be increased in cardiomyocytes treated with FGF23 in vitro and further found that the increase in intracellular calcium could be blocked by verapamil, a calcium channel blocker. This strongly suggests that the increase in intracellular calcium may be due to increased I Ca,L influx. In 2014, Kao et al. also found that FGF23 could increase I Ca,L in HL-1 atrial cells, but they did not further explore the mechanisms underlying the changes in I Ca,L. Graves et al. found that perfusion of FGF23 into the heart increased intracellular calcium in ventricular myocytes, indicating that FGF23 can open calcium channels in the cell membrane to promote calcium influx and thereby raise intracellular calcium, a result similar to that of the present study. However, their study mainly examined ventricular muscle strips and isolated hearts, not single cardiomyocytes, and the results may also be influenced by the effects of FGF23 on other cells and by cell–cell interactions. In addition, they found that FGF23 can induce a prolonged QTc interval, suggesting that FGF23 may increase the risk of arrhythmia by affecting the repolarization process of cardiomyocytes. The continuous inflow of L-type calcium current can prolong the action potential plateau, which may be one of the reasons for the prolongation of the QTc interval caused by FGF23. However, the repolarization of cardiomyocytes is also related to other ions, such as potassium, so this may be the result of the combined action of multiple ion currents. The action potential plateau of ventricular muscle is longer than that of atrial muscle cells, and FGF23 may therefore be more likely to cause repolarization-related ventricular arrhythmia. However, in 2019, findings from Navarro-Garcia et al. were inconsistent with ours.
They demonstrated that in vitro treatment of cardiomyocytes with FGF23 reduced I Ca,L. This discrepancy may be attributed to differences in the concentration of FGF23 and the duration of treatment, as variations in I Ca,L may be dose-dependent and time-dependent. At the same time, measurement of intracellular calcium release showed that the calcium transient amplitude in the FGF23-CKO group was decreased under 1 Hz stimulation, indicating a reduction in calcium-induced calcium release triggered by calcium influx, whereas the reuptake of calcium into the SR was not affected. The expression of RyR2 protein in atrial muscle tissue of the FGF23-CKO group was observed to be lower than that of the Cre group. In the FGF23-CKO group, fewer calcium ions were released from the SR during systole owing to the decreased expression of RyR2. The opposite results were observed after overexpression of FGF23 in neonatal mouse atrial myocytes, with increases in the calcium transient amplitude and in RyR2 protein expression. RyR2 typically remains closed during diastole, but a small portion of RyR2 channels will open spontaneously during diastole. In this study, the frequency of diastolic calcium sparks in FGF23-CKO mice was reduced. Calcium spark kinetics showed that FDHM remained unchanged and FWHM was decreased, indicating that in FGF23-CKO mice it was mainly the spatial spread of the diastolic sparks, rather than their duration, that was affected. Decreased RyR2 expression results in a reduced distribution density of RyR2 in the SR, thereby decreasing the spreading efficiency of RyR2 clusters. Numerous studies have shown that, in addition to increased sensitivity and instability of RyR2 leading to increased calcium leakage during diastole, non-spark-mediated diastolic calcium leakage also plays a certain role. The underlying mechanism is that a single RyR2 channel opens spontaneously within a release cluster while the other channels in the cluster remain inactive, or that isolated, non-clustered RyR2 channels are activated. A decrease in the expression of RyR2 can therefore also reduce non-spark-mediated RyR2 calcium leakage during diastole, thereby reducing the occurrence of AF. Increased amplitude of calcium transients and increased SR calcium load after FGF23 treatment of HL-1 cells were also shown in previous studies, which is consistent with the results of this study. In the caffeine-induced release of the calcium pool, there was no difference in SR calcium pool capacity between the two groups. The magnitude of calcium transients is related to the SR calcium load, and extracellular calcium influx and SERCA activity can affect the SR calcium load. An increase in SR load can increase the frequency of calcium sparks during diastole, thereby enhancing the activity of NCX during diastole and increasing the occurrence of DADs, which promotes AF. We observed that the calcium decay time constant was increased and calcium removal was slowed in the FGF23-CKO group after caffeine-induced calcium release. During the 1 Hz stimulation phase, the decay of calcium transients is attributable to SERCA in the SR and NCX in the cell membrane, and the decay time constant of calcium transients (Tau-1 Hz) reflects the combined activity of SERCA and NCX. SERCA, however, cannot rebuild SR calcium reserves during the caffeine infusion phase.
Thus, the decay of caffeine-induced calcium transients (Tau-caffeine) was primarily attributable to NCX, which indicated a weakening of NCX function in the FGF23-CKO group. However, this study did not find any difference in the level of NCX protein. It is evident that the quantity of NCX remained unchanged but the function of NCX was weakened, thereby reducing the transient inward current (Iti) and the occurrence of DADs in FGF23-CKO mice. However, no significant differences were observed in sarcoplasmic reticulum calcium load between the two groups of mice, which may be attributed to the fact that the experiments were conducted under physiological conditions; the impact of cardiac FGF23 on calcium load under pathological conditions remains unexplored. Although numerous studies in various laboratories have assessed the intracellular calcium environment and have consistently demonstrated that FGF23 can increase the intracellular calcium concentration, there is as yet no comprehensive report in the literature identifying the specific source of this calcium, and the impact of FGF23 on I Ca,L and RyR2 remains controversial. Kao et al. found that 25 ng/mL of FGF23 could increase I Ca,L in HL-1 atrial cells and heighten the amplitude of calcium release, which further increases the SR calcium load. However, Navarro-Garcia et al. used 100 ng/mL of FGF23 to perfuse adult Wistar rat cardiomyocytes and observed decreases in I Ca,L, the amplitude of calcium release, and the SR calcium load. They also found that the calcium spark frequency of cardiomyocytes was increased when incubated with FGF23. These results were completely inconsistent, and a reasonable explanation is that FGF23 exerts different acute and long-term effects on cardiomyocytes. FGF23, like other well-defined stress hormones such as norepinephrine, epinephrine, and angiotensin II, can increase I Ca,L influx and excitation-contraction coupling to enhance myocardial contractile force. However, long-term exposure to FGF23 may lead to the disturbance of calcium cycling, which in turn activates transcriptional remodelling mechanisms, resulting in long-term functional impairment and ultimately leading to cardiac hypertrophy. The continuous inflow of I Ca,L can increase the SR calcium load, and when the sarcoplasmic reticulum load exceeds the threshold, diastolic calcium leakage will also be increased. Meanwhile, pathways such as CaMKII signalling and pathways that promote the formation of intracellular ROS can be activated by FGF23 to increase the opening probability of RyR2. The increase in diastolic calcium leakage aggravates intracellular calcium overload, leading to a gradual reduction in SR load and a reduction in the amplitude of calcium-induced calcium release. This also explains why the long-term effect of FGF23 on cardiomyocytes weakens myocardial contractility, which is mainly due to the remodelling of RyR2 caused by long-term, persistent I Ca,L entry. However, Navarro-Garcia et al. did not treat these cells for a long time; after only 1–3 min of treatment, they found that FGF23 could cause intracellular calcium disturbance. The high concentration of FGF23 they used, however, indicates that the effect of FGF23 on cardiomyocytes may be concentration-dependent.
In this study, FGF23 was overexpressed in cardiomyocytes, so the FGF23 protein produced by the cells themselves could accumulate around them and act on cardiomyocytes in an autocrine or paracrine manner. However, the concentration of FGF23 generated by transcription is unpredictable, and assessment of the effect of FGF23 on cardiomyocytes is therefore limited by this approach. 4.1 Limitations FGF23 can be produced by cardiomyocytes, and non-cardiomyocytes such as cardiac fibroblasts and endothelial cells can also express FGF23 protein. Up-regulation of cardiac FGF23 in a cardiac remodelling environment may therefore also be attributable to non-cardiomyocytes. This study only investigated the mechanism linking FGF23 and AF in cardiomyocytes, not in other cell types. Gender differences play an important role in arrhythmia; however, only male mice were used in the animal experiments, so gender differences could not be examined in depth when considering the mechanism by which cardiac FGF23 contributes to the occurrence of atrial fibrillation. The SERCA protein, the primary mechanism for removal of cytosolic calcium, is regulated by phospholamban (PLN). This study only investigated SERCA protein expression and its functional alterations and did not explore the regulatory effects of phospholamban on SERCA activity.
Conclusion Cardiomyocyte-specific FGF23 knockout mitigates susceptibility to Iso-induced atrial fibrillation. The reduced vulnerability to atrial fibrillation is associated with a reduction in the occurrence of DADs, which arises from impaired NCX function. This alteration primarily stems from a decrease in intracellular calcium levels: the diminished membrane L-type calcium current results in reduced calcium-induced calcium release and a decreased sarcoplasmic reticulum calcium load, ultimately leading to a decline in diastolic calcium leakage. These changes in calcium handling are correlated with the expression of Cav1.2 and RyR2, proteins involved in the regulation of calcium transport.
Xiao‐Qian Li: investigation (equal), writing – original draft (equal). Mei‐Qiong Wu: data curation (equal), investigation (equal), writing – review and editing (equal). Li‐Hua Fang: investigation (equal), writing – original draft (equal). Qian Chen: data curation (equal), investigation (equal). Zhi‐Jie Chen: data curation (equal), investigation (equal). Zhu‐Hui Lin: investigation (equal). Jian‐Quan Chen: data curation (equal). Panashe Makota: investigation (equal). Yang Li: conceptualization (equal), funding acquisition (equal), project administration (equal), writing – review and editing (equal). Jian‐Cheng Zhang: conceptualization (equal), funding acquisition (equal), project administration (equal).
All our experimental procedures were approved by the Ethics Committee of PLA General Hospital and performed in accordance with the Guide for the Care and Use of Laboratory Animals (NIH Publication No. 23, revised 1996).
The authors confirm that there are no conflicts of interest.
|
Clinical prediction tools for patient-reported outcomes in gastrointestinal cancer: a scoping review protocol | edcad873-94cc-4b8e-b18a-36171d3c5b27 | 11950955 | Patient-Centered Care[mh] | Cancer poses a significant burden on global health. Gastrointestinal (GI) cancers account for 26% of the global cancer incidence and 35% of all cancer-related deaths. Cancer diagnosis and treatment can cause significant physical and emotional distress, which, if not appropriately addressed, can lead to a diminished quality of life. Timely identification and appropriate management of patient-reported symptoms have been shown to improve patients’ quality of life and overall survival as they promote patient-centred care, a core component of cancer care. The National Cancer Institute defines patient-reported outcomes (PROs) as information about a patient’s health that comes directly from the patient. Examples include a patient’s description of theirPatient-reported outcome measures (PROMs) are ‘measurement tools that patients use to provide information on aspects of their health status that are relevant to their quality of life, including symptoms, functionality, and physical, mental and social health’. PROs are important to understanding whether healthcare services and procedures make a difference to a patients’ health status and quality of life. They also provide insight on the effectiveness of care from the patient’s perspective. With advancements in cancer care, PROs are increasingly recognised as providing valuable and essential information in achieving health system goals and outcomes. Incorporation of PROMs into clinical care not only enhances patient–provider communication and shared decision-making, but also can inform health services programming, planning and policies. Prognostic prediction models play an important role in cancer care. Innumerable decisions made by patients, family members, oncologists, surgeons and other care providers depend on assessing the probability of future events. Over the recent years, significant efforts have been made to improve and formalise prediction models based on statistical methods to provide a quantitative estimate of the probability of a specific event for an individual patient. Such prediction models have a goal to improve information sharing and shared decision-making with cancer patients, while supporting the synthesis of complex information for the care plan for individual patients. To date, there is a wealth of literature and prediction models for cancer patients of all diagnoses, with a focus on survival and recurrence. There is, however, a lack of knowledge regarding if and how PROMs are used in prediction models for cancer patients. Therefore, the aim of this review is to identify and describe clinical prediction tools developed for PROs and quality of life in adult patients diagnosed with GI cancer, examine the outcomes and predictors used within these prediction tools, and assess how clinical usability, applicability and equity have been evaluated in relation to these tools.
A scoping review methodology will be used to explore the literature describing clinical prediction tools for PROs in patients with GI cancer. This review will follow the guidelines outlined in the JBI Manual for Evidence Synthesis and the expanded Arksey and O'Malley framework for scoping reviews. Reporting will align with the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). The protocol covers the review objectives; eligibility criteria (population, context and study details); the search strategy and information sources; study selection; data extraction; data analysis; patient and public involvement; and ethics and dissemination. Objectives The scoping review protocol will answer the following research questions: (1) What clinical prediction tools have been developed for PROs and quality of life in adult patients diagnosed with GI cancer, and what are their characteristics? (2) Which outcomes and predictors are used by these prediction tools? (3) How have clinical usability, applicability and equity been assessed in relation to these prediction tools?
Studies will be eligible for inclusion if they include adults ≥18 years with a primary GI cancer diagnosis and if they developed or validated a clinical prediction tool for PROs or quality of life ( ). A PRO is defined here as any measure of a patient's health status that is reported directly by the patient or, where necessary, by a proxy. To ensure the outcomes reflect the patient's perspective, only studies using self-reported PROMs will be included, where information is reported directly by the patient or, if necessary, by a relative or proxy. This does not include outcomes solely assessed or interpreted by a physician or clinician. Additionally, studies must provide information on the specific PROM used, such as a survey or questionnaire, along with details on how the outcome was assessed to confirm whether it aligns with the definition of self-reported PROs.
The population of interest includes adults (≥18 years) diagnosed with GI cancer, defined as a solid malignancy of the oesophagus, stomach, small intestine, colon, rectum, pancreas or biliary system.
The context is to capture literature developing, validating and/or updating a clinical prediction tool for any PROs or quality of life measures for patients diagnosed with GI cancer. For this study, a PRO is any information reported by a patient or their proxy about a patient’s health. It includes any measure of symptoms, their satisfaction with care and how a disease or treatment affects their physical, mental, emotional, spiritual and social well-being.
A broad list of study designs will be included. Any study that includes the development or validation of a clinical prediction tool will be included, regardless of statistical methods and patient recruitment strategies. Grey literature will be excluded. All geographical regions will be included. However, for the purposes of this review, we will only include studies published in English. The search strategy was first developed on 17 May 2024, and will be updated periodically to ensure the review remains current. The planned completion date for the study is 1 July 2025.
In consultation with a senior Health Sciences librarian at the University of Toronto, we developed a search strategy that included keywords and medical subject headings for GI cancer, PROs and prediction tools. Each set of search terms was modified for the specific search engine. The search strategy for Ovid Medline is illustrated below in . To ensure thoroughness, we will systematically search Ovid Medline, Embase and CINAHL, and we plan to hand-search key article reference lists and reviews for additional relevant citations ( for full search strategies). No age filters will be applied due to potential limitations in sensitivity. Language restrictions will be applied solely during the selection stage, rather than at the literature search phase, to address challenges related to accessibility and the accuracy of translating non-English PROMs. Grey literature will be excluded. A previous systematic review evaluating trends in PROMs within healthcare found that publication of PROM literature began emerging in the 1990s. As such, a temporal limit of 1990 was set for the search strategy. We will use the library holdings of the University of Toronto, University of Western Ontario Libraries and Queen's University, as well as three hospital network libraries (Sunnybrook Health Sciences, University Health Network and Unity Health Toronto), to obtain full texts. References will be managed using Covidence, a systematic review software.
Box 1 Search strategy: Ovid MEDLINE
Database: Ovid MEDLINE(R) ALL <1946 to May 17, 2024>
Search strategy:
1. Gastrointestinal Neoplasms/ or Digestive System Neoplasms/ (24374)
2. (Gastrointestinal adj2 (cancer* or tumor* or tumour* or neoplasm* or malignancy or carcinoma)).tw,kf,ti,ab. (30738)
3. Esophageal Neoplasms/ or Stomach Neoplasms/ or Pancreatic Neoplasms/ or Intestinal Neoplasms/ or Colorectal Neoplasms/ or rectal neoplasms/ or anus neoplasms/ or Biliary Tract Neoplasms/ or liver neoplasms/ or Colonic Neoplasms/ (650057)
4. (((digestive or esophageal or esophagus or gastric or stomach or pancreas or pancreatic or intestinal or intestine* or colon* or colorectal or bowel or liver or hepatic or rectal or rectum or biliary or cholangio* or hepatocellular) adj3 (adenocarcinoma or carcinoma or cancer* or tumor* or tumour* or neoplasm* or malignancy)) or (hepatoma or hepatocarcinoma or cholangiocarcinoma)).tw,kf,ti,ab. (760505)
5. 1 or 2 or 3 or 4 (944472)
6. models, statistical/ or likelihood functions/ or linear models/ or logistic models/ or nomograms/ or proportional hazards models/ (441472)
7. clinical decision rules/ (960)
8. (((statistical or linear or logistic or hazard*) adj2 model*) or (nomogram* or (table* adj2 partin)) or (likelihood adj2 function*)).tw,kf,ti,ab. (349427)
9. ((rule* adj3 clinical adj3 (decision* or predict*)) or (predict* adj3 tool)).tw,kf,ti,ab. (21831)
10. 6 or 7 or 8 or 9 (723974)
11. Patient Outcome Assessment/ or Patient Reported Outcome Measures/ (21258)
12. health care surveys/ or "quality of life"/ or Diagnostic Self Evaluation/ or "Surveys and Questionnaires"/ (829143)
13. Outcome Assessment, Health Care/ or Symptom Assessment/ (90346)
14. (((patient reported or self reported or self-reported or patient or self) adj3 (outcome* or symptom* or survey or health or measure* or assessment* or experience or perspective)) or ((patient outcome* or symptom) adj3 (measure* or assessment*))).tw,kf,ti,ab. (413806)
15. (HRQoL or health related quality of life or (quality adj2 life) or (wellbeing or nausea or pain or depression or anxiety or fatigue or shortness of breath or appetite or drowsiness or tiredness or bowel function or quality-of-life)).tw,kf,ti,ab. (1868962)
16. (PRO or PROM or QOL or HRQoL or HRQL or ePROM or e-PROM).tw,kf,ti,ab. (353046)
17. 11 or 12 or 13 or 14 or 15 or 16 (2935837)
18. 5 and 10 and 17 (3380)
19. limit 18 to yr="1990-current" (3378)
Study selection will follow the guidelines set out by the JBI manual for Evidence Synthesis and the expanded Arksey and O’Malley framework. A pilot phase for testing the eligibility criteria will be conducted using a random sample of 50 titles and abstracts, evaluated independently by two reviewers. The reviewers will then compare their selections, resolve any discrepancies through discussion and adjust the eligibility criteria as needed. Study selection will officially commence once an inter-rater reliability of at least 75% is reached. We will follow a two-stage study selection process. In the first stage, titles and abstracts will be screened independently and in duplicate (ie, two reviewers). In the second stage, full texts of any potentially relevant citations for inclusion will also be screened independently and in duplicate. At all phases of the review, disagreement will be resolved by consensus and adjudicated by a third reviewer. The inclusion and exclusion criteria will be reviewed and may be modified following the pilot testing phase and iteratively throughout the search during research team meetings.
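Percent agreement between the two screeners over the pilot sample can be computed directly, and Cohen's kappa is often reported alongside it. The snippet below is a generic illustration of that calculation with invented decisions; it does not reflect actual screening data from this review.

```python
def percent_agreement(decisions_a, decisions_b):
    """Proportion of records on which two reviewers made the same include/exclude decision."""
    agree = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return agree / len(decisions_a)

# Hypothetical include (1) / exclude (0) decisions for a 10-record subset of the pilot
reviewer_1 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
reviewer_2 = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]
print(f"{percent_agreement(reviewer_1, reviewer_2):.0%} agreement")  # 80% in this example
```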
Data extraction will be performed using standardised extraction tables designed by the research team, informed by the JBI Manual for Evidence Synthesis, Arksey and O’Malley’s framework and the TRIPOD statement for prediction models. These tables will be aligned with the research objectives to ensure the collection of relevant information, including study characteristics (eg, design, setting, sample size), PROs assessed, predictors included, statistical methods used and the performance of the prediction models ( ). A pilot data extraction will be conducted on the first 10 studies by two independent reviewers to test and refine the extraction tables. Any discrepancies identified during the pilot phase will be resolved through discussion, with inputs from a third reviewer if necessary. Adjustments to the extraction tables will be made iteratively throughout the review to accommodate unanticipated data or insights that emerge. The final extracted data will be reviewed by the research team to ensure consistency and completeness.
The extracted data will be analysed descriptively, with a focus on mapping trends, identifying gaps and summarising the characteristics of prediction tools for PROs and quality of life in GI cancer. The following steps will guide the data analysis:
1. Descriptive synthesis: a narrative summary will describe the study characteristics, prediction models and PROs, helping to identify patterns in the use of PROMs across studies.
2. Categorisation of PROs: PROs will be grouped into broad domains (eg, physical, mental, emotional and social well-being) to explore which domains are most frequently predicted.
3. Analysis of predictors: the predictors included in the models will be compared across studies to identify common factors and explore their clinical relevance.
4. Mapping of statistical methods: identification and mapping of the statistical techniques used for model development.
5. Examination of model performance measures: review of model performance measures, as well as the methods employed for internal and external validation (where applicable).
6. Equity considerations: we will assess whether the populations used to develop and validate each model represent the broader GI cancer population, evaluate whether equity was incorporated into model methods and identify factors allowing stratification by subgroups (eg, socioeconomic status, race, ethnicity).
7. Trend analysis: trends in the development and validation of prediction models over time will be explored, including changes in statistical methodologies, the use of PROMs and any equity considerations.
The findings will be synthesised to provide an overview of key insights on the use of PROMs in prediction models, identify gaps in the current literature and suggest directions for future research. The results will be summarised using tables, figures and a narrative format organised by cancer type, allowing more targeted insights. Subgroup analysis by cancer type will be performed as necessary. Where applicable, we will reference the TRIPOD guidelines to ensure our review thoroughly addresses essential aspects of prediction models, facilitating the synthesis of relevant literature. While TRIPOD is not specifically designed for scoping reviews, its components can be adapted to enhance the rigour and transparency of our analysis. By using TRIPOD as a guide, we can ensure that our scoping review effectively covers critical elements of prediction models, thereby identifying important areas for future exploration.
This study recognises the importance of involving patients and stakeholders to ensure the research is relevant, meaningful and aligned with real-world needs. Engaging individuals with lived experience brings unique perspectives that enrich the study and ensure that outcomes resonate with those affected by GI cancer. Following best practices in patient and public involvement, we actively integrated feedback from patient partners, healthcare providers, and decision-makers at key stages of this review. Two patient partners with lived experience of GI cancer (EK and TT) are core members of the research team. Their involvement began at the project’s inception, ensuring that the study objectives and design align with the needs and concerns of the patients. They will continue to contribute throughout the entire study, including data analysis, interpretation of findings and dissemination of results, ensuring clinical relevance and patient-centredness. We will also conduct consultations with healthcare providers and other stakeholders to validate our preliminary findings, identify any gaps and gather feedback to refine the results. These engagements will help ensure that the review outputs are relevant to clinical practice and can guide future research initiatives.
As this study involves a review of existing literature, formal ethical approval is not required. This review is the first to explore prediction models for PROs in GI cancer. The findings will provide valuable insights into existing prediction tools and serve as a foundation for future model development, guiding the creation of clinically relevant tools that integrate patient-centred outcomes. The results will be disseminated through a peer-reviewed publication and presented at relevant academic and clinical conferences. We will also engage with patient advocacy groups, healthcare professionals and decision-makers to share key findings, ensuring they inform both practice and policy. Patient partners involved in this study will co-author publications and participate in presentations, ensuring the patient perspective is reflected in all communications. In addition, we will use non-academic channels, including newsletters and social media, to reach patients, caregivers and the general public. This multifaceted dissemination strategy aims to maximise the impact of our findings, promoting the integration of PROs into future prediction models and supporting shared decision-making in GI cancer care.
Supplementary material: online supplemental file 1 (10.1136/bmjopen-2024-097966).
|
Heterogeneity of T cells regulates tumor immunity mediated by | d6abaf62-f22a-4168-84c6-4ada944cc89a | 11954285 | Cytology[mh] | Helicobacter pylori ( H. pylori ) has been classified as a Group I carcinogenic pathogen because its infection is epidemically and etiological associated with the oncogenesis of gastric cancer (GC) . Hence, it is well accepted that eliminating H. pylori infection is an effective way to prevent GC. However, GC develops through a multistep process, whether H. pylori play a carcinogenic role throughout this progression is controversial. Masanori summarized a hit-and-run mechanism that H. pylori is not required for the maintenance of a neoplastic phenotype in established cancer cells in which pro-oncogenic actions are successively taken over by a series of genetic and/or epigenetic alterations during long-standing H. pylori infection . In addition, it is noteworthy that eradication of H. pylori appears to be ineffective for the prevention of GC in two trials that included subjects with precancerous lesions, including low to high-grade dysplasia at baseline . One reason for this may be that the elevated pH caused by H. pylori facilitates the intrusion of oral or intestinal bacteria into the stomach, which may additionally contribute to the development of mucosal lesions , while the damaged environment becomes unsuitable for H. pylori survival, especially in GC . The gut microbiota and the immune system have coevolved and affect each other directly via metabolic crosstalk . For example, Beura and colleagues discovered that wild mice, pet store mice, and adult humans have a highly differentiated memory CD8 + T-cell compartment in their blood, whereas “clean” laboratory mice and human neonates do not. Especially, multiple studies have provided strong evidence that immunotherapy may be an effective treatment for cancer if tumor microenvironment (TME) components are properly understood and judiciously targeted . Phenotypic differences in T cell infiltration within the TME can be categorized into three types: “immune-inflamed”, “immune-excluded”, “immune-desert”, depending on CD8 + T cells infiltration status . Recently, Montalban-Arques et al. confirmed that commensal Clostridiales strains could enhance the immunity and transform the TME from an “immune-desert” phenotype to an “immune-inflamed” phenotype by increasing the frequencies and activity of tumor-infiltrating IFN-γ + CD8 + T cells, which ultimately improved the prognosis of colorectal cancer patients. In this context, it has been shown that the efficacy of cancer therapies depends on the composition of the microbiome . In GC, there is no consensus regarding the foe or friend role of H. pylori infection in patients’ prognosis. Georgios et al. have reported that infection with H. pylori is associated with higher relapse-free survival and overall survival in patients who have curative resection without residual, local, or metastasized tumors. They speculated that cancer immunity might be suppressed in H. pylori negative patients, conversely, both innate and adaptive immunity have been significantly boosted due to H. pylori infection. Recently, a retrospective study has demonstrated that H. pylori infection prolongs survival in GC patients within the PD-L1-treated group . However, the detailed mechanisms remain unknown. Here, we aim to identify whether H. 
pylori infection contributes to the "immune-inflamed" TME, which immune cell components are responsible for building up such a TME, and how these differential immune cells influence the prognosis of GC patients. After performing a prognostic analysis of GC in a Chinese population, we isolated intratumoral CD45 + immune cells from human GC tissues with or without H. pylori infection and addressed these questions by analyzing H. pylori -specific immune cells through single-cell RNA sequencing. Our work enhances the understanding of heterogeneity between patients with different H. pylori infection status and provides a basis for individualized treatment of GC.
Serology assay for Helicobacter pylori ( H. pylori ) infection H. pylori infection status was determined using an H. pylori IgG ELISA kit (IBL International, Hamburg, Germany) according to the manufacturer's instructions. Briefly, plasma H. pylori IgG antibody was measured using 5 µL of plasma, and each sample was quantified using a calibration curve. Positivity was determined when the H. pylori IgG antibody titer of a sample was > 10 U/ml. Individuals were regarded as negative for H. pylori if they had no history of H. pylori infection on questioning and if they were also negative in the H. pylori IgG ELISA test. Sensitivity and specificity for the H. pylori IgG ELISA, as provided by the manufacturer, were 96.0% and 96.0%, respectively. For further details regarding the materials and methods, please refer to the supplementary information.
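In practice, ELISA titers are interpolated from the calibration curve and then dichotomized at the kit's positivity threshold. The trivial classification step at the > 10 U/mL cut-off is sketched below with invented titers; the function name and example values are illustrative only.

```python
def classify_h_pylori(igg_titer_u_per_ml, cutoff=10.0):
    """Dichotomize an H. pylori IgG ELISA titer at the kit's positivity threshold."""
    return "positive" if igg_titer_u_per_ml > cutoff else "negative"

# Hypothetical interpolated titers for three plasma samples
for titer in (3.2, 10.0, 47.5):
    print(titer, classify_h_pylori(titer))
```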
Study population Between 2006 and 2016, 1,198 consecutive patients with GC were recruited from the First Affiliated Hospital of Nanjing Medical University, Northern Jiangsu People's Hospital, and the Affiliated Hospital of Yangzhou University. All cases were newly diagnosed, with no prior treatment, including radiotherapy or chemotherapy. Each case was histopathologically or cytologically confirmed to have GC by at least two local pathologists. After patients provided signed informed consent, we collected 5 mL of venous blood and conducted a face-to-face interview concerning demographic data (e.g., age and sex) and lifestyle information (e.g., smoking and drinking) at the time of recruitment, when available, as previously described. Furthermore, for patients who underwent curative resection, an independent reviewer assessed the following clinical information post-surgery: tumor size and localization; depth of tumor invasion; lymph-node metastasis; histological grading; and tumor type according to the Laurén classification. The clinical stage was classified by the anatomical nodal site according to the 6th edition of the tumor-node-metastasis (TNM) classification. All patients were followed up every 6 months from the time of enrollment until death or the last follow-up (June 2019). The follow-up data included treatment information (chemotherapy or radiotherapy) and survival status (alive or dead, time of death, and cause of death). The latest medical records from their treating physicians were checked as a supplement, and additional follow-up data were also obtained by contacting family physicians.
Definition of H. pylori infection status and the inclusion criteria Based on the dataset, 207 patients were excluded from the study because they had metastatic disease or did not undergo surgery ( n = 150) or had no available follow-up data ( n = 57). To ensure the accuracy of H. pylori infection status, patients without clinical records of urease or breath test results, which indicate present infection with H. pylori , were further excluded ( n = 349). After combining these with the results of the serology assay for H. pylori infection, which indicates past infection with H. pylori , 76 patients were excluded due to a lack of blood samples ( n = 35) or hemolysis ( n = 41). Then, patients with discordant results between urease or breath test records and serology results were also excluded ( n = 78). Only patients with positive results on both the urease or breath test records and the serology tests were defined as H. pylori positive, while patients with negative results on both the urease or breath test records and the serology tests were defined as H. pylori negative. Finally, a total of 488 patients who had both available follow-up and clinical information were enrolled, of whom 264 patients were H. pylori positive and 224 were negative for H. pylori (Supplementary Fig. ). For single-cell RNA sequencing, the H. pylori infection status of gastric cancer tissues was determined intraoperatively using urease testing. Single-cell sequencing was subsequently conducted on samples obtained directly from the same region of the tested tissue, thereby ensuring a precise representation of the infection status. Finally, three H. pylori- positive and three H. pylori- negative individuals were included for single-cell RNA sequencing analysis (Supplementary Table ).
H. pylori infection was associated with better survival in Chinese gastric cancer patients To examine the effect of H. pylori infection on the survival of GC, 488 patients were enrolled in the study and divided into two groups according to the H. pylori infection status (Supplementary Table ). Among these patients, there were 372 men and 116 women, with a median age of 63 years. Of these, 224 were negative for H. pylori (hereafter referred to as " H. pylori -"), while the remaining 264 patients were positive for H. pylori (hereafter referred to as " H. pylori +"). Additionally, the H. pylori + group had a higher proportion of smokers and drinkers, whereas the proportion of patients with stage IV was relatively lower than that in the H. pylori - group. As shown in Table , the median survival time was 36.33 months, and 190 patients (38.93%) died of GC in our cohort. The median survival time for H. pylori + patients was 142.3 months, compared to 82.1 months for H. pylori - patients (Fig. A, HR = 0.64, 95% CI = 0.48–0.85, P = 2.35 × 10 − 3 ). Additionally, we found that age, gender, clinical stage, and radiotherapy were significantly associated with the survival of our subjects with GC (Table ; Fig. B and Supplementary Fig. ). In multivariate analyses, we revealed that positive H. pylori status was significantly associated with a better prognosis for GC after adjusting for age, gender, clinical stage, and radiotherapy status. Furthermore, age and gender were also identified as prognostic factors for overall survival (Table ). When adjusted for all variables in our study, only clinical stage and H. pylori status emerged as independent prognostic factors for survival. In this analysis, patients with positive H. pylori status had a significantly longer survival time compared to those with negative H. pylori status (Table , HR = 0.74, 95% CI = 0.55–0.99, P = 0.045). Notably, for patients with early and intermediate stage GC (i.e., American Joint Committee on Cancer (AJCC) I, II and III), we observed a significant difference in overall survival between H. pylori + patients and H. pylori - patients ( P = 0.024). However, no such association was found for patients with advanced cancer (i.e., AJCC IV, P = 0.979, Fig. C). Similar findings were noted when stratifying patients by other classifications for early and intermediate versus advanced disease. For instance, H. pylori + patients exhibited significantly higher survival rates than H. pylori - patients only among those with tumor depth not invading the visceral peritoneum or adjacent structures (i.e., T1, T2 and T3, P = 0.030, Supplementary Fig. A) and for those with nodal involvement of less than 2 (i.e., N0 and N1, P = 4.43 × 10 − 3 , Supplementary Fig. B). Therefore, we identified H. pylori as an independent, beneficial prognostic factor, with the effect being most pronounced in patients with early-stage cancer.
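Kaplan-Meier estimation and multivariable Cox regression of the kind reported above can be outlined with the lifelines package. The data frame below is a made-up stand-in for the cohort; the column names and values are assumptions for illustration only and do not correspond to the study's actual variables or results.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Toy stand-in for the cohort; column names and values are invented
df = pd.DataFrame({
    "months":   [12.0, 36.3, 80.1, 142.0, 60.5, 24.8, 95.2, 48.0],
    "death":    [1, 1, 0, 1, 1, 1, 0, 1],
    "hp_pos":   [0, 0, 1, 1, 0, 1, 1, 0],   # H. pylori status (1 = positive)
    "age":      [70, 63, 58, 55, 66, 61, 59, 72],
    "stage_iv": [1, 0, 1, 0, 1, 0, 0, 0],
})

km = KaplanMeierFitter()
km.fit(df["months"], event_observed=df["death"], label="all patients")  # Kaplan-Meier estimate

cox = CoxPHFitter()
cox.fit(df, duration_col="months", event_col="death")  # hazard ratios adjusted for the other columns
print(cox.summary[["exp(coef)", "p"]])                  # exp(coef) is the hazard ratio
```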
H. pylori affects gastric cancer mainly by targeting CD8 + T cells within the tumor microenvironment It has been suggested that tumor-specific immune responses are upregulated in GC patients positive for H. pylori . To evaluate the immunological mechanism of H. pylori in the GC microenvironment, we first explored the content of tumor-infiltrating lymphocytes (TILs) and found that CD3 + T cells were the dominant TIL population in both the H. pylori - and H. pylori + groups (Supplementary Fig. A). Given that CD8 + T cells play a pivotal role in clearing intracellular pathogens and tumors, we next examined the frequency of naive and memory subsets of CD3 + CD8 + T cells based on CD45RA and CCR7 expression. We found that effector memory T cells (T EM , CD45RA – CCR7 – ) were the most prevalent subset, followed by central memory T cells (T CM , CD45RA – CCR7 + ). Furthermore, the frequency of T EM in H. pylori + GC was higher compared to H. pylori - GC (Supplementary Fig. B), indicating that H. pylori + GC generates a stronger immune response than H. pylori - GC (Supplementary Fig. C).
H. pylori infection status To elucidate the complexity of tumor-infiltrating T cells in GC in an unbiased manner, we conducted 3’ droplet-based scRNA-seq (BD Rhapsody) on 18,717 flow-sorted CD3 + CD45 + T cells freshly isolated from three H. pylori - and three H. pylori + GC patients (Fig. A and Supplementary Table ). T cell clusters were visualized using t-distributed stochastic neighbor embedding (t-SNE) following preprocessing, normalization and batch correction (Supplementary Fig. ). Overall, we identified ten unique clusters based on their gene expression profiles, which included six distinct CD8 + T cell clusters and four distinct CD4 + T cell clusters (Supplementary Fig. A-C). Each of these ten clusters harbored differentially expressed genes (DEGs) representing distinct cell types or subtypes (Supplementary Tables and Supplementary Fig. D). To address the intrinsic heterogeneity of T cells, we applied unsupervised re-clustering based on t-SNE and identified ten CD4 + and twelve CD8 + clusters (Fig. B, Supplementary Fig. ). We further examined the expression and distribution of canonical T cell markers among these clusters (Fig. C, Supplementary Fig. ). Among the ten CD4 + T cell clusters (Supplementary Table ), we identified the CD4-C2-FOXP3, CD4-C3-TNFRSF4, CD4-C8-COL5A3, and CD4-C9-IFIT1 clusters, which represented regulatory T cells (Tregs) with high expression levels of FOXP3 , IKZF2 and IL2RA , as well as co-inhibitory molecules such as TIGIT and CTLA4 (Fig. C, Supplementary Fig. A). Cells in the CD4-C5-CXCL13 and CD4-C6-TOX2 clusters exhibited high expression levels of PDCD1 and CXCL13 (Fig. C, Supplementary Fig. A), suggesting that they represent follicular T helper cells involved in the formation of ectopic lymphoid-like structures at inflammatory sites . Two clusters of CD4 + T cells (CD4-C1-CCR7 and CD4-C7-FBLN7) were characterized by a gene signature that included CCR7 , LEF1 , TCF7 , and SELL (Fig. C, Supplementary Fig. A), which are typical features of naive T cells. Notably, T cells from the CD4-C0-CCL5 and CD4-C4-SLC4A10 clusters exhibited high expression levels of CD69 , which has been reported to be elevated in activated MAIT cells of patients with COVID-19 . Given that MAIT cells can display effector functions involved in the defense against infectious pathogens , we found that the proportion of these two clusters were relatively increased in H. pylori + patients (Supplementary Fig. B), suggesting they may be activated by H. pylori . When focusing on the different CD8A + clusters (Supplementary Table ), we observed that the CD8-C7-KLF2 cluster was characterized by a gene signature associated with naive T cells, including CCR7 , LEF1 , TCF7 , and SELL (Fig. C, Supplementary Fig. C). Among the identified CD8 + T EM , characterized by low expression of CCR7, we identified several distinct clusters: CD8-C0-CXCL13, CD8-C5-XCL2, CD8-C8-HSPA1B, and CD8-C10-IFIT1, which represent activated-state CD8 + T EM expressing effector molecules such as IFNG , CCL4 , and CCL5 (Fig. C, Supplementary Fig. C). Additionally, the CD8-C6-FGFBP2 cluster exhibited features of natural killer (NK) cells, expressing genes such as NKG7 , FGFBP2 , and FCGR3A ; we refer to these cells as ‘CD8 + T EM NK-like’ (Fig. C, Supplementary Fig. C). Interestingly, CD8-C0-CXCL13 and CD8-C10-IFIT1 also displayed variable expression of exhaustion markers such as LAG3 , HAVCR2 , and PDCD1 (Fig. C, Supplementary Fig. C), indicating a potential activation-coupled exhaustion program possibly induced by both H. 
pylori infection and tumor cells. Furthermore, cytotoxic CD8 + T cells (CD8 + T CYTOTOXIC ) from the CD8-C1-ZNF683 and CD8-C9-KIR2DL3 clusters showed high expression levels of cytotoxicity-related genes such as GNLY , GZMB , PRF1 , and tissue residency gene ZNF683 (HOBIT) (Fig. C, Supplementary Fig. C). We also noted that the CD8-C4-AREG cluster exhibited expression of molecules suggestive of tissue-resident memory T cells (T RM ) with high expression of ITGAE ( CD103 ), ITGA1 , and CD69 , while displaying low expression of SELL , S1PR1 , and KLF2 (Fig. C, Supplementary Fig. C), akin to T RM cells described in humans and mice . The CD8-C11-TRDC cluster was assigned to γδ-T cells, expressing TRDC , TRGC1 , and cytotoxicity-associated genes, including GNLY , while the CD8-C2- KLRB1 cluster was notable for high expression of KLRB1 , recognized as hallmarks of mucosal-associated invariant T cells (Fig. C, Supplementary Fig. C) . Unlike CD4 + T cells (Supplementary Fig. B), the clusters within CD8 + T cells exhibited distinct distributions. Notably, the CD8-C0-CXCL13 cluster predominantly comprised cells from H. pylori + patients, while the CD8-C4-AREG and CD8-C7-KLF2 clusters were almost exclusively populated by cells from H. pylori - patients (Supplementary Fig. D). We further analyzed the developmental trajectories of these differentially distributed cells using the Monocle 2 algorithm to establish a pseudotemporal ordering reflective of cell lineage. Given that the CD8-C7-KLF2 cluster is associated with naive T cells, we observed two major developmental trajectories (Fig. D), with T RM -like and T EM activated-state cells positioned at opposite ends of the pseudotemporal path, corroborating their distinct gene expression profiles. Additionally, we calculated cytotoxicity and exhaustion scores for each cell cluster. Along the trajectory, most CD8 + T cells exhibited progressively increasing cytotoxic activity, accompanied by a gradual rise in exhaustion levels (Fig. E). Notably, the cytotoxic activity score was downregulated in trajectory 1 toward the CD8-C4-AREG cluster, while it increased in trajectory 2 for both the CD8-C0-CXCL13 and CD8-C10-IFIT1 T cells (Fig. F), suggesting that these cells retained their ability for active division in the TME. Thus, single-cell analysis reveals that the CD8 + population is heterogeneous with distinct subsets.
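The excerpt does not state how the cytotoxicity and exhaustion scores were computed. The sketch below illustrates one common approach, averaging log-normalized expression of a small marker set per cell and then per cluster, using effector and exhaustion genes named in the text; the matrix, gene list and cluster labels are toy placeholders, and published pipelines (e.g., Seurat's AddModuleScore or Scanpy's score_genes) additionally subtract a background gene bin.

```python
# Minimal sketch, not the authors' pipeline: scoring each cell for "cytotoxicity" and
# "exhaustion" as the mean normalized expression of marker sets taken from genes named
# in the text, then averaging the per-cell scores within each cluster.
import numpy as np

CYTOTOXICITY = ["GNLY", "GZMB", "PRF1", "IFNG"]
EXHAUSTION   = ["PDCD1", "LAG3", "HAVCR2"]

def signature_score(expr: np.ndarray, genes: list, signature: list) -> np.ndarray:
    """expr: cells x genes matrix of log-normalized expression; returns one score per cell."""
    idx = [genes.index(g) for g in signature if g in genes]
    return expr[:, idx].mean(axis=1)

def cluster_means(scores: np.ndarray, cluster_labels: np.ndarray) -> dict:
    """Average the per-cell scores within each cluster label."""
    return {c: float(scores[cluster_labels == c].mean()) for c in np.unique(cluster_labels)}

# Toy example with random data standing in for the real expression matrix.
rng = np.random.default_rng(0)
genes = CYTOTOXICITY + EXHAUSTION + ["ACTB"]
expr = rng.random((100, len(genes)))
clusters = rng.choice(["CD8-C0-CXCL13", "CD8-C4-AREG"], size=100)
cyto = signature_score(expr, genes, CYTOTOXICITY)
print(cluster_means(cyto, clusters))
```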
H. pylori infection promotes intratumoral immune activation with enhanced interaction between CD8 + T cells and epithelium
Focusing on the CD8-C0-CXCL13 cluster, we observed that, in addition to the high expression of CXCL13 , this cluster specifically expressed genes such as MYO7A , TOX , and PHLDA1 (Fig. A). We subsequently identified differentially expressed genes (DEGs) between the CD8-C0-CXCL13 group and the other groups. A total of 192 up-regulated and 118 down-regulated genes were detected in CD8-C0-CXCL13. Gene ontology (GO) functional enrichment analysis revealed that the up-regulated genes in CD8-C0-CXCL13 T cells were enriched in signaling pathways related to T cell activation, regulation of lymphocyte activation, and regulation of T cell activation (Supplementary Fig. B). Additionally, KEGG analysis identified enrichment in crucial gene sets associated with anti-pathogen responses, including Th1 and Th2 cell differentiation, Th17 cell differentiation, as well as antigen processing and presentation (Supplementary Fig. C), which was potentially associated with H. pylori infection. To further elucidate the molecular characteristics distinguishing CD8-C0-CXCL13 T cells in the context of H. pylori infection, gene set enrichment analysis (GSEA) was conducted. T cells from H. pylori + patients were found to be associated with cell activation involved in immune response and regulation of response to cytokine stimulus, suggesting a potential protective role against local H. pylori infections (Fig. B). We then investigated the expression levels of various genes based on H. pylori infection status. Notably, we observed that cytotoxicity-associated genes, such as IFNG and GZMB , were upregulated in H. pylori + patients, while the expression of exhaustion marker PDCD1 was downregulated (Supplementary Fig. D). Flow cytometry data corroborated these findings, revealing that the levels of IFN-γ and Granzyme B in CD8 + CXCL13 + T cells were significantly higher in H. pylori + patients compared to H. pylori - patients, whereas PD-1 expression exhibited the opposite trend (Fig. C). As expected, higher levels of IFNG and GZMB correlated with better prognosis in GC patients (Supplementary Fig. E, F). It is well-established that gastric epithelial cells from H. pylori -infected patients exhibit elevated levels of TNF receptors, which contribute to the activation of an adaptive immune response against invading infection. Utilizing dataset GSE134520, we evaluated TNF-dependent T cell functions in CD8 + T cells and epithelium. We found that the interaction between TNF and TNFRSF1A, as well as TNF and DAG1, was enhanced in the stomach with H. pylori infection (Fig. D), suggesting that such molecular interactions play a crucial role in generating an immune-activated TME in response to H. pylori infection. Notably, TNFRSF1A , rather than DAG1 , exhibited a positive correlation with the CD8 + T cell signature and T EM signature, while demonstrating a negative correlation with naïve T cells in the TCGA STAD cohort, as assessed using the TIMER2.0 webserver (Supplementary Fig. G). This indicates that TNFRSF1A may play an important role in promoting antitumor immunity against GC. Importantly, analysis of bulk RNA sequencing data from normal gastric mucosa tissues showed that the expression of TNFRSF1A was significantly increased in samples from H. pylori -infected patients, whereas DAG1 levels remained unchanged (Fig. E). Collectively, our results suggest that H. 
pylori infection may recruit T cells into the tumor microenvironment through TNFRSF1A - TNF interaction, thereby enhancing immune activity. As our data indicate that the CD8-C0-CXCL13 T cell is a highly prevalent effector subset within the GC microenvironment, we hypothesized that the single-cell-derived gene signature from the CD8-C0-CXCL13 cluster (Supplementary Table ) would offer valuable prognostic information. Analysis of the available gene expression data revealed that the CD8-C0-CXCL13 signature was significantly associated with improved overall survival (Fig. A). Additionally, multicolor immunofluorescence staining demonstrated that both CXCL13 and CD103 were expressed in CD8 + T cells in the stroma and tumor tissues of H. pylori -infected samples (Fig. B), supporting the presence of these activated cells in GC.
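As a hedged illustration of the kind of survival stratification reported for the CD8-C0-CXCL13 signature, the sketch below splits a hypothetical cohort at the median signature score and compares overall survival with a Kaplan-Meier/log-rank analysis using the Python lifelines package. The median split, column names and toy data are our assumptions and do not reproduce the authors' actual workflow.

```python
# Hedged sketch: stratify patients by a per-patient signature score and compare survival.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def stratified_survival(df: pd.DataFrame):
    """df columns: 'score' (signature score), 'time' (months), 'event' (1=death, 0=censored)."""
    high = df["score"] >= df["score"].median()          # median split is our choice
    res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                       event_observed_A=df.loc[high, "event"],
                       event_observed_B=df.loc[~high, "event"])
    km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
    km_high.fit(df.loc[high, "time"], df.loc[high, "event"], label="high signature")
    km_low.fit(df.loc[~high, "time"], df.loc[~high, "event"], label="low signature")
    return res.p_value, km_high.median_survival_time_, km_low.median_survival_time_

# Toy data only; real input would be the cohort's expression-derived scores and follow-up.
rng = np.random.default_rng(1)
toy = pd.DataFrame({"score": rng.random(80),
                    "time": rng.exponential(30, 80).round(1),
                    "event": rng.integers(0, 2, 80)})
print(stratified_survival(toy))
```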
T RM cells marked by PTGER2 worsen prognosis in H. pylori -negative gastric cancer
A previous study revealed that virus- or other pathogen-specific (bystander) CD8 + T RM -like cells in tumors can be re-activated to induce antitumor immunity . However, we identified 151 down-regulated genes in CD8-C4-AREG T cells (p.adj ≤ 0.01 and |log 2 FoldChange| ≥ 0.25) (Supplementary Fig. A and Supplementary Table ), which were enriched in signaling pathways such as T cell activation, response to IFN-γ, and antigen processing and presentation (Supplementary Fig. B), indicating an immunosuppressed state in CD8-C4-AREG T RM cells. Although CD8-C4-AREG T RM cells shared similar gene expression profiles between H. pylori - and H. pylori + (Fig. A), GSEA analysis revealed that T cells from H. pylori + were associated with the inflammatory response and cytokine-mediated signaling pathways (Fig. B, Supplementary Fig. C). In contrast, the response to steroid hormone pathway was enriched in T cells from H. pylori - (Supplementary Fig. C), with increased expression levels of AREG and PTGER2 (Fig. C). Further in vitro validation experiments also confirmed that AREG and PTGER2 were down-regulated in the CD8 + T cells pulsed with H. pylori (Fig. D). Similarly, immunohistochemistry staining further confirmed that AREG and PTGER2 were both inactivated by H. pylori infection in stroma and tumor tissues (Fig. E). Prognostic data from TCGA and GSE15459 show that PTGER2 can stratify patients with high CD8 expression, with high PTGER2 expression significantly associated with worse prognosis (Fig. F and Supplementary Fig. D). These data suggest that T RM cells with high expression of PTGER2 may serve as a therapeutic target to improve clinical outcomes in GC.
Overall, we conducted a prognostic analysis of GC patients with different H. pylori infection statuses from a population level. Single-cell transcriptome profiling at the molecular level was performed, allowing for the characterization of different T cell subpopulations and H. pylori -activated clusters, as well as the delineation of transcriptional changes in H. pylori -activated T cells and the identification of co-stimulatory ligand expression in H. pylori + GC cells. The results of this study shed light on the mechanisms underlying target cell-dependent T cell responses induced by H. pylori , as well as the potential mechanisms underlying the responses of GC patients to H. pylori infection, and indicate that PTGER2 may serve as a potential target for treating patients with GC. Although it is well established that H. pylori infection significantly contributes to the carcinogenesis of GC, the role of H. pylori infection in predicting the survival of GC patients is still less well understood. Interestingly, a prospective study has demonstrated that GC patients with positive H. pylori infection frequently showed better relapse-free survival and overall survival . Recently, another study confirmed that H. pylori infection status is one of the most potentially important independent factors in predicting prolonged survival . Additionally, other studies , particularly a meta-analysis, have shown that H. pylori infection is an independent protective factor for GC progression, with this protective effect being consistent across different ethnic groups, H. pylori evaluation methods, and quality assessment measures . In line with these results, our study also confirmed that H. pylori is an independent, beneficial prognostic factor, especially in patients with early-stage GC. This may be attributed to the fact that H. pylori infection is not the only factor affecting prognosis, and various therapies are available for advanced or relapsed GC patients . The suppressive effect of H. pylori on GC progression may be attributed to the induction of certain antitumor immune responses. The activation of T cells, the main immune effector cells for acquired immunity, is directly influenced by H. pylori bacterial products, such as VacA and arginase . Notably, Capitani et al. demonstrated that a specific H. pylori protein named HP1454 is a crucial bacterial factor that exerts its pro-inflammatory activity by directly modulating T-cell responses . This finding prompted researchers to explore the mechanism of H. pylori -activated T cells in relation to the pathogenic factors of H. pylori . Additionally, several naturally occurring immunodominant CD4 + T cell responses in H. pylori -infected individuals have also been identified and characterized . Although we did not observe significant differences among CD4 + T cells, we identified two clusters exhibiting distinct distributions of CD8 + T cells across different H. pylori infection statuses. Immunodominant T cells are believed to be more effective and play a central role in the host adaptive immunity against pathogens, as has been well demonstrated in various viral, bacterial, and tumor contexts . Previous research has shown increased gastric T-cell infiltration in situ with a typical T-helper (Th)1 phenotype during H. pylori infection and has identified several antigen-specific T-cell responses, such as HpaA-specific mucosal CD4 + T-cell responses with a Th1 profile, primarily occurring during the precancerous lesions stage . 
However, in cancers, including GC, CD8 + T cells are essential for the immune eradication of cancer cells, although they often become dysfunctional over the course of tumorigenesis . To address this issue, researchers have developed a series of immunotherapies, including immune checkpoint blockade , adoptive T cell therapy , and chimeric antigen receptor T cell therapy , to restore the functions of CD8 + T cells. Therefore, we speculate that these specific CD8 + T cells driven by H. pylori may contribute to the better prognosis observed in H. pylori + GC patients. Recent technological advances have provided important insights into the heterogeneity of CD8 + TILs, demonstrating that distinct T cell subsets exist with different transcriptional programs and functional states . When focusing on the CD8-C0-CXCL13 cluster, we noted that, in addition to the high expression of CXCL13 , it also specifically expressed genes like TOX , a key transcription factor for CD8 + T cell differentiation during chronic viral infections and cancer , and PHLDA1 , a required transcription factor for the regulation of the TLR-mediated immune response . This indicates that these cells possess a strong immunological activity state. Consistent with the characteristic, pathway enrichment analysis revealed that the upregulated genes in CD8-C0-CXCL13 T cells were enriched for T cell activation pathways. Since the epithelial cells are central to the cellular interaction network in GC , we evaluated the interactions between epithelial cells and matched CD8 + T cells. Tumor cells with high expression of TNFRSF1A may directly interact with TNF produced by CD8 + T cells, enhancing CD8 + T cell activity through the production of GZMB and IFNG , which suppresses tumor progression in GC with H. pylori . More importantly, a series of immune cell signatures have been identified by single-cell sequencing in various cancers, such as breast cancer , lung cancer , and melanoma , which were associated with patient prognosis. Similarly, we confirmed that CD8-C0-CXCL13 T cells may serve as key mediators of the improved clinical outcomes observed in human GC patients with H. pylori infection. Recently, T RM cells have been shown to both prevent and exacerbate various pathologies . The involvement of T RM cells in a range of malignancies makes the design of therapeutic strategies that can modulate either their production or activity an attractive goal. Additionally, further research has confirmed that T RM cells display transcriptional features specific to individual tissues, allowing for their survival and long-term retention . In this study, we identified a T RM cell cluster marked by AREG , a member of epidermal growth factor family, and PTGER2 , a receptor for prostaglandin E2, which was predominantly populated by cells that are H. pylori -negative. AREG has been shown to associate with type 2 immune-mediated resistance, tolerance mechanisms to infection, and immune suppression within the TME , while PTGER2 enhances the pathogenic phenotype by regulating the balance of cytokines, such as IFN-γ/IL-10, in a context-dependent manner . More importantly, previous studies have revealed that PTGER2 is involved in tumor cell proliferation, invasion, and prognosis across various malignancies . Here, we propose that high PTGER2 expression is significantly associated with worse prognosis exclusively in patients with high CD8 expression in GC. 
This finding suggests that T RM cells exhibiting high PTGER2 expression may represent a promising new prognostic factor and therapeutic target for anti-cancer therapies. There are several limitations to the present study. First, to investigate the effect of H. pylori on the survival of patients with GC, we chose a longitudinal study design. However, because our data are derived from a single center, the results require further validation. Second, it remains uncertain whether H. pylori still persists in the cancer tissues of GC patients, which could lead to misclassification of H. pylori infection status. Additionally, we did not measure indicators such as antibody titers or other markers to assess the active status or quantity of H. pylori infection. Third, our focus was on TILs in GC, and we performed scRNA-seq of T cells. The ligand-receptor interactions were inferred from a public database that primarily comprises normal tissues. To enhance the robustness and credibility of these inferences, further integration of scRNA-seq data from tumor cells and TME cells from GC patients, along with functional assays, would provide more compelling evidence. Fourth, due to the lack of an immune-competent animal model for GC, the direct or indirect mechanisms involved in vivo remain unclear. In summary, we utilized single-cell RNA sequencing to analyze the T cell landscape of GC patients, achieving high resolution regarding different H. pylori infection statuses. We identified two CD8 + T cell clusters: one consisting of activated CD8 + T EM cells, likely reflecting preceding activation by H. pylori and associated with better prognosis in GC, and the other composed of T RM cells, which were predominantly derived from H. pylori -negative patients and likely indicate an exhausted state, correlating with worse prognosis in GC. Our findings provide valuable resources for deciphering the gene expression landscapes of heterogeneous cell types in GC and offer deep insights into cancer immunology for future drug discovery.
This study elucidates the molecular heterogeneity of tumor-infiltrating T cells in gastric cancer patients with different Helicobacter pylori ( H. pylori ) infection statuses. Through comprehensive single-cell RNA sequencing and functional analysis, we identified distinct CD8 + T cell clusters, including an activated CD8 + T cell cluster (CD8-C0-CXCL13) associated with improved patient survival and a tissue-resident memory T cell cluster (CD8-C4-AREG) linked to worse prognosis in H. pylori -negative patients. Our findings provide valuable insights into the complex interplay between H. pylori infection and the tumor immune microenvironment, highlighting potential targets for personalized immunotherapy in gastric |
Case study: Acute plasmoblastic leukemia presentation following effective haploidentical hematopoietic stem cell transplantation therapy | 7f5cd388-8eb5-4e18-8a26-34caf0abe967 | 11954571 | Surgery[mh] | Plasma cell leukemia (PCL) is a rare and challenging plasma cell dyscrasia characterized by an unfavorable prognosis. The taxonomy of PCL includes primary PCL (pPCL) and secondary PCL (sPCL), with the latter developing after a prior diagnosis of multiple myeloma (MM). pPCL is frequently observed, constituting 60%–70% of all PCL cases, whereas the incidence of sPCL ranges from 30% to 40%. This secondary manifestation typically arises in the advanced stages of MM and is diagnosed in approximately 0.5%–4.0% of all patients with MM. In adherence to the diagnostic standards set forth by the International Myeloma Working Group (IMWG) and the World Health Organization (WHO), PCL diagnosis is validated by meeting either of the two criteria. Specifically, the presence of ≥20% circulating plasma cells in the peripheral blood or an absolute plasma cell count surpassing 2 × 10 9 /L is considered sufficient for the conclusive diagnosis of PCL. The clinical presentation of PCL differs significantly from that of MM in several aspects, as reflected in . Key laboratory abnormalities in PCL include hypercalcemia, defined as a total (unionized) calcium level of >2.75 mmol/L or an ionized calcium level of >1.45 mmol/L. – Typically, pPCL manifests in a demographic population skewed toward younger patients, predominantly during the advanced phases of the disease, often accompanied with extramedullary lesions such as hepatomegaly, splenomegaly, and involvement of other organs. Conversely, osteolysis is less frequently observed. Given the heightened prevalence of extramedullary lesions, the IMWG has advocated the inclusion of 18 F-fluorodeoxyglucose positron emission tomography (FDG-PET/CT) as a crucial diagnostic, evaluative, and monitoring tool for PCL. This recommendation underscores the importance of advanced imaging modalities in comprehensively characterizing the disease presentation, progression, and response to therapeutic interventions. Owing to the aggressive trajectory of PCL, the imperative for prompt treatment initiation has been underscored. Typically, patients in this cohort demonstrate a prompt response; however, the aggressive nature of the disease leads to early relapses, a characteristic hallmark posing challenges in the therapeutic landscape of PCL. – , – Conventional cytostatic chemotherapy has yielded unsatisfactory outcomes in the treatment of PCL. Subsequent to the advent of hematopoietic stem cell transplantation (HSCT), coupled with the integration of the proteasome inhibitor bortezomib and immunoregulatory drugs such as thalidomide and lenalidomide, a certain degree of prognosis enhancement has been achieved. Nevertheless, the outcomes still fall short of the optimal therapeutic objectives. This underscores the ongoing imperative for refining treatment strategies in the relentless pursuit of improved outcomes for individuals afflicted with PCL. The reporting of this study conforms to the Case Report (CARE) guidelines.
A female patient in her 50s was diagnosed with PCL in 2022, following the onset of pain in the lumbosacral region radiating to the left lower limb, limited mobility, weakness, and numbness in the left lower limb. In March 2022, she sought medical help at her place of residence, where the diagnosis was verified as "PCL, FLC kappa." Examination revealed the presence of free kappa light chains (kappa LCC), free kappa chains (3.71 μg/mL) (C), free lambda chains (0.46 μg/mL) (C); kappa/lambda index of 8.1275; and presence of plasma cells on evaluation of a routine peripheral blood smear (48%), with the characteristic immunophenotype CD138+/CD38-/Kappacyt+/Lambdacyt-/CD117-/CD56+/CD19-/CD20-/is (72.0%). Furthermore, it revealed the presence of ≥10% monoclonal plasma cells in the bone marrow with the characteristic immunophenotype CD138+/CD38-/Kappacyt+/Lambdacyt-/CD117-/CD56+/CD19-/CD20+/is (61.2%). The patient had splenomegaly, serum β2-microglobulin level of 12.8 mg/L, and albumin level of 36.1 g/L; moreover, immunofixation of blood and urine did not reveal any findings. Based on these indications, the patient was diagnosed with stage III according to the International Staging System. Following the confirmation of diagnosis, two cycles of chemotherapy, adhering to the "VDT-PACE" regimen, were administered. Subsequent reassessment revealed a "complete objective response" in accordance with the criteria outlined by the IMWG. General blood analysis: leukocytes, 2.2 × 10⁹/L; erythrocytes, 3.52 × 10¹²/L; hemoglobin, 114 g/L; and platelets, 149 × 10⁹/L. Myelogram: plasma cells, 1%. Conclusion: The presented preparations from the bone marrow punctate were hypocellular. Differential count of myelocaryocytes was performed in 200 cells. Immunophenotyping of peripheral blood. Conclusion: In the examined peripheral blood sample, the population of cells exhibited the immunophenotype CD138+/CD38-/Kappacyt+/Lambdacyt-/CD117-/CD56+/CD19-/CD20-/is (0.0%). Immunofixation of blood serum and urine was negative. Given this favorable response, the decision was made to proceed with autologous HSCT (auto-HSCT). In November 2022, mobilization of peripheral hematopoietic stem cells (HSCs) was initiated using the etoposide and colony-stimulating factor regimen. The initial CD34+ count on day +11 was insufficient for collection, prompting continued stimulation. The subsequent counts on days +12, +13, and +14 demonstrated incremental improvement, culminating in a count of 3.13% ± 0.06%. Despite these efforts, mobilization failure was confirmed. Mobilization by the CXCR4 chemokine receptor antagonist plerixafor was not possible due to the unavailability of this drug in Kazakhstan. This drug is purchased by patients themselves, but not everyone has the opportunity to purchase it given the cost, which limits its use. In response to this, the therapeutic approach shifted to maintenance therapy with lenalidomide from August 2022 to May 2023. Considering the disease status marked by a complete response, unsuccessful HSC mobilization, aggressive disease course, and elevated risk of relapse, alongside the presence of a partially compatible donor, the inclusion of haploidentical T-cell replete graft stem cell transplantation (haplo-TGSCC) from the son was contemplated as the subsequent step in the treatment plan ( ). In February 2023, a retrospective study conducted by the Center for International Studies revealed a 3-year overall survival (OS) of 39% for the specific diagnosis under consideration with allogeneic HSCT (allo-HSCT). 
Subsequently, in May 2023, the patient was admitted to the Bone Marrow Transplantation department of the National Research Oncology Center in Astana for haplo-HSCT from her son. The transplantation procedure involved high-dose chemotherapy (HDCT), specifically using fludarabine (30 mg/m 2 ) and melphalan (MEL; 140 mg/m 2 ). Haplo-TGSCC from her son was performed, with a CD34 count of 7.7 million/kg. Cyclophosphamide was administered on days +3 and +4 at a dose of 50 mg/kg/day, followed by tacrolimus initiation from +24 days onward at a dose of 0.03 mg/kg/day to mitigate graft-versus-host reaction (GVHR). Subsequent adjustments to tacrolimus dosage were made based on blood concentration levels. Neutrophilic engraftment occurred by +17 days. After achieving complete donor chimerism at +30 days (99.9%), a control bone marrow puncture and donor chimerism assay were conducted at +60 days post haplo-TGSCC. The results indicated an absence of detectable plasma cells in the bone marrow, with the preservation of complete donor chimerism at 98.67%. Immunosuppressive therapy (IST) with tacrolimus was continued until day +195, when it was discontinued given the absence of GVHR and the status of the underlying disease.
Medical history
A female patient in her 50s was admitted to the Center for Hematology, Oncohematology and Bone Marrow Transplantation with a confirmed diagnosis of PCL, prompting the consideration of haplo-HSCT. This procedure involved utilizing stem cells from her son after the patient had achieved a "complete response" following two courses of chemotherapy (HT) using the "VDT-PACE" regimen (bortezomib, dexamethasone, thalidomide, cisplatin, adriamycin, cyclophosphamide, and etoposide). The response was assessed based on the criteria established by the IMWG. The decision to perform haplo-HSCT was influenced by the patient's prior unsuccessful mobilization of peripheral HSCs, rendering auto-HSCT unfeasible. Faced with the unavailability of a fully human leukocyte antigen (HLA)–matched donor, the patient underwent successful haplo-HSCT from her son, marking a milestone at 10 months post-diagnosis during the initial remission phase. Signed consent for treatment as well as for publication of the study results was obtained from the patient.
PCL represents a rare malignancy characterized by an aggressive clinical course and unfavorable prognosis. The scarcity of comprehensive therapeutic outcome data, coupled with a dearth of randomized studies, complicates the decision-making process for treating this disease. This case report illuminates a successful outcome in a patient who underwent haploidentical T-cell replete graft stem cell transplantation (haplo-TGSCC) from her son. In the dynamic landscape of modern medicine, continuous evolution brings forth new diagnostic techniques and therapeutic approaches, necessitating a nuanced consideration of disease-related, patient-related, and prior therapy-related factors to tailor an optimal treatment strategy. Disease-related factors encompass the quality and duration of response to previous therapies as well as the aggressiveness of relapse. Concurrently, patient-related considerations encompass pre-existing toxic effects, comorbidities, quality of life, age, and performance status. A notable advancement in recent times is the success achieved with haploidentical transplantation. With the advent of improved GVHR prophylaxis, this approach has become accessible for nearly all eligible patients and has demonstrated efficacy comparable to allo-HSCT. Haplodonors, in this context, share one of their two HLA-A, -B, -C, -DRB1, -DQB1, -DPB1 haplotypes with the patient through genetic inheritance, whereas the other haplotype remains distinct. This breakthrough in haploidentical transplantation presents a promising avenue in the intricate landscape of PCL treatment. Consequently, haploidentical donors typically exhibit mismatches with the patient in six of the 12 HLA alleles, although the specific number may exhibit variability. Genetic recombination, which occurs at a frequency of up to 1% if the patient and donor lack a direct familial relationship, introduces the potential for mismatches even on the shared haplotype. In the case of our patient, the haplodonor candidate is a son, obviating the necessity to account for recombination. Notably, however, the consideration of excluding recombination is generally advised for all haplodonor candidates, except for those who are the mother, father, son, or daughter of the patient. The Center for International Blood and Marrow Transplant Research (CIBMTR) study, published in May 2012, reported on outcomes comparing the results of a total of 147 patients, including 97 patients with pPCL who underwent auto-HSCT and 50 patients with pPCL who underwent allo-HSCT within 18 months of diagnosis between 1995 and 2006, the majority of whom (68%) received a myeloablative conditioning regimen. – The median ages of the two groups were 56 and 48 years. Progression-free survival (PFS) at 3 years was 34% (95% CI, 23%–46%) in the autologous group and 20% (95% CI, 10%–34%) in the allogeneic group. The cumulative recurrence rate after 3 years was 61% (95% CI, 48%–72%) in the autologous group and 38% (95% CI, 25%–53%) in the allogeneic group. The OS after 3 years was 64% (95% CI, 52%–75%) in the autologous group and 39% (95% CI, 26%–54%) in the allogeneic group. Recurrence-free mortality (RFM) after 3 years was 5% (95% CI, 1%–11%) in the autologous group and 41% (95% CI, 28%–56%) in the allogeneic group. The lack of a positive effect on OS in allo-TGSCC may be partly due to a higher treatment-related mortality (41% in allo-TGSCC versus 5% in autoTGSCC). 
This study demonstrated that although allo patients had a significantly lower recurrence rate, their RFM was significantly higher with no improvement in OS after 3 years. In 2020, the CIBMTR reported a further analysis of 348 patients with pPCL who underwent transplantation between 2008 and 2015. There was an increase in hematopoietic cell transplant utilization from 12% in 1995 to 46% in 2009; however, the outcomes remained poor, with no increase in OS in the allo group compared with that in the previous study. Initial comparisons were also made between allo-first and first autograft patients (regardless of subsequent second graft insertion). For the first auto-HSCT, the estimated median PFS was 14.3 months, with a median OS of 33.5 months. For the first allo-HSCT, the estimated median PFS was 11.7 months, with a median OS of 17.5 months. After 60 months, the probabilities of OS and PFS were similar: OS: allo (34.6%), auto (31.3%); PFS: allo (19.9%), auto (14.3%). This study confirms that durable PFS can be achieved in patients undergoing allo-HSCT as a first transplant. In June 2022, the European Society for Blood and Marrow Transplantation published an article presenting a comparative analysis of auto- and allo-HSCT strategies in patients with pPCL, employing dynamic prediction modeling. Through retrospective analyses across distinct time periods, the study substantiates a significant risk of mortality within the initial 100 days for allo-primary transplantation, advocating the preference for tandem transplantation strategies. However, it emphasizes that the disease status, specifically the remission status, and the attainment of complete response before the initial transplant play a pivotal role in determining the optimal form of treatment for patients with pPCL. Despite advancements attributed to the integration of novel agents, pPCL persists as a formidable challenge for clinicians. This retrospective study contributes valuable evidence to guide transplant clinicians in their decision-making processes, offering insights to patients regarding the most appropriate approach tailored to their circumstances following effective induction therapy. The discernments derived from this study have the potential to inform and optimize clinical decision-making processes related to transplant strategy for patients facing the complexities of pPCL.
PCL stands out as a distinct entity from MM, both clinically and genetically. Despite this distinction, therapeutic strategies for PCL remain inadequately defined, leading to the adoption of regimens originally developed for MM. The prognosis for PCL is notably poor, with a median OS of <1 year. Nevertheless, advancements have been witnessed with the introduction of HDCT supported by stem cells, along with the inclusion of bortezomib and lenalidomide in treatment protocols. Our clinical observations underscore a positive treatment outcome associated with the adoption of haplo-HSCT as a salvage option. In our specific case, the patient underwent HDCT complemented by donor stem cell support, revealing promising results in the context of PCL management. These findings contribute to the growing body of evidence supporting the efficacy of haplo-HSCT as a viable therapeutic avenue in the intricate landscape of PCL.
|
Comparative performance of tuberculin and defined-antigen cocktails for detecting bovine tuberculosis in BCG-vaccinated cattle in natural settings | 461fb94a-8cd5-4114-a629-261a0c475636 | 11802902 | Vaccination[mh] | Bovine tuberculosis (bTB) is a chronic disease of cattle primarily caused by members of the Mycobacterium tuberculosis complex (MTBC), which can infect a wide range of domestic and wild animal species, including humans, posing a significant public health risk in regions with high prevalence . The prevalence of bTB varies by country, influenced by control measures, animal movements and wildlife reservoirs . In Ethiopia, bTB is more prevalent in genetically improved cattle raised for high milk production in intensive farming systems, with prevalence rates of 21.6% in Holstein-Friesians and 9.9% in crossbred cattle . A recent study reported 54.4% of 299 dairy herds and 24.5% of 5,675 cattle were reactive to the comparative intradermal cervical tuberculin test . In the absence of feasible bTB control options, efforts to increase milk productivity through cross-breeding and intensification of cattle farming are likely to increase disease incidence. Currently, control of bTB relies on tuberculin testing of herds and culling positive test reactors. This approach has successfully reduced the prevalence of the disease in many countries and has led to elimination in Australia . However, this strategy is extremely costly and unaffordable for resource-constrained countries like Ethiopia. It is also less effective in countries where wildlife reservoir hosts are present , . Bacillus Calmette-Guérin (BCG) vaccination offers a potentially practical and affordable alternative for countries lacking existing control programs – . Experimental investigations and field studies have demonstrated that while BCG does not protect all vaccinated animals from infection, it significantly reduces the severity of bTB pathology , , , . The BCG vaccine developed from attenuated M. bovis and approved for human tuberculosis, is considered a viable candidate vaccine for cattle. However, the World Organization for Animal Health (WOAH) does not recommend BCG for livestock due to its interference with tuberculin-based tests leading to false-positive reactions in official bTB control programs . To overcome the issue of non-specific tuberculin reactions caused by BCG vaccination, researchers are exploring the use of defined antigens that can differentiate infected among vaccinated animals (DIVA), a crucial tool for effective implementation of BCG vaccination as a method of disease control , . Comparative genomic and transcriptomic studies have identified several antigens present in M. bovis field strains but either absent or not immunogenic in the BCG vaccine strain, enhancing the specificity of tuberculin-based bTB diagnostic tests , . Previous studies have shown that recombinant peptide antigens induce a greater skin response when used in combination than when used alone . Notable candidate antigens that demonstrated significant DIVA capability include ESAT-6, CFP-10, and Rv3615c, indicating a potential path to use along with BCG vaccination for bTB management and control , . The performance of these antigens is subject to variability depending on the geographical conditions, prevailing Mycobacterium lineages, dosage, cutoff values, and features of the examined population. Variability in the levels of interferon-gamma (IFN-γ) and innate immune response among different strains of M. bovis has been reported . 
Therefore, prior to practical deployment, thorough validation of these DIVA antigens is required across a varied range of cattle breeds and geographic regions to determine their specificity and sensitivity. However, existing research data on the performance of DIVA tests in BCG-vaccinated animals are limited to studies primarily performed in the UK under experimental transmission , , which might not reflect the performance of DIVA antigens in naturally infected cattle. Although other studies have also tested DIVA and tuberculin in BCG-vaccinated and unvaccinated cattle under natural transmission settings, they were limited to evaluating the efficacy of the BCG vaccine , , – or comparing the relative performance of antemortem tests without postmortem confirmation , , . This study examined diagnostic data collected during a natural transmission experiment evaluating BCG efficacy in order to assess the immunological responses of BCG-vaccinated cattle to mycobacterial antigens. Other similar studies published thus far have relied on the use of the DIVA skin test and IFN-γ release assay (IGRA) to determine the infection status of artificially challenged animals based on predetermined cutoff values. Here, we compared the quantitative responses of vaccinated and control groups to DIVA antigens with their responses to tuberculin. The study animals were followed longitudinally, with repeated tests and postmortem confirmation of infection status, providing a unique opportunity to assess the performance of bTB diagnostic tests in animals with confirmed (postmortem) disease status. Specifically, we evaluated the immune responses to DIVA peptide antigen cocktails in 67 BCG-vaccinated and 67 unvaccinated calves via both skin test and in vitro IGRA formats and compared the performance of these peptides with that of tuberculin.
The methods are described under the following subsections: study setting; study animals and design of the experiment; test antigens; BCG vaccination; skin and IGRA testing schedules; skin testing procedure; IGRA testing procedure; postmortem examination; tissue sampling for mycobacterial culture; mycobacterial culture; DNA extraction; PCR (polymerase chain reaction); ethics statement; statistical analyses; and definitions and interpretation of diagnostic test performance measures.
The study took place at the Animal Health Institute (AHI) in Sebeta, which is located approximately 20 km southwest of the capital city, Addis Ababa, Ethiopia. At the AHI premises, three separate barns were constructed. Each barn had the capacity to accommodate 80 adult cattle and had a fenced outdoor area measuring approximately 2,000 to 2,500 m 2 . These barns were approximately 300 m apart from each other. Two of the fenced barns were designated for housing the experimental replicate groups (seeders and sentinels) throughout the entire exposure period. This was done to control for any potential variations in exposure resulting from differences in infectiousness among the seeder animals. The third barn was used to house and acclimate the sentinel calves immediately after they were recruited and before they were exposed to the seeders. To ensure proper waste management, septic tanks were constructed behind each barn. The feces and urine were directed into the septic tanks through large pipes that connected the barns to the exterior premises. The animal attendants were all educated about the zoonotic risk of bTB. They were required to wear personal protective equipment, including N95 masks, disposable gloves, and rubber boots, whenever they handled animals or worked inside the barns. All relevant local biosecurity and safety procedures were strictly adhered to.
A total of 72 adult cattle, naturally infected and identified as reactors through skin and blood tests, were selected as seeders for the natural transmission experiment. The primary objective of our experimental study was to assess the efficacy of BCG vaccination through a natural transmission experiment. In this report, we conducted a secondary analysis of diagnostic data collected from the BCG efficacy study . The experiment was divided into two phases, each involving an equal number of calves and comprising two biological replicate groups housed in separate barns. The reason for conducting the study in two phases was to measure both how well the vaccine prevented infection (direct effect) and how it reduces the infectiousness of infected vaccinated animals (indirect effect) for a more complete understanding of the total efficacy of the BCG vaccine. To find naive sentinels, dairy herds were screened using both skin tests and IGRA. For each phase, 68 male calves, less than three months old, were recruited from dairy herds that tested negative for bTB. These herds were located in Koka, Ziway, Alage, Hawassa, Wondogenet, Jimma, and Ambo towns. The recruited sentinels were housed separately from the seeders to prevent early exposure. After an acclimatization period of three to twelve weeks, the sentinels that tested negative in the bovine (PPD-B) minus avian (PPD-A) tuberculin IGRA test were randomly assigned to either the control or vaccinated groups using a double-blind lottery system. Calves with a positive OD450 nm ≥ 0.1 on the bovine minus avian tuberculin IGRA test were excluded. In the first phase, 33 unvaccinated controls and 34 vaccinated calves were included. In the second phase, 34 unvaccinated controls and 33 vaccinated calves were included, resulting in a total of 67 calves per group across both phases. During the trial period, calves were fed boiled and pasteurized milk until they reached six months of age. Afterward, weaned calves and adult seeders were provided with concentrate feed, hay, green fodder, and water ad libitum . Throughout the twelve-month contact period, 5 vaccinated calves and 2 unvaccinated controls from both phases died, resulting in 127 (62 vaccinated and 65 unvaccinated controls) reaching the endpoint (Fig. ).
The PPD (purified protein derivative) tuberculins were procured from Prionics Lelystad BV, Lelystad, the Netherlands. DIVA peptide cocktails consisting of peptides representing ESAT-6, CFP-10, and Rv3615c were commercially synthesized as described previously .
The vaccine was supplied as a freeze-dried preparation and reconstituted in Sauton’s medium as per the manufacturer’s instructions (Green Signal Bio Pharma, Pvt. Ltd., India, in the first phase and AJ Vaccines A/S, Artillerivej 5, DK 2300 Copenhagen S, Denmark, in the second phase). The vaccinated group of calves was administered a subcutaneous injection of 0.5 ml (1–4 × 10 6 CFU) of live attenuated BCG Danish Strain SSI 1331 (Statens Serum Institut [SSI], Copenhagen, Denmark). Conversely, the control group received 0.9% normal saline. A total of 67 calves received the BCG vaccination, while the remaining 67 sentinel calves were not vaccinated. The authors conducting the field work were blinded to the vaccination status of the animals until the conclusion of the experiment.
The sentinel animals underwent skin testing every four months and IGRA every two months using DIVA, avian, and bovine tuberculin antigens in both the skin and IGRA test formats (Fig. ). The skin test and IGRA results analyzed and reported here are from the 12th-month post-exposure test, except for Table ; Fig. , which are based on pre-exposure test results, and the Supplementary Fig. , which includes tests from all post-exposure time points.
For the skin test, hair was shaved at two sites on the right and one site on the left mid-cervical area. Bovine tuberculin (30,000 IU/ml), avian tuberculin (25,000 IU/ml), and the DIVA peptide cocktail (100 µg/ml of each peptide) were injected intradermally in 0.1 mL volumes. Skin fold thickness was measured to the nearest millimeter using Irish calipers by the same person before and 72 h after antigen administration. The change in skin thickness was determined by subtracting the pre-injection measurement from the 72-hour post-injection measurement for each antigen. The cut-offs used to determine test status were > 4 mm for the Comparative Cervical Tuberculin (CCT), ≥ 4 mm for the Single Intradermal Tuberculin Test (SIT), and ≥ 2 mm for the DIVA antigens.
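A minimal sketch of the skin test readout described above is given below; it assumes the comparative (CCT) call is made on the bovine-minus-avian difference, applies the stated cut-offs, and uses function and variable names of our own choosing.

```python
# Minimal sketch of the skin test interpretation. Inputs are the increases in skin-fold
# thickness (72 h minus pre-injection, in mm) for each antigen; cut-offs follow the text
# (SIT >= 4 mm, CCT bovine-minus-avian difference > 4 mm, DIVA >= 2 mm).

def skin_test_status(d_bovine_mm: float, d_avian_mm: float, d_diva_mm: float) -> dict:
    """Return positive/negative calls for the SIT, CCT and DIVA skin tests."""
    return {
        "SIT":  d_bovine_mm >= 4,                # bovine tuberculin alone
        "CCT":  (d_bovine_mm - d_avian_mm) > 4,  # comparative bovine minus avian
        "DIVA": d_diva_mm >= 2,                  # ESAT-6/CFP-10/Rv3615c cocktail
    }

# Example: a strong bovine response with little avian cross-reactivity.
print(skin_test_status(d_bovine_mm=9.0, d_avian_mm=2.0, d_diva_mm=3.0))
# {'SIT': True, 'CCT': True, 'DIVA': True}
```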
The IGRA test was conducted using the BOVIGAM ® kit (Thermo Fisher Scientific Inc., USA). Approximately 5 ml of blood was collected from each animal’s jugular vein in lithium heparin vacutainers. Duplicate wells were stimulated with 25 µl of each antigen and 250 µl of whole blood, resulting in final concentrations of 1 µg/ml DIVA antigens, 250 IU/ml avian tuberculin (PPD-A), and 300 IU/ml bovine tuberculin (PPD-B). Positive and negative controls were stimulated with pokeweed mitogen (PWM) at 10 µg/ml and RPMI (Nil), respectively. Samples were incubated at 37 °C in a humid atmosphere for 18–24 h. The culture supernatants from duplicate samples were pooled and stored at – 80 °C until the IFN-γ test was performed. The results of the IGRA test were reported as the change in optical density (OD) at 450 nanometers (ΔOD450 nm). This was determined by subtracting the OD450 nm of whole blood cultures without antigen (Nil) from the OD450 nm of antigen-stimulated cultures. Optical density (OD) refers to the amount of light absorbed by the sample, and 450 nm is the specific wavelength of light used for the measurement. The intensity of the color corresponds to the OD450 nm value and is directly related to the concentration of IFN-γ in the sample.
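The following sketch illustrates the ΔOD450 nm calculation and one possible classification step. The 0.1 cut-off mirrors the threshold used for sentinel recruitment earlier in the text and is applied to all antigens purely for illustration, not as a kit specification; all OD values are invented.

```python
# Illustrative sketch of the IGRA readout: each antigen's response is the mean OD450 of
# antigen-stimulated duplicate wells minus the mean OD450 of the unstimulated (Nil) wells.

def delta_od(antigen_wells, nil_wells):
    """Mean OD450 of stimulated wells minus mean OD450 of Nil wells."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(antigen_wells) - mean(nil_wells)

def igra_calls(ppd_b, ppd_a, diva, nil, cutoff=0.1):
    b, a, d = delta_od(ppd_b, nil), delta_od(ppd_a, nil), delta_od(diva, nil)
    return {
        "dOD PPD-B": round(b, 3),
        "dOD PPD-A": round(a, 3),
        "dOD DIVA": round(d, 3),
        "bovine minus avian positive": (b - a) >= cutoff,
        "DIVA positive": d >= cutoff,
    }

# Example duplicate OD450 readings (hypothetical values).
print(igra_calls(ppd_b=[0.85, 0.91], ppd_a=[0.32, 0.30], diva=[0.25, 0.28], nil=[0.08, 0.10]))
```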
After humanely euthanizing the animals using a captive bolt (CASH ® special 0.22 caliber penetrating captive bolt) followed by immediate exsanguination, postmortem examinations were carried out by a skilled pathologist assisted by two veterinarians with extensive experience in diagnosing bTB lesions. At the end of the natural exposure to infection, all calves were euthanized and examined. Following visual inspection and palpation of each lymph node and tissue in the carcass, the relevant organs for sampling were separated and transferred to the postmortem examination table for detailed evaluation. To assess the true disease status, all the lymph nodes were sliced into 1–2 cm thick sections and meticulously examined. The animal was classified as bTB lesion positive (lesioned) if any of the examined tissues had gross visible lesions consistent with bTB and negative (non-lesioned) if no obvious lesions were found. The severity of the gross lesions was evaluated using the semiquantitative technique developed by Vordermeier et al. taking the number and size of the observed lesions into account.
For bacterial culture, samples were collected from ten different tissues of each animal, including mandibular, lateral retropharyngeal, medial retropharyngeal, bronchial, cranial tracheobronchial, mediastinal, hepatic and mesenteric lymph nodes as well as from the left and right lung lobes, regardless of the presence or absence of lesions. Extra samples were collected when lesions were found in other examined lymph nodes and tissues. The lymph node and tissue samples were collected in sterile 50 mL Falcon tubes containing 10–15 mL of 0.9% saline solution and transported to the laboratory for bacterial culture and isolation. Prior to processing, the tissue samples were frozen at − 20 °C.
For mycobacterial isolation, lymph nodes or tissue samples were cut with sterile blades into thin slices. The pieces were then homogenized with a sterile pestle and mortar and transferred into a sterile plastic bag for further grinding in a stomacher. Then, 5 ml of the homogenate was transferred to a sterile 50 ml Falcon tube, decontaminated by adding an equal volume of 4% NaOH and mixing for 15 min. The mixture was neutralized by adding 40 ml of PBS, followed by centrifugation at 1,106 g for 15 min. The supernatant was discarded, and 0.1 ml of the sediment from each sample was spread onto duplicate slants of Lowenstein–Jensen (LJ) medium (Sigma Aldrich, Co, St. Louis, USA) , one enriched with sodium pyruvate and the other with glycerol. The cultures were then incubated aerobically at 37 °C for up to 8 weeks, after which the growth of the mycobacterial colonies was observed weekly. Slants that did not show growth beyond the 8th week were considered to be negative. Ziehl–Neelsen (ZN) staining technique was used to confirm the presence of acid-fast bacilli (AFB) in the smears of culture-positive colonies.
DNA was extracted from culture isolates using the DNeasy Blood and Tissue Kit (QIAGEN) with some modifications. First, loopfuls of cells were removed from culture-positive LJ media using a sterile loop and transferred to 2 ml Eppendorf tubes containing 300 μl of TE buffer. The tubes were then incubated at 85 °C for 60 min on a heating block to inactivate the bacteria. Lysis buffer was prepared by adding 20 ml of 20 mm Tris-HCl, 12 ml of Triton X and 2 ml of EDTA to one liter of water, after which the mixture was stored at room temperature. Immediately before use, 80 mg of lysozyme was added per ml of lysis buffer to make a working lysis buffer solution. For each reconstituted culture sample, 100 µl of working lysis buffer solution was added. The samples were incubated at 37 °C for 1 h. Then, 180 µl of ATL buffer and 20 µl of proteinase K were added to each sample. The samples were incubated at 56 °C for 3 h and vortexed every hour to ensure complete mixing. Then, all downstream procedures were performed using the reagents provided by the manufacturer as described in QIAGEN , and the DNA was eluted in 100 µl of AE buffer.
The DNA samples were amplified by two-step real-time multiplex PCR at the AHI, Sebeta, Ethiopia. In the first step of the multiplex PCR cycle, three primer-probe sets targeting IS1081, the yrbE3A gene and the ‘RD7’ deletion were used to identify all MTBC species (both animal- and human-adapted), animal-adapted species and human-adapted species, respectively. In the second step, three additional primer sets targeting the RD4 gene deletion, the SNP at the lepA locus and the SNP at the rskA locus were used for specific identification of M. bovis , M. caprae and M. orygis , respectively. The target genes and primer/probe set sequences used in the current multiplex PCR assay are as described in our previous study . The amplification was carried out in a 20 µL final reaction mixture, which consisted of 5 µL of DNA template, 10 µL of prime gene expression master mix with ROX dye, 0.5 µL of each primer, and 3.5 µL of nuclease-free water. We used nuclease-free water as the negative control and synthetic gDNA targets (IDT, Iowa, USA) as the external positive controls in duplicate for each PCR run to ensure that the master mix and amplification conditions worked as expected. The IS1081 sequence, a common target for all MTBCs, was used as an internal amplification control. The results were considered invalid if the positive control was not detected or if the negative control tested positive. Amplification was performed using the QuantStudio™ 6 Flex Real-Time PCR System (Applied Biosystems Life Technologies, Thermo Fisher Scientific, Waltham, MA, USA). The thermocycler protocol included one cycle of 3 min at 95 °C for polymerase activation, 40 cycles of 15 s at 95 °C for denaturation, and 40 cycles of 60 s each at 63 °C for annealing/extension steps. For all primer-probe sets, a sample was considered PCR positive when the cycle threshold (Ct) value was ≤ 40.
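A hedged sketch of how such a multiplex qPCR run could be interpreted programmatically is shown below, using the validity rules and the Ct ≤ 40 cut-off stated above; the data structure and example Ct values are hypothetical.

```python
# Hedged sketch: a run is valid only if the positive control amplifies and the no-template
# control does not, and a target is called positive when its Ct is <= 40.

CT_CUTOFF = 40.0

def target_positive(ct):
    """A Ct value of None (or NaN, which fails the comparison) means no amplification."""
    return ct is not None and ct <= CT_CUTOFF

def interpret_run(sample_cts: dict, pos_control_ct, neg_control_ct) -> dict:
    if not target_positive(pos_control_ct) or target_positive(neg_control_ct):
        return {"valid": False}
    calls = {name: target_positive(ct) for name, ct in sample_cts.items()}
    calls["valid"] = True
    # IS1081 doubles as the internal control for any MTBC member.
    calls["MTBC detected"] = calls.get("IS1081", False)
    return calls

# Example: an isolate positive for the MTBC marker and the RD4-deletion (M. bovis) target.
print(interpret_run({"IS1081": 21.4, "RD4 deletion": 24.9, "lepA SNP": None, "rskA SNP": None},
                    pos_control_ct=18.0, neg_control_ct=None))
```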
Prior to commencing the study, ethical approval was granted by the Institutional Review Board of Aklilu Lemma Institute of Pathobiology (reference number ALIPB IRB/44/2013/21). The experiment was conducted according to the Technical Guideline for National Research Ethics Review, ensuring compliance with the guidelines . Additionally, the study adhered to and reported following the ARRIVE guidelines 2.0.
Data analysis was performed using GraphPad Prism version 9.4.0 for Windows (GraphPad Software, San Diego, California, USA), the R statistical language and RStudio . Plots were produced using the ggplot2 and ‘ggstatsplot’ packages . Normality tests were conducted using the Shapiro-Wilk test with the normality() function from the {dlookr} package in R. Additionally, visual inspection of the distributions was conducted using the plot_histogram() and plot_density() functions from the {DataExplorer} package in R. Based on the results, nonparametric statistical tests were used: the Wilcoxon rank-sum (Mann-Whitney U) test was applied to determine differences in skin and IFN-γ test responses between the two treatment groups; the nonparametric Friedman repeated measures ANOVA with Bonferroni correction for multiple comparisons was used to determine differences in skin and IFN-γ responses between different antigens within a group; and the nonparametric Spearman’s rank correlation coefficient was used to determine the strength and direction of association between the IFN-γ and skin test responses to the different test antigens. The rank biserial correlation coefficients were calculated using the ggbetweenstats() and ggwithinstats() functions from the {ggstatsplot} package in R. The ggbetweenstats() function was used for comparisons between two independent groups (vaccinated vs. control responses to each antigen), while the ggwithinstats() function was used for comparisons involving more than two dependent groups (animal immune response to different antigens). Sensitivity, specificity, positive likelihood ratio (LR+), negative likelihood ratio (LR−), and accuracy were calculated using 2 × 2 contingency tables (see Supplementary Table ). In the contingency tables, columns indicate the disease status (positive or negative) determined by M. bovis PCR (Table ) and visible lesions, culture or M. bovis PCR (Supplementary Table S2) used as a reference standard. The rows represent the positive or negative results obtained using the antemortem tests. The cells labeled (a) and (d) represent true positive and true negative test results, respectively, while the cells labeled (b) and (c) represent false positive and false negative test results, respectively. The measures of diagnostic test performance were calculated as follows: sensitivity = a/(a + c); specificity = d/(b + d); LR+ = (a/(a + c))/(b/(b + d)); LR− = (c/(a + c))/(d/(b + d)); and overall diagnostic accuracy = (a + d)/N . Cohen’s kappa statistic, used to measure binary agreement between two tests for classifying animals into positive or negative groups, was calculated using the free online GraphPad web calculator. For statistical significance, P < 0.05 and 95% CI were used.
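The analyses above were performed in R and GraphPad Prism. For readers working in Python, the sketch below shows equivalent nonparametric comparisons with SciPy on simulated responses; Bonferroni adjustment, effect sizes and confidence intervals are omitted for brevity.

```python
# Equivalent minimal sketch in Python, with made-up response vectors standing in for
# skin-fold or IFN-gamma measurements: vaccinated vs. control (Mann-Whitney U),
# antigens within a group (Friedman), and between-antigen correlation (Spearman).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
vaccinated = rng.normal(2.0, 1.0, 60)   # e.g., PPD-B skin responses, mm (simulated)
control    = rng.normal(0.5, 1.0, 60)

u_stat, p_between = stats.mannwhitneyu(vaccinated, control, alternative="two-sided")

# Within-group comparison across three antigens measured on the same animals.
ppd_b, ppd_a, diva = rng.normal(2, 1, 60), rng.normal(1, 1, 60), rng.normal(0.3, 1, 60)
chi2, p_within = stats.friedmanchisquare(ppd_b, ppd_a, diva)

rho, p_corr = stats.spearmanr(ppd_b, diva)

print(f"Mann-Whitney p={p_between:.3g}, Friedman p={p_within:.3g}, Spearman rho={rho:.2f}")
```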
Sensitivity refers to the proportion of infected animals (true positives) that are correctly identified by the test, while specificity refers to the proportion of animals without the disease that the test correctly identifies as negative. A high sensitivity indicates that the test effectively identifies infected animals, reducing the risk of false negatives, while a high specificity indicates that the test effectively excludes non-infected animals, reducing the risk of false positives. The overall accuracy of a test refers to the proportion of correct identifications computed by dividing the sum of true positives and true negatives by the total number of animals tested . The LR+ indicates the likelihood of observing a positive test in animals with the disease compared to those without the disease. A favorable test has a high positive likelihood ratio. The negative likelihood ratio (LR−) reflects how much less likely a negative test result is in diseased animals compared to healthy ones. A low LR− means that the disease is unlikely given a negative test. The lower the value of LR−, the more valuable the test is for increasing certainty about a negative diagnosis , . LR+ and LR− values close to 1.0 suggest that the test does not distinguish between animals with and without the disease . Kappa was interpreted using a scale as follows: kappa ≤ 0 as indicating no agreement; 0.01–0.20 as none to slight; 0.21–0.40 as fair; 0.41–0.60 as moderate; 0.61–0.80 as substantial; and 0.81–1.00 as almost perfect agreement .
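A worked example of the 2 × 2 formulas defined above, together with Cohen's kappa computed from the same table layout, is given below; the counts are invented for illustration and do not correspond to the study data.

```python
# Worked example of the 2x2 formulas (a = true positives, b = false positives,
# c = false negatives, d = true negatives), plus Cohen's kappa for the agreement
# between the two binary classifications arranged in the same table.

def diagnostic_metrics(a: int, b: int, c: int, d: int) -> dict:
    n = a + b + c + d
    sens = a / (a + c)
    spec = d / (b + d)
    po = (a + d) / n                                      # observed agreement (accuracy)
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # agreement expected by chance
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec) if spec < 1 else float("inf"),
        "LR-": (1 - sens) / spec,
        "accuracy": po,
        "kappa": (po - pe) / (1 - pe) if pe < 1 else 1.0,
    }

# Hypothetical counts: 30 true positives, 4 false positives, 8 false negatives, 60 true negatives.
print({k: round(v, 3) for k, v in diagnostic_metrics(30, 4, 8, 60).items()})
```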
Overlapping and distinct responses to multiple skin and IGRA test antigens
All three tests (DIVA IGRA, DIVA skin and CCT) were concordant in 69% (88/127) of the tested animals at the end of the exposure period (Fig. and Supplementary Table S3). The BCG-vaccinated animals had concordant outcomes for 74% (46/62) of animals using the DIVA IGRA and CCT skin tests. The CCT tuberculin skin and DIVA skin tests showed concordant test responses in 86% (56/65) of controls and 82% (51/62) of vaccinated animals. Between the DIVA IGRA and DIVA skin tests, a similar concordance of 86% (56/65) was observed in unvaccinated controls, while 79% (49/62) was seen in BCG-vaccinated cattle (Fig. and Supplementary Table S3). The bovine minus avian tuberculin IGRA and DIVA IGRA test results were concordant in 71% (46/65) of controls and in 63% (39/62) of vaccinated animals (Supplementary Fig. S3 and Table S4). Diagnostic status based on the DIVA IFN-γ and DIVA skin test formats showed substantial agreement, with observed agreement in 83% of the cases and a kappa value of 0.645 (95% CI 0.51, 0.78), indicating a level of concordance beyond that expected by chance. The bovine tuberculin and bovine minus avian tuberculin IGRA tests and the bovine tuberculin skin test (SIT) were concordant in 70% (89/127) of the animals. In the unvaccinated control group, animals that tested positive for all test antigens except bovine tuberculin IGRA were positive in postmortem tests (lesions, culture, or PCR). However, the bovine tuberculin IGRA test showed positive results in all postmortem test-negative unvaccinated and BCG-vaccinated animals. In BCG-vaccinated cattle, the DIVA skin test, DIVA IGRA test, and bovine minus avian tuberculin IGRA each tested positive in one postmortem test-negative animal. Additionally, the bovine tuberculin skin test was positive in two postmortem test-negative cattle.
Skin test and IGRA responses of experimental calves post BCG vaccination, prior to exposure to infected animals
The DIVA skin test and all tuberculin skin test antigens elicited no positive responses in the unvaccinated control group, demonstrating 100% specificity within this sample (Fig. a; Table ). The DIVA skin test was positive in four vaccinated animals (specificity 94.0% [95% CI 85.2, 98.1]) (Table ). Three DIVA skin test-positive vaccinated calves showed a 2 mm increase in skin thickness, while one showed a 3 mm increase at pre-exposure. However, at the fourth month post-exposure, only one showed a 2 mm increase, while the other three showed no change, with two consistently testing negative throughout the exposure period. No statistically significant difference was observed between the proportions of vaccinated and control animals that responded positively to the DIVA skin test (p = 0.12) (Table ). In the DIVA IFN-γ assay, only one vaccinated calf exhibited a positive response (specificity 98.5% [95% CI 91.3, 100]). Although the ∆OD450 nm value of the positive IFN-γ response in this vaccinated calf was 0.1, which falls on the positive cutoff value of the test (Fig. b), the animal tested negative (∆OD450 nm = 0.00) on the same assay at two months after exposure. In contrast, the proportion of vaccinated animals that responded positively to the tuberculin skin test and IFN-γ assay was significantly greater than that of the controls (p < 0.001), confirming the lack of specificity of the tuberculin antigens (Table ). Normality tests conducted on all measured antigen test responses yielded p values < 0.0001, and visual inspection of the distributions confirmed their non-normality with evident asymmetry, dictating nonparametric statistical tests for comparing responses between treatments and antigens. Eight weeks after vaccination, the BCG-vaccinated calves showed a significant increase in skin fold thickness and IFN-γ response to all tuberculin antigens compared with the unvaccinated controls (Fig. a and b). However, for the DIVA IGRA test, there was no statistically significant difference in IFN-γ production between the two groups (p = 0.79), and the rank biserial correlation coefficient (r) was close to zero (0.03), suggesting a negligible effect of BCG vaccination on the IFN-γ response to the DIVA-specific antigens (Fig. a and b). The Friedman repeated measures nonparametric ANOVA with Bonferroni correction for multiple comparisons, used to compare the responses to different antigens, showed that all tuberculin antigens elicited significantly greater skin and IGRA test responses (p < 0.001) than DIVA antigens in BCG-vaccinated animals. In contrast, in the controls, the DIVA and all tuberculin antigens elicited similarly low skin test responses (Fig. a). In the unvaccinated controls, avian and bovine tuberculin induced a significantly higher IFN-γ response than the DIVA antigens, whereas there was no significant difference between the DIVA and bovine minus avian tuberculin IFN-γ responses (p > 0.05) (Fig. b).
Skin and IGRA responses in experimental calves postexposure to infected cattle
Out of 112 animals with visible lesions, 54 (48%) tested positive with the DIVA skin test, 49 (44%) with the DIVA IGRA, and 49 (44%) with the CCT. Similar positive rates for the DIVA skin, CCT and DIVA IGRA tests were observed in M. bovis culture PCR-positive animals (Table ). These findings suggest consistent and similar accuracy of the DIVA skin, DIVA IGRA, and CCT tests in classifying cattle with M. bovis infection confirmed by culture PCR and cattle with lesions suggestive of disease (Table ). On the other hand, the bovine minus avian tuberculin IGRA and bovine tuberculin (SIT) skin tests demonstrated higher sensitivities, close to 60% or above, both in confirmed M. bovis infection cases and in cattle with lesions (Table ). The Mann-Whitney U test indicated that skin and IFN-γ responses to all antigens were significantly greater (p < 0.05) in animals with visible lesions, except for the IFN-γ response to avian tuberculin (p = 0.69) (Fig. a and b). The rank biserial correlation coefficient (r), which indicates the effect size, showed very large differences for all antigens. Tuberculin antigens elicited significantly greater skin and IGRA responses than DIVA antigens in animals with visible lesions (p < 0.0001) (Fig. a). Friedman repeated measures nonparametric ANOVA, with Bonferroni correction for multiple comparisons, demonstrated a significantly greater skin test response in the SIT (p < 0.0001) and CCT (p < 0.01) tests than in the DIVA skin test, while the skin test responses to avian tuberculin and DIVA were equivalent (p > 0.05). However, there were no significant differences in the skin test responses to any of the tuberculin or DIVA antigens (p = 0.07) among the animals without visible lesions. In contrast, both avian and bovine tuberculin antigens induced significantly greater IFN-γ responses than the DIVA IGRA (p < 0.0001) in the animals without visible lesions, while the bovine minus avian tuberculin and DIVA IGRA test responses were comparable (p > 0.05) (Fig. b).
Sensitivity of skin tests and IFN-γ assays compared against M. bovis culture PCR test
The performance measures of all skin and IGRA test antigens, evaluated at the 12th month postexposure in the BCG-vaccinated and unvaccinated animals, are presented in Table . The findings indicate that the CCT at the standard cutoff (> 4 mm), the DIVA skin test at a cutoff of ≥ 2 mm, and the DIVA IGRA test show comparable levels of accuracy and performance, albeit with an overall relative sensitivity of less than 50%. At the end of the exposure period, out of 47 culture PCR-positive unvaccinated cattle, 25 and 23 had positive skin fold thickness responses to the DIVA and CCT skin tests, respectively, whereas out of the 42 culture PCR-positive BCG-vaccinated cattle, 16 and 17 animals had positive skin reactions to the DIVA and CCT skin tests, respectively (Table ). These results correspond to overall relative sensitivities against culture PCR status of 46% (95% CI 36, 56) for the DIVA skin test and 45% (95% CI 35, 55) for the CCT. The DIVA IGRA test demonstrated an overall relative sensitivity of 47% (95% CI 37, 57). On the other hand, both the bovine and the bovine minus avian tuberculin IGRA tests, together with the SIT skin test, demonstrated comparable accuracy and performance, with significantly greater overall sensitivities of over 60% compared with the CCT, DIVA skin, and DIVA IGRA tests, regardless of the treatment group. Although the relative sensitivities of all tests were slightly greater in the unvaccinated control calves than in the BCG-vaccinated calves, these differences were not statistically significant. The sensitivity of all skin tests and IFN-γ assays was similar, with no statistically significant differences, when evaluated against the combined visible lesions or M. bovis PCR status reference standard (Supplementary Table S2) compared with the M. bovis culture PCR status reference (Table ).
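For illustration, these overall relative sensitivities follow directly from the counts reported above: for the DIVA skin test, (25 + 16)/(47 + 42) = 41/89 ≈ 46% of culture PCR-positive cattle tested positive, and for the CCT, (23 + 17)/(47 + 42) = 40/89 ≈ 45%.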
Correlations between antemortem diagnostic tests of bTB
The correlations between skin and IFN-γ responses to the DIVA and tuberculin antigens were evaluated using the nonparametric Spearman's rank correlation coefficient (ρ). The DIVA skin test response was strongly correlated with the CCT (Spearman's ρ = 0.65, p < 0.0001) and SIT (Spearman's ρ = 0.71, p < 0.0001) tuberculin skin test responses. Similarly, the DIVA IFN-γ test responses were strongly positively correlated with both the bovine minus avian tuberculin (Spearman's ρ = 0.79, p < 0.0001) and the bovine tuberculin IFN-γ test responses (Spearman's ρ = 0.82, p < 0.0001). The correlation between the DIVA skin and DIVA IFN-γ test responses was also strong and positive (Spearman's ρ = 0.73; 95% CI 0.64-0.81, p < 0.0001), as was that between the CCT skin and bovine minus avian tuberculin IFN-γ responses (Spearman's ρ = 0.79; 95% CI 0.71-0.85, p < 0.0001) (see Supplementary Fig. S2). All of the above correlations were statistically significant and had large effect sizes, suggesting that the DIVA and tuberculin antigens, used in either skin or IGRA tests, induce similar immunological responses in bTB-infected animals (Fig. ).
Incorporating BCG vaccination into control strategies has the potential to offer a cost-effective and sustainable method of controlling bTB in regions where the "test and cull" strategy might not be feasible or where persistent wildlife reservoirs pose challenges. Studies show that BCG vaccination reduces bTB pathology and transmission in cattle. BCG, the most widely used vaccine against human tuberculosis, is approved for some wildlife species such as the Eurasian badger. However, deployment of BCG vaccination to control bTB in cattle requires validation of a diagnostic assay with reliable sensitivity, specificity, and DIVA capability. This study evaluated immune responses to tuberculin and DIVA peptide cocktails (ESAT-6, CFP10, and Rv3615c) in IFN-γ and skin tests in 67 BCG-vaccinated and 67 unvaccinated control cattle prior to exposure, and in 65 unvaccinated and 62 BCG-vaccinated cattle after one year of exposure to natural infection. Eight weeks post-BCG vaccination, the DIVA skin and IGRA tests, as well as all tuberculin skin tests, demonstrated 100% specificity in the unvaccinated control calves. In vaccinated calves, the DIVA IGRA and skin tests had specificities of 98.5% and 94.0%, respectively. All calves that tested positive in the DIVA skin or IGRA tests at pre-exposure were BCG-vaccinated and showed responses close to the marginal cutoff values of their respective tests. However, upon retesting at the 4th and 2nd month post-exposure, respectively, all but one tested negative in these same tests. This suggests that the initial marginally positive test responses before exposure were likely false positives. This aligns with previously reported lower proportions of positive DIVA IFN-γ responses in BCG-vaccinated calves and of positive DIVA skin test responses in Gudair-vaccinated calves. However, it is difficult to explain why such erratic positive responses to DIVA antigens in both skin and IGRA tests occur in vaccinated calves. In contrast to DIVA, tuberculin antigens elicited significantly higher skin and IFN-γ responses eight weeks post-BCG in vaccinated cattle. This elevated pre-exposure response is expected, given that the DIVA antigens were specifically selected not to elicit an immune response in BCG vaccinates, in contrast to tuberculin, which contains a broader range of antigens that can cross-react with BCG. Several previous studies have also reported a higher skin thickness response to tuberculin than to DIVA antigens in infected cattle with lesions, and a greater IFN-γ response to tuberculin than to ESAT-6 in M. bovis culture-positive cattle compared with culture-negative animals. A positive bovine minus avian as well as bovine tuberculin IFN-γ response was also observed in unvaccinated control calves at the eight-week post-vaccination time point and in vaccinated cattle without visible lesions at the end of the exposure period. Positive tuberculin IFN-γ responses in unvaccinated controls at pre-exposure may have been due to nonspecific immune responses to environmental mycobacterial exposure. The IFN-γ responses biased towards avian tuberculin suggest sensitization to environmental mycobacteria. This further suggests that the tuberculin antigens lack specificity in regions where animals are likely to be exposed to environmental mycobacteria, leading to immune sensitization. The observed nonspecific positive IFN-γ test responses to tuberculin in unvaccinated and noninfected control cattle in the current study are consistent with previous reports.
Young cattle are reported to have more NK cells and to exhibit nonspecific IFN-γ production in as many as one-third of calves. A strong avian tuberculin-biased response has also been reported in Gudair-vaccinated and noninfected control cattle. The DIVA skin and IFN-γ responses were significantly lower than those to tuberculin; however, the DIVA tests performed comparably to the comparative cervical test (CCT). These three tests showed less than 50% sensitivity in cattle that were confirmed positive for M. bovis by culture PCR or that had lesions indicative of possible infection. In contrast, the bovine and bovine minus avian tuberculin IGRA tests, as well as the SIT skin test, showed significantly higher sensitivities of about 60% and above in the same groups compared with the CCT, DIVA skin, and DIVA IGRA tests. However, these tests are less specific in unvaccinated controls exposed to environmental mycobacteria and in BCG-vaccinated cattle. More than 20% of animals with a positive immune response to all skin and IGRA tests, indicating possible infection, were M. bovis culture PCR-negative. This suggests a lack of sensitivity of mycobacterial culture for detecting potentially infected cattle. All of the calves that tested negative in all postmortem tests (bTB lesions, culture, or PCR), namely three controls and five vaccinates (100%), also tested positive in the bovine tuberculin minus nil IGRA test (Fig. ). This demonstrates the lack of specificity of bovine tuberculin in the IGRA test. This lack of specificity could be due to environmental mycobacterial exposure or to the effect of BCG, which suggests that using bovine tuberculin alone in the IGRA in areas with a high likelihood of exposure to environmental mycobacteria is inappropriate. Previous research reported DIVA skin and IGRA test sensitivities higher than 63% in experimentally and naturally infected tuberculous cattle. Several factors, including regional differences related to herd size, breed, age, stage of disease, repeated tuberculin skin testing, the prevalence of environmental mycobacteria, helminth infections, and pathogens other than mycobacteria causing granulomas, might have contributed to the variation and the lower relative sensitivity observed in our study. Repeated tuberculin skin tests have been reported to cause skin desensitization in naturally infected cattle, leading to a lower overall sensitivity. We also noticed a decrease in the amplitude of the skin test response in the terminal test, which was the fourth repeated test (see Supplementary Fig. S2). Despite regular deworming, 22% of our study animals had an active liver fluke infection at postmortem and showed a lower mean total lesion score than animals without active fasciolosis, suggesting a negative effect of active fasciolosis on lesion severity (p = 0.07). A reduced magnitude of skin response to tuberculin has been reported in cattle co-infected with tuberculosis and liver flukes. In a study conducted in Ireland, animals with fewer than 3.6 lesions at slaughter exhibited no reaction to the tuberculin skin test. Our study found a strong, significant correlation between the DIVA peptide cocktail and tuberculin antigen responses, both in skin fold thickness increase and in ΔOD450 nm values of the IFN-γ responses. This finding agrees with the previously reported high correlation between DIVA- and tuberculin-induced skinfold thickness and IFN-γ OD450 nm.
Although correlation does not indicate accuracy, a significant correlation between the immune responses elicited by DIVA and tuberculin antigens in the skin and IGRA tests suggests that both may measure the same underlying condition. The DIVA skin or DIVA IFN-γ tests used in combination with the CCT yielded concordant test outcomes in approximately three-quarters or more of the same individual control and vaccinated animals. The similar responses to these three tests, regardless of vaccination status, further suggest that the tests are consistent across different treatment groups and have comparable performance in measuring the same underlying disease status. The limitations of this study include the use of diagnostic data collected to evaluate the efficacy of BCG in a natural transmission experiment that was not specifically designed for diagnostic test evaluation. Although Bayesian latent class analysis is recommended for evaluating measures of diagnostic performance when a gold standard is not available, we used culture PCR test status as a reference standard to calculate the relative sensitivity. Convergence of the standard Walter-Hui model for latent class analysis of diagnostic tests depends on having a set of at least two diagnostic test results for individuals from two populations with distinct prevalences; this requirement is not met by our data, limiting the value and reliability of these methods in this context. The sensitivities of all the tests were not significantly different when using M. bovis culture PCR test status as a reference standard compared with using combinations of visible lesions and M. bovis culture PCR status as a reference (Table , Supplementary Table S3, Table S5, Supplementary Fig. S4). This suggests that the performance of these tests in detecting positive cases remains consistent, regardless of whether the reference standard is culture PCR or a combination of visible lesions and culture PCR test results. However, both reference standards are imperfect and cannot be considered true gold standards, since lesions are nonspecific and culture is less sensitive. Test sensitivity can also vary across epidemiological contexts, influenced by breed, immunological status, co-infections, and regional strain differences. Therefore, the relative sensitivities of the tests reported in this study may not accurately reflect the performance of the tests in the field under varying epidemiological conditions. Overall, our results confirm that DIVA antigens can be used to detect tuberculosis infection in BCG-vaccinated cattle with high specificity and with sensitivity comparable to that of the CCT. Thus, the DIVA antigens used in both IGRA and skin tests hold promise for practical implementation alongside BCG vaccination. This approach is particularly relevant in areas where the test-and-slaughter strategy is not economically feasible and no alternative bTB control option is implementable, as in resource-poor countries such as Ethiopia. In general, these findings underscore the low sensitivity of the current antemortem bTB diagnostic tests and the need for more sensitive diagnostic tests, which are crucial for effective bTB control. In a population with a high prevalence of bTB, a test and slaughter strategy using low-sensitivity tests is not expected to be effective in controlling the disease; both the mean and median bTB prevalence exceed 30% in large and medium-sized herds in Central Ethiopia.
In some high-income countries, when a high prevalence of bTB is observed within a herd, all animals in the infected herd are culled. In countries with lower bTB prevalence and more financial resources, total culling may be feasible and effective in eradicating the disease. However, in Ethiopia, the combination of high bTB prevalence, economic limitations, and logistical constraints makes total culling impractical. Culling the entire herd presents several significant challenges, including difficulties in obtaining replacement herds, farmers' reluctance to cull high milk-producing cows, and the financial constraints associated with compensating for the loss of numerous dairy cattle. Given these challenges, vaccination presents a more viable and economically feasible alternative to total culling. Therefore, BCG vaccination of newborn calves and testing of replacement livestock before purchase should be considered important alternative measures to prevent the transmission and spread of the disease. Farmers are also more likely to accept and participate in vaccination programs than in culling.
Diagnostic therapeutic care pathway for pediatric food allergies and intolerances in Italy: a joint position paper by the Italian Society for Pediatric Gastroenterology Hepatology and Nutrition (SIGENP) and the Italian Society for Pediatric Allergy and Immunology (SIAIP)
Based on a consensus of the European Academy of Allergy and Clinical Immunology (EAACI), subsequently revisited by the American Gastroenterological Association (AGA), adverse food reactions can be classified into toxic and non-toxic. Toxic food reactions affect every individual and depend on the amount of ingested food contaminated by toxic substances, which can be contained in food naturally or may be produced following its handling (e.g., mushroom poisoning, gastroenteritis from bacterial toxins contained in spoiled foods). The DTCP focuses on non-toxic adverse food reactions, linked to individual susceptibility to certain foods, which from a pathogenetic point of view are divided into food allergies (FA), which are immune-mediated, and food intolerances (FI), which are non-immune-mediated. Food allergies can in turn be divided mainly into two categories: IgE-mediated forms, attributable to an initial sensitization process to certain proteins against which the immune system develops IgE-class antibodies responsible for acute symptom onset (usually within 2 hours of food intake); and non-IgE-mediated forms, characterized by the involvement of humoral and/or cellular components, with a later onset (a few hours to a few days after food intake) and with pathophysiological mechanisms not yet fully defined. In some cases the clinical picture may be attributable to a mixed IgE- and non-IgE-mediated mechanism (mixed forms). Food intolerances are non-immune adverse reactions to food. These reactions can be attributable to an enzymatic defect (e.g., lactose intolerance), to alteration of transport mechanisms (e.g., fructose intolerance) or to other mechanisms (intolerance to fermentable oligo-, di-, monosaccharides and polyols, FODMAPs; reactions secondary to the ingestion of vasoactive amines or additives contained in food).
FAIs are among the most common chronic conditions in the pediatric age and are recognized as a global health problem. The epidemiology of FA has changed during the last two decades, with a dramatic increase in prevalence and in the severity of clinical manifestations, leading to an increase in hospital admissions, medical visits, treatments, burden of care on families and economic impact, with significant direct costs for families and the healthcare system. According to Italian National Institute of Statistics (ISTAT) data obtained from sample surveys updated to 2016, the proportion of subjects suffering from allergic diseases in Italy showed an increasing trend, from 9.8% in 2010 to 10.7% in 2016, preferentially involving subjects up to 18 years of age. In Italy, during the last 20 years there has been an increase of over 400% in the number of visits to the Emergency Department (ED) due to food-induced anaphylaxis. Furthermore, according to the data released by the Italian Ministry of Health updated to 2017, about 1,800,000 subjects in Italy suffer from FA, and it is estimated that about 50% of pediatric cases are non-IgE-mediated forms, while lactase deficiency affects on average 40-50% of the population. This progressive increase reflects the European and global situation. Cow's milk allergy (CMA) is one of the most frequent FAs in children, with a prevalence ranging from 2 to 6% in Europe. In Italy, according to ISTAT, the estimated costs of CMA are about 20 million euros per year, to which about 52 million euros should be added deriving from the use of special formulas and cow's milk protein-free foods. Over the years there has also been an increase in the prevalence of "new" clinical manifestations of FAIs of pediatric gastroenterology interest (such as eosinophilic disorders of the gastrointestinal tract or Food Protein-Induced Enterocolitis Syndrome, FPIES) that require complex, multidisciplinary diagnostic-therapeutic planning.
The signs and symptoms that can raise the suspicion of FA or FI are multiple. They depend on the type of pathogenetic mechanism (FI vs IgE- or non-IgE-mediated or mixed forms of FA), the food involved, the quantity ingested, the food preparation method (thermolabile and thermostable allergens), the type of allergens contained, the mode of exposure (concomitant intake of drugs or physical exercise), specific host factors (age, eating habits, degree and type of sensitization, presence of other allergic diseases) and the presence of any concomitant comorbidities (disorders that cause intestinal mucosa damage are to be considered conditions favoring sensitization and allergic reactions). Some clinical pictures (anaphylaxis, angioedema, asthma, urticaria), particularly when they arise acutely (generally within 2 hours) after food contact or ingestion, strongly evoke the suspicion of IgE-mediated FA. An exception is FPIES, a non-IgE-mediated FA characterized by an acute clinical picture (repeated vomiting that occurs within 1-4 hours after ingestion of the suspected food, accompanied by other symptoms such as lethargy, pallor, hypotension, diarrhea and hypothermia) in the absence of the skin or respiratory symptoms typical of IgE-mediated FA (Table ). Other clinical manifestations, if chronic or subacute and related to the gastrointestinal system, are not pathognomonic and can be symptoms of non-IgE-mediated FA, FI or other chronic conditions (functional gastrointestinal disorders, inflammatory bowel diseases, neoplasms, chronic infections). Thus, in some cases, the suspicion of FA or FI can be raised only after excluding other possible diseases in the differential diagnosis. Toxic forms include scombroid syndrome, in which inadequately preserved fish contains large amounts of histamine, derived from the bacterial metabolism of the amino acid L-histidine in fish muscle, which may cause urticarial rash, systemic symptoms (headache, tachycardia, hypotension) and gastrointestinal symptoms including nausea, vomiting, diarrhea and abdominal pain. Finally, pharmacological forms, which occur because the ingested food contains substances with pharmacological-like activity, may determine the onset of gastrointestinal symptoms. An example is intoxication by glycoalkaloids (including α-solanine) contained in potatoes: in particular conditions or in the presence of excessive ripening, the accumulation of these substances can cause a clinical picture with vomiting, severe diarrhea, abdominal pain and other systemic symptoms, due to the inhibition of acetylcholinesterase. Table shows the main symptoms and clinical entities of pediatric FA, and Table the main clinical features of the different forms of non-IgE-mediated FA.
The initial approach by healthcare professionals operating in the ED and/or by the Family Pediatrician (FP) is relevant for evaluating the potential indication for a specialist consultation. The ED physician and/or FP should perform an anamnestic evaluation of the patient concerning:
main clinical features;
recurrence of the clinical manifestations;
time between food intake and symptom onset;
duration of symptoms;
mode of symptom resolution (spontaneously or after therapy);
type, quantity, and cooking method of foods taken in the 24 hours before the onset of the clinical picture;
concomitant intake of drugs (painkillers, antibiotics, etc.);
relationship with other conditions (gastrointestinal system, etc.);
relationship with physical activity after a meal;
presence of similar symptoms in other diners;
personal or family medical history of FA.
If the anamnestic evaluation leads to a suspected diagnosis of FAI, the ED physician and/or the FP should request a visit to a tertiary center.
First level tests
Skin allergy tests (skin prick tests, SPT) are allergy screening tests commonly used to identify the presence of specific IgE for food allergens; they are performed by placing a drop of the allergen extract on the skin (usually the volar portion of the forearm) and then pricking the skin, through the drop, with a metal lancet with a 1-mm disposable tip. The tests are evaluated 15 minutes after application; wheals at least 3 mm in diameter larger than the negative control are considered positive. Food allergens are composed of several molecules, some stable to heat, storage and digestion, and others less stable, whose allergenicity can be significantly reduced when the food is exposed to high temperatures. The allergenic extracts for SPT containing proteins stable to heat and gastric digestion, such as casein from cow's milk and ovomucoid from egg, have a high negative predictive value. Allergenic extracts for SPT of other foods, such as vegetables, have a low negative predictive value, as they contain thermolabile molecules such as profilins. For these allergens it may be useful to use the "prick by prick" technique with fresh food. In the case of symptoms suggestive of FA and positive SPT, the diagnosis is certain if the clinical picture is compatible with anaphylaxis, while it is only probable in all other cases. To obtain a definitive diagnosis it is necessary to perform an oral food challenge (OFC) after a diagnostic elimination diet. On the other hand, if the skin test results are doubtful, or negative in contradiction with the clinical history, it is possible to proceed with the specific serum IgE assay and with an elimination diet followed by a diagnostic OFC (second level diagnosis).
Second level tests
The measurements of total and food-specific serum IgE are considered second level tests. The determination of total serum IgE is not helpful for the diagnosis of FA. In contrast, the evaluation of food-specific serum IgE can be helpful in the diagnostic approach when the SPT results are doubtful, or negative in contradiction with the clinical history, or when it is not possible to perform the SPT (inability to suspend antihistamine or steroid treatment, presence of skin lesions, dermographism).
Third level tests
The assay of IgE antibodies against the individual allergenic molecules of a food, also called component-resolved diagnostics (CRD), allows the sensitization profile of each patient to be established and clarifies whether there is primary sensitization to food allergens and/or secondary sensitization to cross-reactive molecules (panallergens). CRD has prognostic value and can potentially contribute to the identification of reaction severity. Furthermore, the CRD approach can be helpful in subjects with false-positive specific serum IgE. This can be the case in polyclonal IgE activation with high total serum IgE levels. It may also be due to specific IgE against cross-reactive carbohydrate determinants (anti-CCD IgE), which do not cause clinical symptoms but can determine an increase of specific serum IgE levels for foods or environmental allergens. Therefore, the CRD approach has various implications in clinical practice, especially when sensitization to multiple foods or between foods and inhalant allergens occurs. These processes are characterized by cross-reactivity between allergenic molecules with high levels of homology, expressed by different foods and/or inhalant allergens (such as plant foods and pollen, or crustaceans and mites), also called panallergens. The main families of panallergens, responsible for the so-called "pollen-food syndrome", are the following:
Profilins are molecules that are generally inactivated by heat and proteolysis. They are also called Bet-v2 homologous molecules, after the birch profilin. They are usually responsible for oral allergic syndrome (itching of the lips, tongue, palate, ears and throat and/or angioedema of the same sites) induced by the ingestion of raw foods. Usually, the primary sensitization is towards grasses or birch; this family includes Mal-d4 of apple, Pru-p4 of peach and Heb-v8 of latex. An exception is the celery profilin (Api g 4), which is heat resistant and can consequently cause symptoms even with cooked food.
Pathogenesis Related Proteins-10 (PR-10) are molecules partially inactivated by heat and proteolysis, usually responsible for oral allergic syndrome or for non-serious systemic reactions. They are also defined as Bet-v1 homologous molecules, and the primary sensitization is generally towards Bet-v1 (the PR-10 of birch); this family includes, for example, Pru-av1 of cherry and Pru-ar1 of apricot. The only PR-10 known to cause systemic reactions is Gly-m4 from soy.
Lipid Transfer Proteins (LTP) are molecules stable to heat and proteolysis and responsible for systemic reactions; they are also defined as Pru-p3 homologous molecules, after the LTP of peach, which contains almost all the epitopes of the LTPs present in nature. This family includes Mal-d3 of apple and Art-v3 of mugwort; typically they are found in the peel of Rosaceae, in almonds and in the edible seeds of kiwi and tomatoes.
Seed storage proteins are allergens resistant to heat and digestion, are associated with a high risk of serious reactions and are present in legumes, nuts, and seeds, with partial cross-reactivity between the different species. This family includes Ara-h1, 2 and 3 of peanuts, Cor-a 9, 11 and 14 of hazelnuts, Jug-r 1, 2 and 4 of walnut, and Gly-m 5 and 6 of soy.
Serum albumins are cross-reactive proteins present in mammals, responsible for allergic reactions, even systemic, to meat (e.g., the "cat-pork syndrome", due to homology between Fel-d2 of the cat and pig serum albumin) and to milk (e.g., the "milk-beef syndrome", due to the cross-reactivity between Bos-d6 of milk and bovine meat).
Parvalbumins are the major allergens of fish with bones.
They are heat- and digestion-resistant proteins responsible for severe systemic reactions. Sensitization generally occurs by ingestion but can also occur by contact or by inhalation of proteins in aerosols generated during cooking or food processing; examples of parvalbumins are Gad-c1 from cod and Cyp-c1 from carp.
Tropomyosins are the major allergens of crustaceans. They are heat- and digestion-resistant proteins and are associated with a risk of severe allergic reactions. They show high cross-reactivity within the phylum Arthropoda. The main members of this family are Der p 10 of the dust mite and Pen-a1 of the shrimp.
Galactose-alpha-1,3-galactose (alpha-Gal) is the allergen responsible for allergic reactions, including anaphylaxis, with an IgE-mediated mechanism but delayed onset (after 2-8 hours), following the ingestion of red meat or gelatin. It is an oligosaccharide resistant to industrial treatments; being slowly absorbed with the lipids of red meat, it reaches the bloodstream through the thoracic duct, thus triggering delayed reactions, even severe ones.
Gibberellin-regulated protein (GRP) has been identified as a trigger allergen in patients with peach allergy (Pru p 7) and cypress sensitization (cypress contains a homologous GRP, Cup s 7). The clinical picture can vary from oral allergic syndrome (OAS) to anaphylaxis. GRP is contained in many fruits, such as orange (Cit s 7), Japanese apricot (Pru m 7), pomegranate (Pun g 7) and kiwi.
Most egg allergens are found in egg white: Gal d 1 (ovomucoid) is the main allergen, a thermo- and pepsin-resistant protein and a marker of possibly severe allergic reactions; persistent specific IgE towards this component is associated with a greater risk of allergy persistence in adulthood and of subsequent sensitization to inhalants. Gal d 2 (ovalbumin) is partially thermostable and well digested at very low pH; Gal d 3 (ovotransferrin) is thermolabile and partially cross-reacts with chicken serum albumin; Gal d 4 (lysozyme) is thermolabile on cooking over 80 °C for at least 2 minutes and is often a hidden allergen because it is used as an additive for its bacteriostatic action. The main specific allergens of the yolk are Gal d 5 (livetin, with possible cross-reactivity with chicken livetin) and Gal d 6 (detected in many patients with yolk allergy). The main allergens of cow's milk are Bos d 4 (alpha-lactalbumin), Bos d 5 (beta-lactoglobulin) and Bos d 6 (whey albumin). They are all thermolabile proteins; therefore, many food-allergic children may tolerate these antigens in cooked food. Bos-d6 represents the major allergen of beef and is thus associated with the risk of adverse reactions to raw red meat. Bos-d8 (casein) is a thermostable protein, potentially related to severe clinical reactions; it represents a potential hidden allergen because it is used as an additive in the food industry, and it also has a high homology with sheep and goat caseins. Each food may therefore contain allergenic molecules belonging to different families and, according to the sensitization profile, it can induce reactions ranging from mild to severe, such as anaphylactic shock. For example, the allergens Pru-p1 (PR-10), Pru-p3 (LTP) and Pru-p4 (profilin) are present in peaches. Therefore, a patient with peach allergy may show very different clinical reactions: from mild reactions such as oral itching, typical of sensitization towards profilins, to more severe reactions up to anaphylactic shock, possible in the case of LTP sensitization.
On the other hand, even if panallergens, such as the LTPs contained in different foods, have important structural homologies, this does not necessarily imply that cross-sensitization leads to clinical cross-reactivity. CRD can be performed through the quantitative dosage of the single available molecules (singleplex) or through pre-packaged microarray panels, which are more frequently semi-quantitative tests. The multiplex tests allow specific reactivity towards numerous allergenic molecules, from inhalants to foods, to be detected. These tests can be useful in patients with polysensitization, allowing the identification of the molecular allergen profile, in subjects with idiopathic anaphylaxis, where it is not possible to identify the causative food, or when the molecule is not available for individual measurement. The costs, and the need for continuous updating on the list of available molecules, their characteristics and the interpretation of results, place molecular diagnostics in a strictly specialized field. The Atopy Patch Test (APT) is a simple, safe, and low-cost test, potentially useful in the diagnostic approach to children with suspected non-IgE-mediated FA. It is carried out by applying a drop of fresh food (equal to 50 μl) (e.g., whole cow's milk, egg, or wheat powder dissolved in water or saline: 1 g/10 ml) using a hypoallergenic patch containing a 12 mm aluminum well on which an allergen-absorbent cellulose disc is laid. Double-distilled water is preferable as a negative control. The test must always be interpreted as part of a careful evaluation of the clinical history, the response to the elimination diet and the result of the OFC. However, due to the lack of standardization and the variability of the results reported in the literature, there is no consensus on the possibility of using the APT alone in the diagnosis of FA. The oral food challenge represents the "gold standard" for the diagnosis of FA, both for the IgE- and the non-IgE-mediated forms. The OFC should be performed in all cases of suspected FA. Exceptions are patients with symptoms suggestive of FPIES (Table ) after the intake of a single food, and patients with clinical symptoms typical of a severe IgE-mediated FA (anaphylaxis) arising after the intake of only one type of food and with positive SPT. In these cases, it is preferable to postpone the OFC to avoid the onset of severe reactions. In all cases, before the OFC it is necessary to explain to both the child and the parents the cost/benefit ratio of the test, and to acquire written informed consent. The OFC must be performed in a hospital setting with confirmed experience with this procedure and with emergency medication and resuscitation equipment readily available. In clinical practice, the OFC is commonly performed in an open fashion. This constitutes the simplest way to perform the OFC, because both the medical doctor and the patient (and family members) are aware of which food is administered. The open OFC is reliable mostly when the patient is aged less than 3 years and an immediate reaction with objective symptoms is expected. In some situations, to prevent the emotional component from affecting the genesis and/or evaluation of symptoms, blinded tests are used. Blinded OFCs can be divided into single-blind (only the medical doctor knows which food is administered) and double-blind (neither the medical doctor nor the patient is aware of which food is administered), in which the allergenic food is administered mixed with other foods, so as not to be recognized.
The food is administered gradually at increasing doses every 15-30 minutes, depending on the protocol, up to the maximum dose, which generally corresponds to the usual daily ration of that food. An exception is FPIES, in which it has been proposed to administer the total dose of food divided into 3 equal parts, at intervals of 30 minutes. Usually, the total dose to be administered is calculated by multiplying 0.15-0.3 g of the suspected allergenic protein per kg of the patient's body weight. The maximum dose should not exceed 3 g of total food protein and/or 100 ml for liquids. The OFC is stopped at the onset of the first objective symptoms, or even of subjective symptoms if repeated, in order to stop the allergic reaction. The OFC is considered positive when clear objective symptoms of a possible allergic reaction arise, or if severe and persistent subjective symptoms (e.g., abdominal pain) occur and are repeated at least 3 times. In the case of FPIES, the OFC is considered positive if, in addition to the major criterion, at least 2 minor criteria occur (Table ). In some cases, the physician may consider the OFC positive if just the major criterion occurs. The OFC is considered negative if no symptoms occur within 2 hours of the assumption of the full dose of food in the IgE-mediated forms, and within 6 hours in acute FPIES. It should be noted that the symptoms of non-IgE-mediated forms may occur even a few days after the start of food administration; for this reason, the OFC is considered negative if no symptoms occur in the following 7-10 days, while administering the suspected food at home. Finally, the OFC is defined as inconclusive (or conclusive only for partial tolerance) if it is suspended before the assumption of the total dose of food and no symptoms have occurred.
Gastroenterological tests
A clinical picture characterized by gastrointestinal symptoms (diarrhea, blood and/or mucus in the stool, abdominal pain, retrosternal burning, regurgitation, vomiting, dysphagia, failure to thrive, etc.), in addition to the screening allergy tests, may require the help of gastroenterological tests (endoscopy with histological examination, pH-impedance analysis, manometry, abdominal ultrasound evaluation) to exclude and/or confirm mixed or non-IgE-mediated gastrointestinal FA conditions. In case of symptoms compatible with eosinophilic esophagitis (EoE) (vomiting, regurgitation, dysphagia, food bolus impaction, retrosternal burning, etc.), in the presence or absence of peripheral eosinophilia (> 700 cells/mm 3 ), it is necessary to carry out esophagogastroduodenoscopy (EGDS), performing at least 4-6 esophageal biopsies including the proximal and distal parts of the esophagus. The diagnosis of EoE is based on the presence of symptoms of esophageal dysfunction and of an eosinophilic infiltrate in the esophageal mucosa on histological examination (> 15 eosinophils in at least one high power field, HPF) which persist after at least 8 weeks of treatment with a proton pump inhibitor (1-2 mg/kg/day), once other causes of esophageal eosinophilia (gastroesophageal reflux disease, infectious esophagitis, achalasia, celiac disease and Crohn's disease, connective tissue disorders, graft versus host disease, hypersensitivity to drugs and hypereosinophilic syndromes) have been excluded. In case of symptoms characterized by abdominal pain, diarrhea, or blood in the stool, in the presence or absence of peripheral eosinophilia, an endoscopy of the upper and/or lower intestinal tract with multiple biopsies is necessary.
The finding of gastric eosinophilia (≥ 30 eosinophils/HPF in at least 5 HPF) and/or eosinophilia of the small intestine (duodenum: 50/HPF; jejunum: 2 × 26/HPF or 52/HPF; ileum: 2 × 28/HPF or 56/HPF) or of the colon (cecum and ascending colon: 2 × 50/HPF or 100/HPF; transverse and descending colon: 2 × 42/HPF or 84/HPF; rectosigmoid: 2 × 32/HPF or 64/HPF) confirms the diagnosis of eosinophilic gastritis, gastroenteritis or colitis, respectively, after the secondary causes of intestinal eosinophilia (infectious enteritis, celiac disease, Crohn's disease, connective tissue disorders, graft versus host disease, drug hypersensitivity, tumors of the blood compartment, immune dysregulation-polyendocrinopathy-enteropathy X-linked syndrome (IPEX) and hypereosinophilic syndromes) have been excluded. Performing multiple biopsies of the esophageal and gastrointestinal tract (4-6 biopsies per segment) is essential for a correct diagnostic approach, as eosinophils are focally distributed in these disorders; for this reason, a histological section can be falsely negative. The other non-IgE-mediated forms of FA do not generally require instrumental tests, except in cases of symptoms persisting after 2-4 weeks of a food elimination diet, to exclude other pathologies in the differential diagnosis. Table shows the diagnostic criteria for eosinophilic disorders of the gastrointestinal tract. Figure shows the diagnostic algorithm for the child with suspected FA.
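As a purely illustrative example of the OFC dosing rule described above (0.15-0.3 g of allergenic protein per kg of body weight, up to a maximum of 3 g of food protein), a 15 kg child would receive a cumulative dose of approximately 2.25-4.5 g of protein, which would be capped at the 3 g maximum.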
Currently, there is no drug therapy capable of preventing the allergic reaction after the ingestion of the offending foods. The standard and most effective FA treatment, once a FA is diagnosed, is the strict avoidance of the offending food or foods, both as such and as a constituent of other foods. This strategy is simple when the food is not commonly consumed in the subject's diet and does not have a high nutritional value. The food elimination diet becomes more complex if the offending food is very commonly consumed in the subject's diet and has a high nutritional value. Therefore, the certain identification of the offending food is an important goal, to avoid both life-threatening reactions and the nutritional imbalances induced by an inappropriate elimination diet. Sometimes the patient can assume the offending food involuntarily, because it is a constituent of other foods; therefore, adequate nutritional counseling plays a fundamental role in this setting. Prescribing a nutritionally adequate and medically safe diet is a focal point of FA management. In the nutritional management of children with FA it is important to provide adequate amounts of nutrients (carbohydrates, lipids, proteins, and micronutrients), to ensure adequate growth, and to allow, as much as possible, a normal social life. Therefore, the role of dieticians in the elaboration of dieto-therapeutic schemes is fundamental in the multidisciplinary management of pediatric patients with FA. Among the pharmacological products for symptom relief, antihistamines play an important role in case of symptoms of oral allergic syndrome or urticaria/angioedema. Oral corticosteroids are generally effective in treating both IgE-mediated and non-IgE-mediated forms and, along with antihistamines, should always be available to the parents of a child with FA. In case of more severe reactions, the severity of the clinical picture requires treatment aimed at maintaining vital functions. In case of anaphylaxis, the first-choice drug is adrenaline, which must be administered by deep intramuscular injection into the vastus lateralis of the thigh at a dose of 0.01 mg/kg of body weight, up to a maximum of 0.5 mg in the child, repeatable after 5 minutes in case of persistence or worsening of symptoms. All patients with FA and a history of anaphylaxis must be equipped with pre-dosed adrenaline self-injection devices to allow prompt intervention directly by the patient or by a family member (e.g., parents) in case of an allergic reaction. Each patient and caregiver must be instructed on the use of self-injectable adrenaline and on the procedures to be implemented in case of anaphylaxis, immediately alerting the emergency services. The pharmacological products used for the management of non-IgE-mediated gastrointestinal FA or mixed forms are proton pump inhibitors and/or topical or systemic steroids in case of eosinophilic pathologies of the gastrointestinal tract, and rehydration, antiemetics (ondansetron) and corticosteroids in FPIES. The elimination diet of the offending food(s) remains the only current strategy in the other non-IgE-mediated gastrointestinal FA forms. Oral immunotherapy (OIT) is a potential treatment for IgE-mediated FA, in particular for milk and egg, aimed at increasing the oral tolerance threshold and/or inducing or accelerating the process of oral tolerance to the food; this procedure can be conducted in highly specialized centers after obtaining the informed consent of parents and children.
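As a purely illustrative example of the adrenaline dosing rule given above (0.01 mg/kg by deep intramuscular injection, up to a maximum of 0.5 mg), a 25 kg child would receive 0.25 mg, whereas a child weighing more than 50 kg would receive the 0.5 mg maximum rather than the higher calculated dose.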
Oral tolerance to the offending food is naturally reached with growth in over 90% of children affected by cow's milk protein and egg allergies, while IgE-mediated FA to fish, mollusks, crustaceans, and nuts resolves in less than 20% of pediatric patients and tends to persist throughout life in most cases.
Carbohydrate intolerances (CI) are the most common form of FI. Symptoms are mainly due to the lack of enzymes or transporters, or to the overload of transport systems in the intestinal epithelium. The non-absorbed carbohydrates draw fluids into the intestinal lumen by osmosis, causing osmotic diarrhea, and undergo intestinal bacterial fermentation with gas production and consequent distension and abdominal pain, flatulence, nausea and increased intestinal motility. Extra-intestinal symptoms, such as headache, dizziness, memory disturbances and lethargy, may rarely occur. In some cases the symptoms can occur in the first stage of life with a very severe gastrointestinal picture, and an early diagnosis, including genetic testing, is needed.
Glucose-galactose malabsorption
It is a very early onset food intolerance characterized by diarrhea and severe dehydration starting from the neonatal period. Modest glycosuria is also present, while fructose absorption is normal. Glucose-galactose malabsorption is due to a mutation in the SLC5A1 gene, which encodes the sodium-glucose co-transporter SGLT1. Transmission is autosomal recessive. A molecular diagnosis of the condition by specialized centers is possible. An elimination diet excluding galactose and glucose quickly resolves the symptoms.
Lactose intolerance
Lactose intolerance is the most frequent form of CI in children and is characterized by the inability to digest lactose due to the lack or deficiency of the lactase enzyme, responsible for the digestion of lactose into glucose and galactose. Based on the etiology, lactose intolerance can be classified into three main forms:
Congenital lactase deficiency: a rare autosomal recessive disease in which enzyme activity is absent or reduced from birth.
Secondary lactase deficiency: a common consequence of mucosal diseases such as bacterial proliferation in the small intestine, infections, celiac disease, Crohn's disease or radiation enteritis.
Adult-type hypolactasia (also known as lactase non-persistence): an autosomal recessive condition resulting from a mutation in the product of the lactase gene, responsible for the reduced synthesis of the precursor protein.
Medical history and the lactose breath test are the main tools for the diagnosis of lactose intolerance. The management consists in the exclusion from the diet of foods containing lactose (see Table , column concerning lactose). In adult-type hypolactasia, foods containing lactose are generally excluded for 2-4 weeks, which is the time usually needed for symptom resolution. Subsequently, a gradual reintroduction of these foods is carried out until the tolerated dose is reached. In secondary lactase deficiency, the lactose elimination diet is required only for a limited time. Literature data suggest that adults and adolescents with lactose intolerance can take up to 12 g of lactose in a single dose (corresponding to a cup of milk) with no or only minor symptoms. Subjects with lactose intolerance may be at risk of reduced calcium intake, and supplementation may be required in accordance with current recommendations.
Sucrose-isomaltose malabsorption
It is secondary to congenital sucrase-isomaltase deficiency (CSID), characterized by abnormal absorption of oligosaccharides and disaccharides. Breastfed infants, or infants fed with formulas containing exclusively lactose, are asymptomatic. Symptoms such as watery osmotic diarrhea, abdominal distension and vomiting occur when sucrose or starch dextrins are introduced into the diet. Symptom severity can cause stunting, dehydration, and malnutrition. CSID is inherited as an autosomal recessive trait and is due to mutations of the sucrase-isomaltase (SI) complex, which is necessary for the digestion of sucrose and starch into monosaccharides at the enterocyte apical membrane. The diagnosis is based on the presence of osmotic diarrhea and is confirmed by a positive sucrose breath test. In specialized centers, a molecular diagnosis of the condition can also be performed. The management consists of a low-sucrose and low-starch diet. The prognosis of sucrase-isomaltase deficiency is good, because the starch intolerance resolves during the first years of life and the sucrose intolerance usually improves with age.
Fructose is a six-carbon monosaccharide naturally present in fruits, vegetables, and honey. High fructose syrup (HFC) can be obtained in the food industry through the enzymatic hydrolysis of corn starch, and it is used as a sweetener in soft drinks, candies, and fruit juices. Fructose malabsorption should not be confused with hereditary fructose intolerance (HFI), in which the lack of the aldolase B enzyme leads to an accumulation of fructose-1-phosphate in the liver, kidney and intestines, causing hypoglycemia, nausea, swelling, abdominal pain, diarrhea and vomiting. The specific mechanism responsible for fructose malabsorption is not yet known, but this disorder may be secondary to intestinal damage (e.g., induced by diseases such as celiac disease). The diagnosis of fructose malabsorption can be performed through the hydrogen breath test after an oral fructose load, although some studies have shown a high percentage of false negative results for this test. Management of fructose malabsorption is based on a daily dietary fructose intake of less than 10 g and on the exclusion of alcoholic beverages from the diet .
Sorbitol is a sugar naturally present in fruit and fruit juices and it is also used in commercial products such as drugs, sweets, dietetic foods and chewing gum. The hydrogen breath test after oral sorbitol load is effective in identifying this condition. The main therapeutic approach is characterized by a reduced content of sorbitol in the diet .
Fermentable oligosaccharides, disaccharides, monosaccharides, and polyols (FODMAPs) are a group of short-chain carbohydrates that are poorly absorbed in the intestine. These highly osmotic substances are fermented by the intestinal bacteria and can evoke the onset of gastrointestinal symptoms by distension of the lumen or by direct action on the intestine through mechanisms that are not well defined. The therapeutic approach to FODMAPs intolerance is based on the exclusion of foods containing FODMAPs from the diet (Table ). Considering the large number of foods containing FODMAPs and the fact that the FODMAPs tolerance threshold may differ between subjects, the exclusion diet must be carefully tailored by an experienced nutritionist based on clinical history and the result of the hydrogen breath test. A FODMAPs-free diet is usually recommended for 4–6 weeks. After this period, patients are invited to try to reintroduce one or more FODMAPs-containing foods, to confirm their intolerance and/or to assess the FODMAPs tolerance threshold .
Non-celiac gluten sensitivity (NCGS) is a syndrome characterized by persistent gastrointestinal and/or extraintestinal symptoms related to the ingestion of gluten-containing foods that resolve after a gluten-free diet, in subjects who are not affected by either celiac disease or wheat allergy. Symptoms generally appear within hours or days after the ingestion of gluten-containing foods and disappear just as quickly with the start of a gluten-free diet . In the pediatric age group, NCGS is rare and mainly affects males. The most frequent symptoms are abdominal pain and chronic diarrhea, but vomiting, constipation, bloating, poor growth, asthenia and headache may also be present. Extraintestinal symptoms (chronic fatigue, joint and muscle pain, headache, depression, “foggy mind”, eczema, anemia) are mostly reported in adulthood. The pathogenetic mechanism is unknown. In one third of patients, NCGS is associated with intolerances to other foods (especially lactose intolerance), and in 20% with IgE-mediated inhalant allergies. The diagnosis is mainly clinical and requires the exclusion of celiac disease and wheat allergy. The diagnostic gold standard is the clinical benefit from the gluten-free diet, followed by a double-blind placebo-controlled gluten challenge. In the period prior to the gluten-free diet, the subject must follow a gluten-containing diet for at least 6 weeks; the subsequent gluten-free diet must be strict, to avoid any contamination, and last at least 6 weeks. The daily dose of gluten to be taken in the double-blind placebo-controlled gluten challenge is approximately 8 g (with 0.3 g of amylase-trypsin inhibitors, ATI) in a FODMAPs-free vehicle. The placebo must be completely gluten-free. The first test phase should last at least 7 days, followed by a washout period of 7 days, and a second test phase of at least 7 days. Management of NCGS is based on a gluten-free diet; there is no indication to eliminate possible gluten contamination in foods, as is instead necessary in celiac disease .
As depicted in Fig. , the healthcare professionals should make their expertise available to the patients in a circular continuum of activities. The first step is the appropriate recognition of the different forms of FAIs based on the correct evaluation of the anamnestic and clinical features of the patient. The onset of symptoms, with varying degrees of severity, can be acute, chronic, episodic, or recurrent. The patient with severe acute FAI-induced symptoms is commonly observed by the physician operating in the ED. At the ED the patient should receive rapid recognition of the disorder and adequate treatment to obtain faster symptom resolution. After full stabilization of the symptoms, the physician operating at the ED can refer the patient to the FP or to the tertiary center for the diagnosis and treatment of FAI (protected outpatient pathway). In any case, at discharge, all patients should receive clear indications regarding home therapy, along with indications on the elimination diet, pending the subsequent evaluation planned at the tertiary center. If the patient is primarily observed by the FP, the FP should carefully assess the anamnestic and clinical features of the child and the possible chronological relationship between ingestion of the food and the occurrence of symptoms. Concomitantly, the FP should treat any symptom that may still be present. In case of a well-founded suspicion of a food/symptom cause-effect relationship, the FP should request a specialist assessment by the tertiary center for pediatric FAI according to the priority criteria. Alternatively, the FP may simply review the clinical documentation from the ED visit and, in this case, should refer the patient to the tertiary center. Particular attention should be paid to subjects with acute symptoms that involve multiple organs in rapid succession: this may be an anaphylactic event that must be treated immediately at the ED. The tasks of the professional figures involved in this pathway are listed below.
to raise the diagnostic suspicion of FAI, possibly based on the results of SPT and/or serum specific IgE; to prescribe symptom therapy (antihistamines and/or steroids) in case of ongoing symptoms; to provide initial indications on the elimination diet and the management of any symptom exacerbations; to refer the patient to the tertiary center for pediatric FAI
to raise the diagnostic suspicion of FAI; to prescribe symptom therapy (antihistamines and/or steroids and/or adrenaline) if required; to measure serum tryptase, in case of anaphylaxis, if symptom onset was less than 4 hours earlier; to provide initial indications on the diet and on the management of symptom exacerbations; to prescribe or equip the patient with self-injectable adrenaline if necessary; to refer the patient to the tertiary center for pediatric FAI
to frame the clinical case and coordinate the implementation of the necessary and appropriate diagnostic procedures: OFC and/or laboratory-instrumental analysis (SPT, prick by prick, APT, breath test, endoscopy with histology, pH-impedance analysis, imaging, non-invasive tests of intestinal function, etc.); to perform a conclusive diagnosis of FAI; to plan a follow-up program; to provide guidance on management, prognosis, therapy, and prevention of new events and, if necessary, to prescribe or equip the patient with self-injectable adrenaline; to provide a “nutritional counseling” for an adequate elimination diet
to perform a correct triage when the patient is observed at the ED; to monitor the patient carefully, looking for any symptoms of local or systemic reactions; to administer first aid drugs in case of adverse reaction; to record the procedure, the patient’s tolerance, and each finding during the test; to know how to perform allergy skin tests; to assist and collaborate with the physician in carrying out allergy tests (SPT, prick by prick, APT, OFC); to assist the operator in carrying out instrumental gastroenterological diagnostic tests (breath test, endoscopy, pH-impedance analysis)
to assess the nutritional status and dietary habits (e.g., 3- or 7-day food diary); to prescribe a nutritionally adequate diet, eliminating the offending food/s and proposing optimal alternatives to achieve full adherence to the elimination diet; to assess the patient with periodic follow-up
The DTCP is devoted to all healthcare professionals approaching pediatric subjects with suspected FAIs. The DTCP will facilitate the early recognition and management of FAIs in the pediatric age group. The appropriate application of this DTCP will reduce not only delays and diagnostic errors, but also health risks for children affected by FAIs, facilitating a rational approach to these conditions with a reduction of socioeconomic costs for families and the healthcare system.
Analysis of trends in the context of implant therapy in a university surgical specialty clinic: a 20-year retrospective study | 312a0517-c6af-4924-98c8-88e01292672f | 11666676 | Dentistry[mh] | Tooth replacement therapy with dental implants has evolved into a widely established treatment option in contemporary dental practice over the last few decades, providing reliable and satisfactory long-term outcomes . Concurrently, the demographics of the societies have transformed during this period, accompanied by regional variations. Industrialized nations are faced with a progressively aging population, and the substantial cohort of the baby boomer generation is entering advanced stages of life . Therefore, the proportion of patients with medical risk factors, functional limitations, dependency, and frailty is increasing . Simultaneously, teeth in these patients are more predictably maintained in a status compatible with health, and complete edentulism rates have considerably decreased . However, when teeth are lost, patients desire to restore their appearance, function, and quality of life to normal, expecting dental implant therapy to fulfill these needs, which has been shown in a study from Hong Kong with comparable demographics to Switzerland . This is also reflected by an analysis of population trends in the U.S., which observed a significant increase in dental implant prevalence from 0.7% in 1999–2000 to 5.7% in 2015–2016, with the most pronounced growth seen among individuals aged 65 to 74 years . Over the past years, dental medicine has continuously evolved with the goal of improving patient care and also integrating patient-centered outcome criteria as measures for evaluating successful treatment approaches . A progressive shift from conventional, and potentially extensive, clinical procedures to the use of less invasive approaches with lower morbidity, including the help of novel digital technologies, has emerged and consolidated in daily clinical practice. In the context of implant therapy, different approaches such as alveolar ridge preservation have demonstrated their efficacy in attenuating the physiologic bone remodeling that follows unassisted socket healing , significantly reducing the need for invasive ancillary bone augmentation procedures . Narrow-diameter and short dental implants have been found to provide similar or only slightly inferior survival and success rates compared to standard diameter/length implants . They are therefore considered a reliable alternative to minimize the need for augmentation procedures in sites presenting hard tissue deficiencies, while simultaneously lowering patient morbidity. Additionally, modifications in implant micro- and macro-characteristics (e.g., deep-threaded macro designs, micro-rough surfaces with superhydrophilicity) , advancements and incorporation of digital technologies such as 3-dimensional (3D) imaging, virtual treatment planning, and computer-assisted implant surgeries (CAIS) have expanded potential indications for implant therapy, have helped in reducing treatment times from implant insertion to delivery of the final restoration, and have also resulted in more in-depth understanding of the planned intervention for the surgeon. These advancements are typically implemented through standardized, research-based protocols in university settings, where dental implant therapy involves comprehensive treatment planning guided by experienced specialists. 
This approach addresses a diverse patient population, including surgically or medically complex cases referred by general practitioners, with treatment demands influenced by demographic and epidemiological trends . However, there is only scarce information on related trends in the context of implant therapy regarding type of surgical procedures or patient characteristics over time. Hence, the present study primarily aimed to analyze demographics of the implant patient pool at a surgical specialty clinic for the years 2020–2022 and compare the results to the intervals 2002–2004, 2008–2010, and 2014–2016 to identify potential changes . The secondary aims were to analyze and compare the indications, location of therapy, implant characteristics, and surgical techniques over these two decades. Finally, the null hypothesis was that there is no change in patient demographics (H01), indications (H02), location of therapy (H03), implant characteristics (H04), and surgical techniques (H05) over the two decades analyzed.
This retrospective study is within the continuum of a study series spanning the periods 2002–2004, 2008–2010, and 2014–2016 conducted in the Department of Oral Surgery and Stomatology at the University of Bern, Switzerland and followed the same methodology as reported in the preceding investigations . The present retrospective study assesses anonymized health-related data from patients who gave a general consent. It was independently reviewed by the ethics committee of the state of Bern, Switzerland, which determined that it does not fall under the scope of the Human Research Act. Consequently, no formal approval was deemed necessary (ID 2023 − 01522). The study design follows the Federal Policy for the Protection of Human Subjects and is in accordance with the STROBE guidelines (Strengthening the Reporting of Observational Studies in Epidemiology) . The methods are described below in terms of patient selection, clinical procedures, descriptive analysis, and statistical analysis. This study included all records from patients who received dental implants in the Department of Oral Surgery and Stomatology, University of Bern, Switzerland, from January 2020 to December 2022. The inclusion and exclusion criteria for implant treatment were described in previous publications . In brief, the inclusion criteria consisted of partially and fully edentulous patients receiving dental implants with adequate bone dimensions as per implant specifications. This could include sites requiring simultaneous or staged horizontal and/or vertical bone augmentation. Exclusion criteria encompassed patients with compromised general health and local conditions contraindicating surgical intervention, such as inadequate oral hygiene, uncontrolled periodontal diseases or diabetes, immunodeficiency, high-dose anti-resorptive therapy, pregnancy, or those aged ≤ 18 years. Implants used for skeletal anchorage in orthodontic treatments, such as palatal implants or temporary anchorage devices, were not evaluated in this study.
All implant placements were performed under local anesthesia. Antibiotic prophylaxis was prescribed two hours before surgery, based on the patient’s needs. The surgical procedures were conducted by 22 surgeons, consisting of eight senior surgeons and 14 residents specializing in oral surgery. Oversight by experienced senior surgeons ensured the quality of procedures performed by residents. Comprehensive information regarding presurgical assessments, surgical techniques, and postoperative care has been reported in previous studies . Postoperatively, patients were prescribed oral analgesics and an antiseptic mouth rinse, unless contraindicated for medical reasons.
Over four months (August-December 2023), three examiners (C.R, E.C.Q, and J.T) gathered data from the patient’s records. The primary outcome variable investigated was the age of the implant patient pool for the years 2020–2022 and the comparison with the ones reported for the intervals from 2002 to 2004, 2008–2010, and 2014–2016. The secondary outcome variables assessed include the following parameters: Indication for implant therapy, classified into a single-tooth gap, extended edentulous gap, distal extension, or fully edentulous jaw; Location of implant therapy, grouped into four regions: anterior maxilla (maxillary canine to maxillary canine), posterior maxilla (premolars and molars in the maxilla), anterior mandible (mandibular canine to mandibular canine), and posterior mandible (premolars and molars in the mandible); Implant characteristics, including length (in mm), diameter (in mm), design (bone-level or soft-tissue-level), and brand (e.g., Straumann, Thommen, Zeramex, Nobel Biocare); Surgical techniques, grouped into (1) standard implant placement (open-flap or flapless implant placement without additional bone augmentation procedures), (2) implant placement with horizontal bone augmentation (HBA, including simultaneous bone augmentation following the principles of Guided Bone Regeneration (GBR) or staged bone augmentation using GBR or autogenous bone block graft), (3) implant placement with sinus floor elevation (SFE) (either simultaneously or staged via a lateral or transcrestal approaches). Additionally, alveolar ridge preservation therapy after tooth extraction and the use of CAIS was also recorded. Postsurgical complications were grouped into hematoma, flap dehiscence, local signs of infection, prolonged postoperative bleeding, and temporary and permanent neurosensory disturbance. Loss of implants was recorded for early implant failures. In line with previous investigations, early implant failures were defined as implants lost during the initial healing period .
All statistical analyses were performed with software R, version 4.10 . The abovementioned variables were grouped as follows: Age: ≤40y, 41-60y, 61-80y, > 80y; Indications: single tooth gap, distal extension situation, extended edentulous gap, fully edentulous jaw; Location: anterior maxilla, posterior maxilla, anterior mandible, posterior mandible; Implant diameter: ≤3.5 mm, 3.5–4.5 mm, and > 4.5 mm; Implant length: ≤6 mm, > 6–8 mm, > 8–10 mm, and > 10 mm; Implant design: bone-level (BL), soft-tissue-level (STL); Surgical technique: standard implant placement, implant placement with HBA, implant placement with SFE, associated application of ARP therapy or CAIS strategies. The data were summarized by bar plots, by group and over time. Logistic regression models, involving the calculation of odds ratios (OR), were used to test for trends over time (linear in the parameter). The model’s goodness-of-fit was assessed with the help of the Hosmer-Lemeshow test. If models lacked fit, quasibinomial models were used instead. Throughout, p-values less than 0.05 were considered statistically significant (SS). P-values were corrected variable-wise using the Holm method.
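The trend analysis described above can be illustrated with a short sketch. The original analyses were carried out in R; the following Python example is only an illustrative reconstruction using made-up period counts, showing how a linear-in-time logistic regression yields an odds ratio per period step and how p-values can be Holm-corrected across the categories of one variable. The Hosmer-Lemeshow check and the quasibinomial fallback used by the authors are not reproduced here.

```python
# Illustrative sketch (not the authors' R code): testing a linear trend over
# study periods with logistic regression on aggregated counts.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

# Hypothetical counts of implants in one category (e.g., diameter <= 3.5 mm)
# out of all implants placed, for the four periods coded 0..3.
period = np.array([0, 1, 2, 3])
in_category = np.array([80, 120, 190, 410])    # made-up numbers
total = np.array([850, 900, 1000, 1555])       # made-up numbers

X = sm.add_constant(period)                    # intercept + linear period term
y = np.column_stack([in_category, total - in_category])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

or_per_period = np.exp(fit.params[1])          # odds ratio for a one-period step
p_value = fit.pvalues[1]
print(f"OR per period: {or_per_period:.2f}, p = {p_value:.4g}")

# Holm correction across the p-values of the categories of one variable.
p_values_of_variable = [p_value, 0.03, 0.20]   # hypothetical set
rejected, p_adjusted, _, _ = multipletests(p_values_of_variable, method="holm")
print(p_adjusted)
```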
The results are reported below in terms of patient demographics, indications and location of implant therapy, implant characteristics, surgical techniques, and complications and early failures. The most prevalent postoperative complication was hematoma, which occurred in 11.6% of the patients (n = 111 patients, 163 implants). Flap dehiscence manifested in 4.4% of cases (42 patients, 55 implants). Local signs of infection were identified in 1.6% of cases (15 patients, 18 implants), and were effectively managed by local antiseptic measures using topical application of 3% hydrogen peroxide and 0.2% chlorhexidine. Prolonged postoperative bleedings were registered in 1.4% (13 patients, 24 implants), all of which were successfully addressed. Transient dysesthesia was documented in 0.4% (4 patients, 4 implants). Notably, one patient experienced permanent hypesthesia affecting the mental nerve after harvesting autogenous bone at an interforaminal mandibular donor site. Nevertheless, the patient reported no impairment due to the hypesthesia. Finally, an early implant failure rate of 0.5% was observed, affecting eight implants across seven patients. Detailed information regarding the lost implants is displayed in Table .
The period from 2020 to 2022 included a total of 1555 implants in 1021 patients. The mean patient age at implant placement was 59.9 ± 15.1 years (median 63 years) with a gender distribution of 50.7% female ( n = 518) and 49.3% male ( n = 503) (Table ; Fig. ). Notably, 60 patients (31 female, 29 male) with a mean age of 65.8 ± 13.6 years, including 137 implants, were enrolled in randomized controlled trials (RCT). Information on the patient cohort excluding the RCT patients is displayed in Supplementary Tables and Supplementary Fig. . When comparing the patient demographics for the four periods 2002–2004, 2008–2010, 2014–2016, and 2020–2022, a SS trend for a decrease in the age groups ≤ 40 years (23.8% vs. 13.0%, OR 0.96, p < 0.0001) and 41–60 years (46.4% vs. 30.9%, OR 0.96, p < 0.0001), whilst an increase in the age groups 61–80 years (28.8% vs. 51.9%, OR 1.06, p < 0.0001), and > 80 years (1% vs. 4.2%, OR 1.08, p < 0.0001) was found.
The 1021 patients presented 1105 indications for implant therapy, acknowledging instances where individual patients presented with multiple indications for dental implant placement. Single tooth gaps (48.9%, n = 540) were the most frequent indication, followed by distal extension (22.9%, n = 253), extended edentulous gaps (17.6%, n = 195), and fully edentulous jaws (10.6%, n = 117) (Table ; Fig. ). A larger number of implants was placed in the maxilla (58%, n = 903) compared to the mandible (42%, n = 652) (Table ; Fig. ). Notably, 59 indications were allocated to the RCT, including 17 posterior single tooth gaps, one posterior distal extension situation, one posterior extended edentulous gap, and 40 fully edentulous jaws in the mandible. Corresponding study implants were located at the region of the lower lateral incisor (n = 38), canine (n = 44), first premolar (n = 36), and first molar (n = 19) . Information on the patient cohort excluding the RCT patients is displayed in Supplementary Table , Supplementary Fig. , Supplementary Table , and Supplementary Fig. . When comparing the indications for the four periods, an SS trend for a decrease in single-tooth gaps (56.1% vs. 48.9%, OR 0.98, p = 0.0005) and an increase for edentulous jaws (5.6% vs. 10.6%, OR 1.04, p < 0.0001) was found. Regarding the implant location, a SS trend for fewer implants being placed in the anterior maxilla (27.5% vs. 21.5%, OR 0.99, p = 0.0006) and posterior mandibula (32% vs. 28.3%, OR 0.99, p = 0.01) was found, whilst an increase in the anterior mandible (8.7% vs. 13.7%, OR 1.03, p < 0.0001) and the posterior maxilla (31.8% vs. 36.5%, OR 1.01, p = 0.005) was registered.
Most of the implants were manufactured by Institut Straumann AG (92.1%, n = 1432), followed by Thommen Medical AG (7.5%, n = 117), 3 M (0.3%, n = 4), Zeramex Dentalpoint AG (0%, n = 1), and Nobel Biocare AG (0%, n = 1). Soft-tissue-level implants were used more frequently (87.2%, n = 1356) compared to bone-level implants (12.8%, n = 199). The most common implant diameter was > 3.5–4.5 mm (45.1%, n = 705), followed by > 4.5 mm (28.2%, n = 439), > 2.5–3.5 mm (21.4%, n = 333) and ≤ 2.5 mm (5.2%, n = 81). Implant lengths included > 8–10 mm (64.1%, n = 997), followed by > 10 mm (23.5%, n = 365), > 6–8 mm (11.2%, n = 174), and ≤ 6 mm (1.2%, n = 19) (Table ; Fig. ). When comparing the implant diameter for the four periods, a SS trend for an increase in implant diameters ≤ 3.5 mm (9.4% vs. 26.6%, OR 1.08, p < 0.0001) was found, whilst a decrease for > 3.5–4.5 mm (55.2% vs. 45.1%, OR 0.97, p < 0.0001) and > 4.5 mm (35.4% vs. 28.2%, OR 0.98, p < 0.0001) was observed. Regarding implant lengths, a SS trend for an increase in implant lengths > 6–8 mm (8.5% vs. 11.2%, OR 1.02, p = 0.0002) and > 8–10 mm (44.5% vs. 64.1%, OR 1.05, p < 0.0001) was found, whilst a decrease for implant lengths > 10 mm (45.7% vs. 23.5%, OR 0.93, p < 0.0001) was observed. When comparing the implant design for the periods 2008–2010 and 2020–2022, a SS trend for less use of bone-level (27.9% vs. 12.8%, OR 0.92 p < 0.0001) compared to tissue-level implants (72.1% vs. 87.2%, OR 1.08, p < 0.0001) was registered.
Standard implant placement protocols without additional bone augmentation were applied in 46.4% ( n = 722) of the cases, while 53.6% ( n = 833) needed an additional bone augmentation procedure. The most frequent bone augmentation procedure was implant placement in conjunction with simultaneous HBA (31.5%, n = 490), followed by simultaneous SFE (14.1%, n = 220), staged SFE (5.3%, n = 82), and staged HBA procedures (2.6%, n = 41). Regarding SFE, the lateral approach was used in 278 (92%) cases compared to the transcrestal osteotome technique in 24 (8%) cases (Table ; Fig. ). Implant placement after alveolar ridge preservation was observed in 4.4% of implants ( n = 68) and the application of guided implant placement using sCAIS was observed in 19.5% ( n = 304) of the implant cases. Regarding implant location, horizontal bone augmentation was necessary in 60.0% of cases in anterior sites (329 of 548 implants) compared to 23.8% of cases in posterior sites (240 of 1007 implants). Flapless implant placement was limited to carefully selected cases (0.8%, n = 13). When comparing the surgical techniques for the periods 2002–2004 and 2020–2022, a statistically significant trend for a decrease in HBA procedures (39.74% vs. 34.1%, OR 0.99, p = 0.02) and an increase in SFE (11.9% vs. 19.4%, OR 1.03, p < 0.0001) was found.
The present study primarily aimed to assess the demographics of a patient pool for dental implant placement for the period 2020–2022 at a surgical specialty clinic and compare the results to previous investigations following the same methodology over 20 years in the same institution. The secondary aims were to analyze changes in indications and locations of therapy, implant characteristics, surgical techniques, complications, and early implant failures for the same periods. For both primary and secondary aims, several trends were found. Therefore, H01, H02, H03, H04, and H05 were rejected. Patient demographics showed a trend for an increasing patient age: 55.2 years (2002–2004), 53.6 years (2008–2010), 57.2 years (2014–2016), and 59.9 years (2020–2022). Notably, this upward trajectory is primarily attributed to the age cohorts exceeding 60 years, with the most prominent growth in the cohort between 71 and 80 years. This is in line with the results from a study analyzing 7 National Health and Nutrition Examination Surveys conducted in the United States from 1999 to 2016, demonstrating the largest absolute increase of implant prevalence found in the age-group 65–74 years . These age cohorts reflect the so-called baby boom period, which took place earlier in Switzerland than in the rest of Europe, reflecting a demographic development 10 years ahead of the rest of Europe . With the advances in medicine, life expectancies continue to grow and are accompanied by a higher quality of life. According to the World Health Statistics 2023 from the World Health Organization, global life expectancy increased from 67 years to 73 years, from 2000 to 2019. This trend is expected to continue in the near future , signaling an increasing need for gerodontologically oriented treatment strategies in older patients. These strategies may prioritize surgical interventions with reduced invasiveness, such as alveolar ridge preservation, utilization of narrow-diameter implants, or CAIS. However, it is crucial to acknowledge that the period 2020–2022 corresponds to the era of the COVID-19 pandemic, marked by governmental restrictions, lockdowns, and considerable discomfort experienced by the population . This is particularly pertinent to patients with increased vulnerability due to systemic factors and/or advanced age. Therefore, older age groups may be underrepresented in the present cohort when compared to the results of the periods 2002–2004, 2008–2010, and 2014–2016. A study by Feher and colleagues evaluated the patient selection and surgical procedures undertaken between March and December 2020 and compared them to pre-pandemic measures in a specialized implant clinic. Notably, they did not find an effect on patient selection and only a slight effect on surgical procedures . Analysis of implant indications showed single edentulous sites as the most prominent, encompassing nearly half of all implant procedures. Nonetheless, the indications for implant placement exhibited stability across the four investigated periods when considering the patient pool exclusive of RCT participants. This observation appears to contrast with recent analyses predicting a decrease in complete edentulism in the future . A plausible explanation may lie in heightened patient expectations regarding oral health-related quality of life and an increasing acceptance of dental implant therapy over the past few decades. 
In this context, implant overdentures were found to significantly enhance the quality of life compared to conventional dentures Additionally, it is conceivable that patients screened for the RCTs, who did not meet the inclusion criteria but opted for alternative forms of dental implant therapy, may have contributed to these findings. Interestingly, the location for implant placement remained stable, except for the upper first molar, which exhibited a consistent increase over the investigated periods . This trend may be ascribed to the increased prevalence of SFE established over time or the alternative utilization of short dental implants. Over recent decades, progress in implant design, materials, and surface properties has paved the way for the development of narrow-diameter implants (i.e., ≤ 3.5 mm) and short dental implants (i.e., ≤6 mm) . These innovations aim to facilitate implant placement in scenarios characterized by limited conditions such as reduced mesiodistal spaces and horizontal or vertical bone deficiencies. This trend is reflected in the present analysis, where implants with a diameter ≤ 3.5 mm demonstrate an increase in placement. Nevertheless, implants with a diameter ranging from > 3.5 to 4.5 mm, closely followed by those with a diameter exceeding 4.5 mm, still exhibit a higher prevalence nowadays. In terms of implant length, a notable decline in implants exceeding 10 mm is observed. Interestingly, implants with lengths ≤ 6 mm were exclusively used in situations of limited vertical bone availability in the posterior maxilla and mandible. However, its use was merely to avoid complex and invasive vertical bone augmentations . Remarkably, an upward trajectory is noted in the utilization of soft-tissue-level design implants in comparison to bone-level implants, with the latter being the preferred option mostly for single-tooth anterior esthetic area. One possible explanation may be attributed to the increasing, yet limited evidence for a smaller susceptibility for peri-implant diseases in tissue-level implants compared to bone-level implants . This might be an effect of the coronal relocation of the prosthetic interface, optimizing the peri-implant soft tissue adhesion, and the easier restoration due to a partly predefined emergence profile . In bone-level implants, the emergence profile has been associated with a significant correlation between the contour of emergence and peri-implantitis, a relationship not identified in soft-tissue-level implants . In the present investigation, the predominant surgical technique for implant placement was the standard approach involving no additional bone augmentation, with only a few instances employing a flapless approach. However, approximately one-third of implants underwent additional horizontal bone augmentation procedures, indicating a slight decline over time. This reduction might be due to the increased adoption of alveolar ridge preservation techniques during tooth extraction, which was not applied on a routine basis in the previous intervals and is reported for the first time in the present study. These therapies have shown efficacy in minimizing post-extraction dimensional changes . Interestingly, only a minor portion of sites presented pronounced horizontal bone deficiencies requiring a staged approach, which may result from careful consideration of the timing of implant placement after tooth extraction . 
Over recent decades, there has been a trend for the simultaneous application of SFE, which was most frequently carried out using the lateral window approach. Advances in implant design like deep-threaded and/or tapered implant designs have contributed to reducing the amount of staged SFE by optimizing primary implant stability. Additionally, CAIS was found established on a broad basis in all the above-mentioned procedures, facilitating improved implant placement accuracy in demanding situations . The overall data represent the shift towards less invasive surgical techniques prioritizing reduced patient morbidity. Positive side effects of this shift include a reduction in treatment time, costs, and risks for possible complications . Nevertheless, the number of early implant failures was constantly low (0.5–0.7%). This is considerably lower compared to the 1.99% as reported in a retrospective study . Nonetheless, the present retrospective study has several limitations. First, the focus on a patient cohort from a single specialized university clinic, largely referred by general dentists, questions the external validity of the findings. The results may not be generalizable to other populations, as patients treated in different clinical settings may have distinct characteristics or access to varying treatment options. Second, regional variations in implant dentistry philosophies may result in other treatment approaches preferred for similar clinical situations, as surgical and reconstructive techniques can vary considerably depending on geographic location and preferred institutional protocols. These differences could influence outcomes and, consequently, limit the external validity of our findings. Third, the variability among the surgeons performing the implant surgeries within and across the study periods introduces potential bias related to personal therapeutic preferences and experience levels. Different surgeons may have different techniques, decision-making processes, and thresholds for intervention, which could influence the results of the study. Fourth, the retrospective design of this study precludes the establishment of causal relationships between confounding factors and determination of the reasons underlying clinical decision-making processes (i.e., reasons for bone augmentation procedures at the time of implant placement) in the included subjects. Further research focusing on the medical aspects of this aging patient population, along with its potential risks and implications for daily clinical practice, is warranted.
Based on the limitations of the present study, it can be concluded that the mean age of patients undergoing dental implant therapy has increased over the two decades under investigation. Furthermore, there is a growing trend toward less invasive surgical techniques, narrower implant diameters (≤ 3.5 mm), and shorter implant lengths (> 6–10 mm), leading to a reduction in the need for bone augmentation procedures before or during implant placement. Additionally, an increasing number of clinical procedures are incorporating computer-assisted technologies.
Below is the link to the electronic supplementary material. ESM1 (DOCX 507 KB)
Bioprospecting for plant resilience to climate change: mycorrhizal symbionts of European and American beachgrass ( | 14828c1e-2da3-4a5a-905d-646a4b250f37 | 11166759 | Microbiology[mh] | Climate change and global warming have contributed to increase terrestrial drought, causing serious negative impacts on agricultural production and posing severe threats to worldwide food security (Dai ). Drought is estimated to negatively affect plant growth for more than 50% of arable land by 2050, thus representing an economically and ecologically disruptive event, greatly affecting human life and the world’s food security (Naumann et al. ). Drought stress may be addressed by using novel agronomic practices able to enhance use efficiency of soil natural resources and water and to increase plant antioxidant defence systems. Several studies have been focused on the isolation, selection, and application of beneficial soil microorganisms, such as arbuscular mycorrhizal fungi (AMF), able to enhance drought tolerance in food crops, including cereals, fruit trees, and vegetables, by means of diverse mechanisms beyond improved nutritional status. AMF inoculation may produce changes in root system architecture and functioning and enhance soil water retention in dry sands, thus playing a key role in the performance of plants growing in drought conditions (Augé ; Wu et al. ; Jayne and Quigley ; Pauwels et al. ). Despite the growing recognition of the AMF’s role in plant use efficiency of water and soil resources, the exploitation of specific fungal traits functional to the improvement of plant resilience to climate change remains a significant challenge. Such traits relate to AMF’s ability to germinate, grow, and develop large hyphal networks expressing water and nutrient transporter genes at high temperatures and under drought conditions, as well as to induce the production of antioxidant compounds able to protect plants against oxidative damage caused by abiotic stresses (Rouphael et al. ). Another important trait concerns the production of mucilage and exopolysaccharides by AMF and their associated bacteria, compounds which can increase soil aggregation and water retention, thereby helping plants to face drought (Miller and Jastrow ; Rillig and Mummey ; Püschel et al. ; Kakouridis et al. ; Pauwels et al. ). Moreover, AMF genetic organization represents a further fungal trait possibly affecting mycorrhizal responses, as reported for four potato cultivars colonized by homokaryotic strains, that showed greater host biomass and tuber production, compared with plants inoculated with dikaryotic strains (Terry et al. ). It is conceivable that AMF strains showing the described features may have developed when growing in association with xerophytic plants in maritime sand dunes, a drought-stressed, low-fertility environment for plant growth and development, mainly because of dune instability, low water retention, seasonal extreme temperatures, irradiance, salinity and drought, high evaporation rates, and low concentrations of nutrients and organic matter (Koske and Polson ; Maun ). Indeed, the survival, establishment, and growth of plants in such unfavourable ecosystems are promoted by AMF, representing an effective means of dune revegetation and restoration, in particular under stress conditions (Sylvia ; Gemma and Koske ; Corkidi and Rincón ; Tadych and Blaszkowski ; Gemma et al. ; Koske et al. ; Camprubi et al. , ). 
Such beneficial effects have been ascribed to extensive extraradical mycelium that functions as an auxiliary absorbing system, with its fine hyphae efficiently exploring the soil and providing host plants with water and mineral nutrients, in particular phosphorus, the most important growth-limiting nutrient in such harsh environments (Read ; Giovannetti ; Kakouridis et al. ; Battini et al. ). For example, in maritime sand dunes, AMF mycelium may reach a dry weight of 34 µg per g of sand, representing up to 30% of sand dune microbial biomass (Olsson and Wilhelmsson ). Several studies have been carried out on European beachgrass ( Ammophila arenaria Link), native to maritime sand dunes of Europe and the Mediterranean basin, and American beachgrass ( Ammophila breviligulata Fern.), found in North American dunes. These two plant species long have been known as dune-building grasses and recently have been valued for their ecosystem engineering traits (Reijers et al. ; Lammers et al. ). They are highly mycotrophic species, as reported by many authors worldwide. In North American Atlantic coastal dunes, from Virginia to Maine, A. breviligulata represented the dominant sand-colonizing plant species, showing high levels of mycorrhizal colonization, from 20 to 80% of root length (Koske and Polson ), while when planted in barren sites in Cape Cod in Massachusetts (USA), 78% of plants were mycorrhizal (Gemma and Koske ). In European coastal sand dunes, the percentage of colonized root length of A. arenaria ranged from 7–33% in Tuscany (Italy) to 26–80% in the Netherlands, 0–30% in the Northeast of Spain, and 5–70% in six different sites among England, Wales, the Netherlands, Belgium, and Portugal (Giovannetti and Nicolson ; Giovannetti ; Kowalchuck et al. ; Rodríguez-Echeverría et al. ; Camprubí et al. ). When A. arenaria plants were experimentally inoculated with native AMF originating from their rhizosphere, mycorrhizal colonization was 55% (de la Peña et al. ). Given the critical role of the mycorrhizal symbiosis for the survival, nutrition, and growth of these two species of beachgrass living in drought-stressed and low-fertility maritime sand dunes, knowledge of the composition of AMF communities colonizing their roots and rhizospheres is of primary importance because some taxa especially could have developed symbiotic traits aiding the host plants to tolerate such harsh environmental conditions. Accordingly, native AMF from A. arenaria and A. breviligulata living in maritime sand dunes might be isolated as potential candidates for inocula promoting crop growth and resilience under climate change. In order to pursue such an objective, in this review we report qualitative and quantitative data on (i) the occurrence of AMF in A. arenaria and A. breviligulata rhizospheres, as detected by morphological studies and trap culture isolation, in some cases followed by molecular identification; (ii) the richness of AMF communities colonizing beachgrass roots and rhizospheres, as detected by molecular methods, such as PCR cloning and sequencing (PCR-CS), PCR-denaturating gradient gel electrophoresis (PCR-DGGE), and Illumina high-throughput metagenomic sequencing; and on (iii) sand dunes native AMF plant-growth promoting properties.
Early evidence of the occurrence of mycorrhizal symbiosis in A. arenaria was reported in coastal sand dunes of Scotland and England, UK, with the description of the internal and extraradical mycelium of root endophytes, attributed to Endogone , a genus which no longer belongs to the Glomeromycota (Nicolson , , ). Subsequent works confirmed the mycorrhizal status of A. arenaria and identified Rhizoglomus fasciculatum as its only fungal root symbiont in Scottish sand dunes (Nicolson and Johnston ) (Table ; Fig. ). Here the generic name Rhizoglomus is used, as synonymous with Rhizophagus (Sieverding et al. ; Walker et al. ), and the original names of AMF taxa have been updated following the sites: https://glomeromycota.wixsite.com/lbmicorrizas and http://www.amf-phylogeny.com/ . Subsequent studies after 1979 found many AMF species from A. arenaria rhizospheres, regardless of geographic location. For example, 25 different AMF species were described from maritime sand dunes adjacent to the Mediterranean Sea in Israel (Błaszkowski and Czerniawska ), and 21 and 26 from dunes adjacent to the Baltic Sea in Słowiński National Park (Poland) and in Bornholm Island (Denmark), respectively (Tadych and Błaszkowski ; Błaszkowski and Czerniawska ) (Table ; Fig. ). It is interesting to note that several new species from the rhizosphere of A. arenaria growing in maritime sand dunes were isolated and/or molecularly described: Complexispora multistratosa , Complexispora mediterranea , Dominikia achra , Diversispora peridiata , Diversispora slowinskiensis , Diversispora valentine , Entrophospora drummondii , Glomus tetrastratosum , Microkamienskia perpusilla , Rhizoglomus irregulare , Septoglomus jasnowskae (Błaszkowski et al. , , , , , , , ; Guillén et al. , ) (Table ; Fig. ). On the other side of the Atlantic Ocean, only AMF species associated with A. breviligulata were investigated, while no studies were performed on AMF species of A. arenaria on the west coast of the USA, where it is considered an invasive species. The data on A. breviligulata come from only three places in North Atlantic sand dunes, with high numbers of AMF species recovered, 17 and 29 from Cape Cod, MA and the Magdalen Islands archipelago, Québec, respectively (Koske and Gemma ; Dalpé et al. ). It is interesting to note that in Rhode Island and Cape Cod sand dunes, A. breviligulata was predominantly associated with Acaulospora scrobiculata and Gigaspora gigantea (Koske ; Koske and Halvorson ), two species that were not found in A. arenaria rhizospheres (Fig. ). Only one new AMF species associated with A. breviligulata was described, Quatunica erythropa (Koske and Walker ), probably because of the few studies carried out on this species (10 studies), compared with those on A. arenaria (26), and because of the absence of studies in the years from 1997 to 2016 (Table ).
Richness of AMF communities colonizing A. arenaria and A. breviligulata roots, as detected by molecular methods
The first thorough molecular study of an AMF community associated with A. arenaria was performed by Kowalchuk et al. in Dutch coastal sand dunes, utilizing the polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) targeting the 18S rRNA gene (using the AM1 and NS31-GC clamp primer pair). Sequence analysis of excised DGGE bands allowed the detection of at least seven different species, belonging to the genera Glomus and Scutellospora , i.e. F. coronatum, F. fragilistratus , Diversispora spurca , Glomus sp., Racocetra castanea , Dentiscutata cerradensis , and C. pellucida . Although PCR-DGGE is an excellent molecular method able to distinguish even minor levels of sequence variation, AMF species occurring in low abundance were overlooked, such as Acaulospora and Glomus spp. of which a few spores were isolated from sand, while their sequences were not recovered from DGGE bands (Table ). This could be ascribed to the method, which allows the detection of populations representing > 1–2% of the total, and to the primer pair used, unable to provide total coverage of the AMF clade (Redecker et al. ). The richness and composition of AMF communities associated with A. arenaria in Belgian coastal sand dunes, reproduced in trap cultures and investigated by PCR cloning and sequencing using the primers NS31/AM1, produced 31 sequences of the genus Glomus , while no Scutellospora sequences were recovered, although detected in the rhizosphere by morphological observations (de la Peña et al. ). This confirms the difficulty of covering the entire AMF clade by the primers used. The same primers, utilized for cloning and sequencing a fragment of the SSU rDNA extracted and amplified from the roots of A. arenaria in Portuguese sand dunes, allowed the detection of AMF sequences belonging to the genus Glomus , some of which clustered with Rhizoglomus intraradices , R. fasciculatum , and Septoglomus constrictum (Rodríguez-Echeverría and Freitas ). It is interesting to note that the sequencing of spores obtained from trap cultures showed the presence of Racocetra persica , whose sequences were not retrieved from A. arenaria roots, confirming previous findings and suggesting that the low level of root colonization by R. persica could have led to a preferential amplification of the more abundant sequences of Glomus spp. during PCR (Rodríguez-Echeverría and Freitas ) (Table ). The regular occurrence of Racocetra fulgida and Racocetra persica in A. arenaria rhizospheres, as assessed by morphological methods, was confirmed utilizing both NS31/AM1 and ITS1F/ITS4 primers in an in situ collection of coastal sand dunes AMF within a UNESCO Biosphere Reserve in Tuscany, Italy (Turrini et al. ). Two recent studies on sand dunes systems located at Curracloe, Wexford, Ireland, showed that the AMF root communities of A. arenaria comprised Gigasporaceae, Entrophospora , Paraglomus, and Glomus , while AMF spores collected from the rhizosphere showed a greater richness, although a taxonomic assignment to the species level was not provided (Lastovetsky et al. , ) (Table ). The only molecular work investigating AMF diversity in the rhizosphere of A. breviligulata reported the occurrence of G. gigantea , G. albida, G. rosea , Racocetra spp., Scutellospora spp., Cetraspora sp., Acaulospora spp., Dentiscutata sp., and Corymbiglomus sp. 
in plants growing in North Atlantic maritime sand dunes at Cape Cod National Seashore, MA, USA (Lastovetsky et al. ). It is interesting to note that different AMF species were associated with A. arenaria and A. breviligulata : only 20 species in common have been reported, while 20 were recovered only from A. breviligulata and 33 only from A. arenaria (Fig. ). The most surprising finding is represented by the consistent occurrence of taxa of the genus Gigaspora from A. breviligulata , i.e. G. albida , G. gigantea , G. margarita, and G. rosea , taxa that were never recovered from A. arenaria , although two of them, G. gigantea and G. margarita , occurred in sand dunes of the Paleartic biogeographical realm (Sturmer et al. ). It is possible that such species either did not occur in the investigated sites or showed a selective association with plants other than A. arenaria .
The hypothesis that AMF isolated from maritime sand dunes might have developed symbiotic traits functional to plant survival and growth in such a harsh environment stimulated a few studies, aimed at assessing the performance of native AMF isolated from the rhizosphere of sand dune plants. As early as 1979, Nicolson and Johnston carried out the first plant growth experiment in pots, using unsterile dune sands, utilizing R. fasciculatum from Scottish maritime sand dunes, as compared with F. geosporum from agricultural soils, inoculated on A. arenaria and Zea mays . Both AMF significantly improved A. arenaria growth, compared with control plants: in young A. arenaria plants dry weight increased by 18%, while maize plants dry weights were not significantly different when inoculated with R. fasciculatum or F. geosporum alone, showing significant increases - up to 120% - only when the two AMF were inoculated together (Nicolson and Johnston ). Although the authors concluded “that plants grow in such adverse conditions as sand dunes because they are mycorrhizal”, control plants were able to grow, even if poorly. In a revegetation and restoration program of a dune system where naturally occurring plants previously had been destroyed by grazing livestock and human use, in Cape Cod National Seashore, MA, USA, the inoculation of A. breviligulata with the dominant native AMF G. margarita , R. clarum and S. calospora , produced significant increases in culms (+14%) and inflorescences (+67%), compared with control plants (Gemma and Koske ). The mixture of two AMF, Septoglomus deserticola and Glomus macrocarpum , isolated from sand dunes along the north Atlantic coast of Florida increased shoot dry weight, root length, and plant height by 219, 81, and 64%, respectively, compared with control plants, in a replenishment study with the sand dune grass Uniola paniculata (sea oats) at Miami Beach, Florida (Sylvia ). The higher performance of sand dunes native AMF, as compared with two different commercial isolates, was reported in a study performed in Iceland on another sand dune grass, Leymus arenarius ; interestingly, the foliage and root dry mass were the highest when inoculated with native AMF even compared with added phosphorus treatments (Greipsson and El-Mayas ). Similarly, native AMF isolated from Het Zwin, Knokke-Heist natural reserve, in Belgium, significantly increased biomass (60%), number of tillers (45%), and leaves (26%) in young A. arenaria plants while reducing root infection and multiplication of the root-feeding nematode Pratylenchus penetrans (de la Peña et al. ). A microcosm experiment carried out at the University of Rhode Island, USA, showed that native AMF inoculum significantly increased the survival ability of A. breviligulata in dune sand under drought stress, as 78% of mycorrhizal plants survived, against 20% of non-inoculated ones (Koske and Polson ). As to saline stress, AMF isolates from coastal sand dunes did not enhance lettuce leaf biomass, compared with isolates originating from desert or field soil (Tigka and Ipsilantis ), but three out of six AMF assemblages from Greek coastal sand dunes under high salinity levels helped olive tree cuttings to tolerate the stress (Kavroulakis et al. ). An interesting work was carried out on 15 sand dune plant species that comprised 12 grasses and three shrubs (Camprubi et al. ). The authors assessed the growth effects of a consortium of native AMF isolated from six sand dune plants including A. 
arenaria in the Mediterranean northeast coast of Spain as compared with no inoculation and with R. intraradices BEG72, a strain isolated from an agricultural soil in Spain that had been shown to be highly efficient in a wide range of experimental studies. They reported that seven plant species showed higher dry weights when inoculated with either BEG72 or the native AMF versus the controls, while five of those plant species increased more when inoculated with the native AMF than with BEG72. For example, Otanthus maritimus and Elymus farctus growth was significantly improved by native AMF but not by BEG72, although the reverse was also found, as only BEG72 boosted Ononis natrix and Armeria maritima growth. Moreover, the study revealed a high mycorrhizal dependency of five of the plant species investigated, suggesting that, overall, the limited growth of control plants could be ascribed to the lack of adequate AMF inoculum (Camprubi et al. ). As the consortium of native AMF was composed of diverse AMF species, it is conceivable that they reflected a high functional diversity, consistent with previous findings (Munkvold et al. ; Mensah et al. ; Turrini et al. ). Overall, the few works performed to test the symbiotic performance of AMF isolated from sand dunes did not investigate the mechanistic aspects of AMF activity, which could uncover the potentially beneficial fungal traits relevant to drought tolerance. Hence, the question as to whether AMF isolated from the rhizospheres of maritime sand dune plants may promote plant growth and resilience, protecting agricultural plants from drought, represents a demanding challenge to be pursued by further extensive and in-depth studies.
This review shows that A. arenaria and A. breviligulata growing in the harsh environment of maritime sand dunes, subject to the selective pressure of seasonal extreme temperatures and drought, host rich and large AMF communities in their roots and rhizospheres. Qualitative and quantitative data are provided on the occurrence of the diverse AMF species in different sites in Europe, the Mediterranean basin, the USA, and Canada. The dominant species belong to Gigasporales and Glomerales, consistent with previous data on their prevalence in maritime sand dunes worldwide (Stürmer et al. ), but members of Acaulosporaceae and Diversisporaceae also are present. Among the 33 AMF species found with A. arenaria , the most frequently recovered are those belonging to the group Rhizoglomus fasciculatum/intraradices/irregulare , occurring in the sand dunes of several countries across Europe and the Mediterranean basin, i.e. Belgium, Denmark, Italy, Poland, Portugal, Spain, and the UK, together with Racocetra persica occurring in Italy, Spain, and Portugal, and Funneliformis coronatum found in the Netherlands and Israel. Among the 20 AMF species recovered from A. breviligulata , Acaulospora scrobiculata , and Gigaspoa gigantea are the most frequent, while R. persica was the prevalent species among those common to the two plant species. Such fungus species could be further studied to assess possible specific traits allowing their host plants to withstand environmental stresses and thrive in hostile ecosystems. As there has been very little testing of AMF isolates from maritime sand dunes with crop plants under drought-stressed agronomic conditions, further investigations should be carried out in microcosms, macrocosms and in the field, under different levels of drought stress, in order to assess the ability of such AMF isolates to survive in the new soil environment and compete with native symbionts, while maintaining their potential beneficial traits. In the years to come, the availability of AMF showing promising beneficial characteristics will allow the formulation of innovative consortia, to be commercially reproduced and utilized as a viable alternative or in addition to current ones. Such newly designed consortia could be used as inoculants for increasing plant water and nutrient use efficiency, in turn enhancing crop productivity and resilience toward drought stress in sustainable food production systems under climate change.
|
Results from a cross-specialty consensus on optimal management of patients with chronic kidney disease (CKD): from screening to complications | 108257b9-f811-4c19-a38c-9c1eb3342862 | 10921537 | Internal Medicine[mh] | Chronic kidney disease (CKD) is defined as abnormalities of kidney structure or function, which are present for >3 months. This is indicated by a glomerular filtration rate (GFR) below 60 mL/min/1.73 m² or the presence of one or more markers of kidney damage. CKD is classified based on cause, GFR category and albuminuria category. Diabetes is the leading cause of CKD, followed by hypertension, suggesting a close relationship between the cardio-renal-metabolic (CRM) systems. The interplay between CKD, cardiovascular disease (CVD) and metabolic diseases such as type 2 diabetes mellitus (T2DM) is significant and has been termed ‘CRM’ disease. It has been suggested that CKD affects around 50% of people with T2DM, around 50% of heart failure (HF) patients, and up to 38% of those with hypertension, suggesting the need for a holistic approach to management of patients with CKD. Between 1990 and 2017, the prevalence of CKD was estimated to rise to 9.1% of the global population, with associated mortality increasing by 41.5%. This is closely associated with an increase in populations with risk factors such as diabetes, hypertension and pre-diabetes. As a consequence of these factors, 1.2 million people died in 2017 as a consequence of CKD. Early diagnosis and referral of CKD are key to reducing or avoiding progression to kidney failure, and reducing morbidity and mortality. The absence of symptoms in early stages of CKD requires that clinicians maintain suspicion in all patients, especially those with risk factors. Accurate screening, diagnosis and risk stratification need to be in place to support the aim of ‘earliest diagnosis’. A diagnosis of CKD is determined by laboratory confirmation of proteinuria or haematuria, and/or a reduction in the GFR, for more than 3 months' duration. Two key markers of CKD are albuminuria (defined as a urinary albumin-to-creatinine ratio (uACR) of >30 mg/g) and a reduction in estimated GFR (eGFR), as calculated by the CKD-EPI equation based on creatinine alone (eGFRcr) or on creatinine and cystatin C (eGFRcr-cys). In practice, these methods may not be used effectively: an analysis of over 123 000 patient records reported a low frequency of uACR testing in patients with CKD, despite strong epidemiological evidence linking increased albuminuria with disease progression, kidney failure, cardiovascular events and premature mortality. Besides lifestyle modifications, disease-modifying therapies (DMTs) for CKD include renin–angiotensin–aldosterone system inhibitors (RAASi) and sodium-glucose co-transporter-2 inhibitors (SGLT2i), as recommended by Kidney Disease Improving Global Outcomes (KDIGO) for use in patients with CKD with hypertension or diabetes, and non-steroidal mineralocorticoid receptor antagonists as add-on therapy in patients with T2DM and residual albuminuria. Despite the availability of these therapies, analysis of two large US healthcare systems showed that even though nearly two-thirds of the adults with CKD had diabetes, hypertension or pre-diabetes, rates of prescribing RAASi were low. Not only was DMT use below guideline-directed levels, but potentially nephrotoxic agents (such as non-steroidal anti-inflammatory drugs and proton pump inhibitors) were used more commonly than RAASi.
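The thresholds quoted above (GFR below 60 mL/min/1.73 m², uACR above 30 mg/g, persistence for more than 3 months) can be expressed as a small classification routine. The sketch below is purely illustrative and is not part of the consensus paper; it assumes the standard KDIGO GFR (G1–G5) and albuminuria (A1–A3) category cut-offs, and the function names and example values are invented for demonstration.

```python
def gfr_category(egfr: float) -> str:
    """KDIGO GFR category (G1-G5) from eGFR in mL/min/1.73 m2."""
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"


def albuminuria_category(uacr_mg_g: float) -> str:
    """KDIGO albuminuria category (A1-A3) from urinary albumin-to-creatinine ratio in mg/g."""
    if uacr_mg_g < 30:
        return "A1"
    if uacr_mg_g <= 300:
        return "A2"
    return "A3"


def meets_ckd_definition(egfr: float, uacr_mg_g: float, months_present: float) -> bool:
    """CKD definition used above: eGFR < 60 or a marker of damage (eg, uACR > 30 mg/g) for > 3 months."""
    return months_present > 3 and (egfr < 60 or uacr_mg_g > 30)


# Hypothetical patient: eGFR 48 mL/min/1.73 m2 and uACR 120 mg/g, stable for 6 months
print(gfr_category(48), albuminuria_category(120), meets_ckd_definition(48, 120, 6))
# -> G3a A2 True
```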
There is clearly some disconnect between guideline recommendations and real-world practice. Guidelines for the management of CKD may not be clear or made known to primary care practitioners (PCPs). The objective of this project is to use a modified Delphi technique to examine the opinions of healthcare professionals (HCPs) towards aspects of CKD management across 11 countries, report these findings and develop practical recommendations for diagnosis and management of CKD. A steering group of experts (see author list) from 11 different countries convened in September 2022 to discuss current challenges in CKD management. The experts were defined as specialists in nephrology, endocrinology/diabetology, internal medicine and primary care medicine who had achieved an appropriate level of seniority within their field (eg, professors, clinical directors), or had published papers related to the management of CKD/HF/Diabetes, or had been involved in guidelines development. The steering group members were selected to provide representation across a range of development indicator levels external to Europe and North America (to avoid replication of previous published work ). Steering group members were recruited to represent countries outside of North America and Europe. Central and South America, Southeast Asia, Middle East and Africa were initially targeted. The process of recruitment involved identification of potential group members from each of these regions using desk research, followed by a snowball method until all target regions had at least one representative on the steering group. Although a wide representation was aimed, it is a limitation that there were not enough members from low-income countries. The Delphi technique used in this study was guided by Guidance on Conducting and REporting DElphi Studies. While guidelines exist, outcomes for CKD vary between countries, a modified Delphi approach was employed to understand where common care processes differ in local use, and how the attitudes of HCPs to elements of CKD management differ between countries. The overall process is outlined in . Six themes for focus for statement development were agreed , these were discussed further, and statements developed by the steering group working collaboratively. The statements were then collated, and the steering group independently rated the statements as either ‘accept’, ‘remove’ or ‘reword’ with suggested changes (as determined by a simple majority) with the potential for further group consultation for any significant differences of opinion on the fundamental principles of any statement. Once finalised, the steering group was agreed the final set of statements for testing. This constituted the initial round of consensus. The resulting 42 statements were developed into a Likert survey, which was then distributed by a third party (Sermo) in round 2 of the process. Panel members were recruited based on the following criteria: Employed within 1 of the 11 target countries. 25 respondents from each country in a broadly 2:1:1:1 ratio of primary care physician, consultant endocrinologist, consultant cardiologist, consultant nephrologist (or local equivalent). The identity of respondents was not known to the steering group or the independent facilitator. For each statement, respondents were offered a 4-point scale (‘strongly disagree’, ‘tend to disagree’, ‘tend to agree’ and ‘strongly agree’) to indicate their level of agreement with each statement. 
The survey also captured country, specialty, length of time in role and average number of patients with CKD managed over 3 months. Stopping criteria for data collection were defined as a target of 25 responses from each country (N=275) and a 4-week survey period. The target countries were chosen to reflect the steering group demographic; this would enable each steering group member to provide insight into results from their respective country. Panel members were recruited from the following countries: Argentina (UMI), Australia (HI), Brazil (UMI), Egypt (LMI), Guatemala (UMI), Mexico (UMI), Singapore (HI), South Korea (HI), Taiwan (HI), Thailand (UMI) and Turkey (UMI) (HI—high income, LMI—lower-middle income, UMI—upper-middle income, according to World Bank Classification). The closing criteria for the study were defined a priori in line with best practice principles as: 90% of the final statement set achieving the consensus threshold (defined at 75%, a widely accepted threshold). If these criteria were not met, statements would be modified and the survey reissued as necessary, for a maximum of three rounds. Consensus was further categorised as ‘high’ at ≥75% and ‘very high’ at ≥90%. Completed surveys were analysed using Microsoft Excel software. The responses were aggregated to provide an overall agreement level (ie, the number of respondents expressing agreement as a percentage of the overall number of responses for each statement). Patient and public involvement: none; the stated objective was to examine the opinions of HCPs towards aspects of CKD management across 11 countries. Of the 45 initial statements created by the steering group, 32 were retained, 8 were modified, 5 were removed and 3 new statements were added. The final set was then sent out to the group via email for acceptance prior to progressing to round 2. In round 2, completed surveys from a panel of 274 were analysed to define the total level of agreement with each of the 42 statements. Respondents were either PCPs or consultants in cardiology, nephrology and endocrinology (including diabetology). Most responders had more than 10 years' experience in role (159/274) and only 13% (n=35) had less than 5 years in role. When asked to estimate the number of patients seen in a 3-month period, 177 responders (65%) stated that they saw more than 50 patients (either as inpatients or outpatients). This suggests the respondent cohort was sufficiently experienced to provide insight into aspects of CKD. Consensus was achieved for all statements, with very high agreement (>90%) in 37 (88%) statements, and high agreement (≥75% and ≤89%) in 5 (12%) statements. Overall consensus agreement by statement is shown in and ; detailed results showing percentages of agreement at each level can be found in and . As none of the statements failed to reach the predetermined threshold of 75%, only one round of survey was required. The results of the survey represent the current opinions of the respondents and are not intended to contradict the established evidence base. Anonymised round 2 results data are available in .
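The agreement calculation and consensus categories described above can be illustrated with a short script. This is a hypothetical sketch, not the authors' Excel analysis: the vote labels mirror the 4-point scale reported in the methods, the 75%/90% thresholds come from the text, and the example votes are invented.

```python
from collections import Counter

SCALE = ("strongly disagree", "tend to disagree", "tend to agree", "strongly agree")


def agreement_percentage(votes):
    """Share of respondents expressing any agreement ('tend to agree' or 'strongly agree')."""
    counts = Counter(votes)
    agreeing = counts["tend to agree"] + counts["strongly agree"]
    return 100.0 * agreeing / len(votes)


def consensus_category(percentage):
    """Consensus categories used in this study: >=75% high, >=90% very high."""
    if percentage >= 90:
        return "very high consensus"
    if percentage >= 75:
        return "high consensus"
    return "consensus not reached"


# Hypothetical responses to one statement from a 25-member national panel
votes = ["strongly agree"] * 15 + ["tend to agree"] * 8 + ["tend to disagree"] * 2
pct = agreement_percentage(votes)
print(f"{pct:.0f}% agreement -> {consensus_category(pct)}")  # 92% agreement -> very high consensus
```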
Note: in the discussion below, ‘S’ is used to denote ‘statement’, and the statements are discussed under the topics agreed by the steering group: earlier identification & screening of CKD (S1–9); risk factors for CKD in CRM patients (S10–15); holistic management of CKD in CRM patients (S16–27); guidelines (S28–35); cross-specialty alignment between cardiology, nephrology, endocrinology, primary care and policy makers (S35–39); education of clinicians and patients (S40–42); strengths and limitations; and recommendations. Earlier identification & screening of CKD (S1–9): Very high agreement (>90%) was observed for all statements in this topic, underscoring the key principle that early diagnosis of CKD is key to implementing strategies to slow disease progression. Responses suggest strong support for the need for national kidney health screening and diagnostic programmes. Universal screening of the general population has been found not to be cost-effective, but systematic review has shown screening to be cost-effective in patients with hypertension or diabetes. Indeed, a KDIGO Controversies Conference concluded that targeted groups, such as those with hypertension, diabetes or CVD, should be screened for CKD, and that an individualised approach should be taken to screen others based on a range of factors. Respondents agree that patients with risk factors should be screened for CKD at least annually by using eGFR and uACR where available, but identification of CKD is also challenging where awareness among healthcare staff and health literacy in the general population is poor; it is, therefore, recommended that national initiatives to improve these issues should support any screening programme. A minimal-resource prescreening tool has been developed and globally validated for CKD in people with T2DM. This demonstrates that age, gender, body mass index, duration of diabetes and blood pressure information can be used to identify those at an increased risk of CKD. This model does not require sophisticated diagnostics and can be used to guide cost-effective screening for CKD where resources are limited. Risk factors for CKD in CRM patients (S10–15): All statements in this topic achieved >90% consensus apart from statement 14 (85%). This is interesting and could be due to the wording of the statement and the specific use of COVID-19 as an example. Reference to COVID-19 was included in the statement as it was considered relevant by the steering group at the time of statement generation, when evidence was beginning to emerge of potential kidney injury associated with COVID-19. Analysis by country found that agreement in Taiwan was at 68% for this statement, noticeably lower than for other countries. Although 93% of respondents agree that acute kidney injury (AKI) is a risk factor for CKD and HF, it is interesting that 7% do not agree given the evidence base. Holistic management of CKD in CRM patients (S16–27): Given the established interplay between the cardiac, renal and endocrine systems, it is heartening to see that 99% of respondents agree that a holistic approach is needed to provide personalised care for individuals with CKD. Where practical, models of care should be developed to deliver integrated multidisciplinary care to patients with CRM disease, as described by the CardioMetabolic Care Alliance, to use comprehensive, patient-centred, team-based approaches for aggressive secondary prevention. Implementing combined clinics to deliver medical care for patients with kidney disease and diabetes or CVD may reduce outpatient healthcare costs without compromising health outcomes.
Combined multidisciplinary clinics for diabetes, CKD and CVD are also associated with a slower decline in GFR than usual care, and a significant reduction in the risk for all-cause mortality. However, global variation exists regarding resource availability and access to specialist care, and the use of multidisciplinary teams (MDTs) is not universal. In these regions, HCPs are encouraged to develop local contacts/networks to allow for interdisciplinary discussions of CKD cases where needed. The broad objective of this consensus is to promote the earlier detection and management of CKD, and this principle applies regardless of resource limitations. As CKD-associated healthcare costs increase with disease progression, the economic argument supports this approach, and in patients with limited access to specialist care (including DMT), the role of patient education is paramount in improving health literacy and promoting lifestyle changes to reduce CKD risk. Coupled with this, the health system should look at how interventions such as DMTs can be used to slow progression of CKD and the need for dialysis (and associated costs), cost savings from slower progression to dialysis dependence might be reinvested in detection and diagnosis/national screening of high-risk patient groups. Very high agreement was observed for 10 of the statements in this topic, with 2 statements at lower levels of agreement (S23 and S25, 82% and 75%, respectively). Response to statement 23 is interesting, given the current debate regarding the benefits of stopping RAASi in advanced CKD. Results from a 52-week open-label UK study in patients with CKD stages 4–5 concluded ‘The STOP-ACEi trial did not find any benefit by stopping RAASi in advanced CKD’, and that ‘stopping RAASi in advanced CKD at an arbitrary GFR threshold is not the optimal approach’. Large observational studies have confirmed the cardioprotective benefits of RAASi in advanced CKD (including patients with concomitant T2DM). Linked to this is statement 21, respondents agree that a small decrease in eGFR is to be expected on initiation of DMT for CKD. During initiation and uptitration of RAASi treatment, a decline in kidney function of up to 30% within 4 weeks can be acceptable and it is important to avoid a knee-jerk response and reduce or stop DMT, and consultation with a nephrologist or MDT is recommended. Hyperkalaemia (HK) is potentially life-threatening; it may be acute or chronic and individuals with CKD are at an increased risk, which increases with the later stages of CKD. HK treatment is often stratified by serum potassium level, with borderline levels warranting modification of dietary potassium intake, followed by pharmaceutical management for larger or more sustained increases. Evidence to support recommending low potassium diets to patients with advanced CKD or ESRD is weak. Observational studies report weak associations between dietary potassium intake and potassium concentration and this approach may deprive patients of the beneficial cardiovascular effects associated with potassium-rich diets. When HK occurs, RAASi dose reduction or discontinuation are the most common used therapeutic options, but this approach is associated with worse cardiorenal outcomes and increased mortality. Once stopped, RAASi treatment is rarely reinitiated. In cases of mild-to-moderate HK, DMT should be maintained where possible, and an option for achieving this is using a potassium-lowering therapy such as patiromer or sodium zirconium cyclosilicate (S24, 91%). 
While the threshold for intervention in HK may vary by country and even individual physician, we can conclude that action may be considered when K + concentration reaches 5.0 mmol/L (S25, 75%) and certainly when at 5.5 mmol/L (S26, 93%) but with a degree of personalisation (in consultation with a nephrologist). Statements 25 and 26 were included to try and understand where the weight of opinion fell regarding interventions to manage HK, both statements achieved consensus, but the strongest agreement was for statement 26. On reflection, these statements could have been worded better to understand what would constitute appropriate action in these circumstances. Respondents very strongly agree that SGLT2i and RAASi therapies have a complementary cardiorenal protective action (S18, 98%) and that in patients with T2DM and HF, early SGLT2i use can prevent the development and progression of CKD (S19, 96%), in fact, SGLT2i may slow progression of CKD in patients without T2DM (S20, 91%). It is, therefore, recommended that SGLT2i and RAASi therapies are initiated as early as possible to both delay CKD progression and to therapeutically address the full CRM continuum—including T2DM and HF. The interplay between CKD, diabetes and CVD has been discussed, but CKD is associated with a number of comorbidities. A prospective UK cohort study of 1741 people with CKD stage 3 found that isolated CKD (without comorbidities) was present in only 4% of patients, and that common comorbidities include ‘painful condition’ (30%), anaemia (24%), thyroid disorders (12%), cerebrovascular disease (12%) and respiratory conditions (10%). These comorbidities should be identified and managed as part of a multidisciplinary approach. While all statements in this topic achieved consensus threshold, the steering group were surprised that 23% of respondents did not agree that implementation of CKD guidelines is suboptimal. The lowest agreement levels were observed in South Korea (68%) and Guatemala (72%). This could be either genuine strength in implementation of guidelines in these countries (which should be investigated and replicated elsewhere), or the need for continuing professional development even among experienced physicians. Due to the vital role of primary care must play in identifying and referring suspected patients with CKD, respondents agree that clear guidelines for referral of patients are considered essential (S32, 98%). PCPs should be provided with clear criteria for referral and followed through with continuing education from an appropriate specialist (ideally a nephrologist). The focus on prevention can only be achieved through a whole-system approach with appropriate investment in awareness, screening and diagnosis programmes (S35, 96%). Very high levels of agreement suggest strong support for multidisciplinary involvement in CKD management, and that PCPs should form part of the MDT. Given the prominent status of CKD as an emerging global health threat, there is a need to ensure that plans and funding are in place to enable optimal care. Stakeholders should engage with both local and national policy-makers to ensure that CKD is appropriately prioritised. Most responders agree on the need for up-to-date education for all HCP roles involved in CKD management, and that PCPs require structured education. Education of patients is a low-cost intervention, to encourage kidney healthy lifestyles and improve awareness of CKD among high-risk populations. 
274 responses received with good representation from experienced physicians across 11 different countries is a strong basis for consensus, the study was designed to ensure that each country was represented by an equal number of responders. The majority of responders reported more than 10 years experience in role (159/274), suggesting that the results represent the views of experienced physicians. This study had some key limitations. Very high levels of agreement were achieved across the statements, this which may suggest that the statements were constructed as to achieve agreement (confirmation bias), or that perhaps they represent established good practice, a possible improvement to the process might be to provide survey responders (or a subset) with the opportunity to add or amend the statements prior to the full survey. It must also be acknowledged that associations with pharmaceutical companies may have led to unconscious bias the development of the consensus statements, but the study was designed to minimise the impact of this bias on results: the identity of steering group members was not disclosed to survey responders, and the identity of responders was not known to either the facilitator or the steering group. The lack of representation on the steering group (and subsequently the wider panel) of members from low-income and middle-income countries (LMICs) was a significant limitation. While the survey covered 11 countries of varying economic status, the range was from lower-middle income to high, a clear gap exists, namely those countries classified as ‘low income’ (eg, sub-Saharan Africa). Consequently, the findings of this study may not be generalisable to LMICs. In order to make such recommendations, a further Delphi process would be required specifically to engage with this demographic, this would provide insight into the specific challenges and considerations and allow for comparison with the current dataset. Aspects of patient choice and empowerment and consideration of the patient experience (outside of treatment outcomes) have not been discussed, this can be considered a limitation as the patient perspective may have significant bearing on the practicability and implementation of CKD management. Early screening for CKD in high-risk groups is cost-effective for the health system (where resources are in place to support intervention). GFR (estimated by the CKD-EPI Creatinine Equation) and uACR (using albumin-to creatinine ratio) should be the screening method of choice for CKD. The CKD MDT should include the primary care physician to improve early intervention and decision-making. Intervene early in patients with CKD with proven therapies (ie, SGLT2i, RAASi, DMTs) to delay/prevent progression to kidney failure. RAASi therapies should be optimised in all patients with CKD and HF. Patients with CKD should have their HK managed appropriately when serum potassium level rises above >5.0 mmol/L. Chronic HK should be treated with a potassium binder (eg, patorimer, sodium zirconium cyclosilicate) to allow for the maintenance of DMTs. Guidelines should be practical with an executive summary/checklist to assist implementation by non-specialist HCPs. Guidelines should include clear criteria for when and how to refer to other specialists/MDT. Nephrologists should work together to deliver consistent and clear education regarding CKD management. 
Clinicians, professional associations, academic institutions and patient representative organisations need to engage with policy makers to ensure appropriate plans and funding are in place to deliver optimal CKD care. National kidney health programmes should be implemented to drive improvements to screening and diagnosis. The steering group was able to form a set of recommendations specific to the 11 participant countries ranging from lower-middle income to high income and relevant to other countries with a similar demographic. Implementation of these recommendations has the potential to improve detection of CKD at an earlier stage in patients with risk factors. Earlier diagnosis provides the opportunity for early intervention with DMTs that can slow or halt the progression of CKD, in addition to reducing associated morbidity and mortality. |
Hemoglobin A1C as a prognostic factor and the pre-diabetic paradox in patients admitted to a tertiary care medical center intensive cardiac care unit | 35c57317-b96f-404d-8ac0-a95bc954325e | 9153197 | Internal Medicine[mh] | Type 2 diabetes mellitus (DM) is a known risk factor for cardiovascular diseases , and patients with DM and cardiovascular disease suffer from higher morbidity and mortality as compared with non-diabetic patients [ – ]. Moreover, studies have shown a progressive relationship between plasma glucose levels and cardiovascular risk, and even pre-DM patients are at increased risk for cardiovascular diseases . Also, in patients hospitalized due to acute coronary syndrome (ACS), a higher plasma glucose level at admission is associated with higher mortality risk. This association is seen both in patients with and without a diagnosis of DM . Glycated hemoglobin A1c (HbA1c), the major fraction of glycated hemoglobin, is formed by irreversible non-enzymatic glycation. Its concentration depends only on the red blood cell life span and blood glucose level . Thus, it is an indicator for blood glucose concentrations in the preceding 2–3 months. It is of great significance for monitoring the regulation of diabetes and the risk for complications. Furthermore, the American Diabetes Association (ADA) and World Health Organization (WHO) recommend using HbA1c for the diagnosis of DM . Data regarding HbA1c and outcomes in contemporary intensive coronary care units (ICCUs) is limited. An observational study performed in a medical intensive care unit (MICU) found that HbA1c testing in patients with stress hyperglycemia during hospitalization reveals undiagnosed diabetes in 14% of patients. Moreover, hyperglycemia with lower baseline HbA1c was associated with increased mortality as well . Furthermore, in patients with ACS, acute glycemic control, as estimated by plasma glucose levels, rather than the chronic pre-existing glycemic state as estimated by HbA1c affects prognosis . Hence, we aimed to investigate the prognostic significance of admission HbA1c levels among contemporary ICCU patients in a tertiary care medical center.
Study population: We performed a prospective single-center observational cohort study at the Shaare Zedek Medical Center, a tertiary referral hospital and one of the 2 largest medical centers in Jerusalem. The study population consisted of non-selected consecutive patients admitted to the ICCU between 1 January 2020 and 30 June 2021. We included only patients for whom HbA1c levels on admission were documented. Patients were divided into 3 groups according to their HbA1c levels: < 5.7 g% [no diabetes mellitus (no-DM)], 5.7–6.4 g% (pre-DM) and ≥ 6.5 g% (DM). The division into groups was done according to the position paper of the ADA: Classification and Diagnosis of Diabetes: Standards of Medical Care in Diabetes—2019 . Obesity was defined by a body mass index (BMI) > 30. Demographic data, comorbid conditions, medications, physical examination, laboratory findings, in-hospital complications, length of stay (LOS), and in-hospital mortality were systematically recorded. In-hospital complications were defined as the occurrence of acute heart failure, left ventricular thrombus, shock, recurrent myocardial infarction or stent thrombosis, malignant arrhythmias, mechanical complication (free wall rupture or ventricular septal rupture), acute renal failure, severe bleeding requiring blood transfusion, vascular complication, cerebrovascular accident, anoxic brain damage and sepsis.
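For illustration only, the grouping rule described above can be written as a one-line classifier; the cut-offs are the ADA values quoted in the text, while the function name and example values are ours.

```python
def hba1c_group(hba1c_percent: float) -> str:
    """Assign the study groups defined above from admission HbA1c:
    < 5.7 g% no-DM, 5.7-6.4 g% pre-DM, >= 6.5 g% DM (ADA cut-offs)."""
    if hba1c_percent < 5.7:
        return "no-DM"
    if hba1c_percent < 6.5:
        return "pre-DM"
    return "DM"


for value in (5.4, 6.1, 7.2):
    print(value, "->", hba1c_group(value))  # no-DM, pre-DM, DM
```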
The primary outcome of our study was overall mortality, with a follow-up of up to 1.5-year from the time of index hospitalization. Every death in Israel is documented in a central database of the Israeli Ministry of the Internal Affairs and is updated in the hospitals’ medical records. We used these records to examine the overall mortality rates among the study participants. The study’s secondary outcomes included: (a) comparison of patients’ characteristics among the different HbA1c levels; and (b) in-hospital interventions and complications during the index hospitalization among the different HbA1c levels.
Patients’ characteristics were presented as numbers (%) for categorical variables and as means (SD) or medians (IQR) for normal and non-normal distributed continuous variables, respectively. A chi-square test was used for the comparison of categorical variables. Analysis of variance (ANOVA) test and Kruskal–Wallis test were performed for comparison of normally and non-normally distributed continuous variables, respectively. For the post-hoc analyses, we used the Bonferroni correction method. Hazard ratios (HRs) and corresponding 95% confidence intervals (CIs) for the association between the HbA1c group and mortality were estimated using a Cox proportional-hazards model. The model included the following potential confounders: age, gender, hypertension, hyperlipidemia, diabetes mellitus diagnosis, in-hospital complication, prior intervention, heart failure, and chronic kidney disease. Data were censored at death or at the end of the study period. Kaplan Meier survival curves were compared using the log-rank test. Then, the pre-DM and DM were grouped together and the same analyses were performed. All tests were conducted with a two-sided overall 5% significance level (α = 0.05). All analyses were performed using R software (R Development Core Team, version 4.0.3, Vienna, Austria).
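The survival analysis described above was performed in R; purely as an illustration of the same workflow, the sketch below re-expresses it in Python using the lifelines package. The data file and all column names (time_days, died, hba1c_group and the listed confounders) are assumptions for demonstration, not the authors' dataset.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Assumed layout: one row per patient, follow-up time in days, a death indicator,
# the HbA1c group and the confounders listed above (all names are hypothetical).
df = pd.read_csv("iccu_cohort.csv")

cox_df = pd.get_dummies(
    df[["time_days", "died", "hba1c_group", "age", "gender", "hypertension",
        "hyperlipidemia", "dm_diagnosis", "in_hospital_complication",
        "prior_intervention", "heart_failure", "chronic_kidney_disease"]],
    columns=["hba1c_group", "gender"], drop_first=True)

# Cox proportional-hazards model adjusted for the listed confounders
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="time_days", event_col="died")
cph.print_summary()  # adjusted hazard ratios with 95% confidence intervals

# Kaplan-Meier curves by HbA1c group, compared with a log-rank test
kmf = KaplanMeierFitter()
for group, subset in df.groupby("hba1c_group"):
    kmf.fit(subset["time_days"], event_observed=subset["died"], label=str(group))

logrank = multivariate_logrank_test(df["time_days"], df["hba1c_group"], df["died"])
print(logrank.p_value)
```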
A total of 1739 patients were included in the study. HbA1c on admission was documented in 1412 (81%) patients. Of them, 550 (39%) patients were defined as no-DM, 458 (32.4%) were defined as pre-DM and 404 (28.6%) were defined as DM patients (Table ). Interestingly, 81/870 (9.3%) patients had high HbA1c levels ≥ 6.5 g% on admission, but did not know they had DM.
Patients in the no-DM group were younger as compared with patients in the other groups [mean age 63.6 (95% CI 62.1–65.1) vs. 70.3 (95% CI 69.1–71.6) in the pre-DM group and 68.6 (95% CI 67.5–69.8) in the DM group; p < 0.001]. The percentage of patients in the pre-DM group increased with patients' age, while for the DM group, it peaked in the seventh decade of life and then declined (Fig. ). A higher proportion of men was observed in the DM group as compared with the no-DM and pre-DM groups (74% vs. 66.9% and 67%, respectively; p = 0.035). Obesity rates were higher in the pre-DM (34.1%) and DM (34.9%) groups vs. the no-DM group (24.5%) (p < 0.001). Lastly, patients in the pre-DM and DM groups had higher rates of hypertension, hyperlipidemia, chronic kidney disease, coronary artery disease, heart failure and cardiomyopathy, pulmonary hypertension, atrial fibrillation/flutter, cerebrovascular disease and peripheral artery disease as seen in Table .
In-hospital complication rates were similar between the groups (32.6% in the no-DM group vs. 28.0% and 26.9% in the pre-DM and DM groups, respectively; p = 0.313) (Table ). Crude in-hospital mortality rates were higher in the pre-DM and DM groups as compared with the no-DM group (3.7% and 3.0% vs. 1.5%, respectively; p = 0.072). The combined group of pre-DM and DM patients had a higher crude mortality rate when compared with no-DM patients (12.6% versus 8.2%; p = 0.01).
The mortality rates during the follow-up period were 8.2% in the no-DM group vs. 12.5% and 12.8% in the pre-DM and DM group, respectively (p = 0.035). Interestingly, although in a multivariate analysis, pre-DM and DM states were associated with higher mortality rates [HR 1.84, (95% CI 0.81–2.97); p = 0.184], the Pre-DM patients had the strongest association with mortality rate [HR 1.83, (95% CI 0.936–3.588); p = 0.077]. (Tables and ; Figs. and ). Other factors found to be independently associated with mortality were: age; a history of heart failure; structural heart disease and valvular disease, and in-hospital complications (Table ).
In this large prospective trial in a tertiary care medical center ICCU, the prevalence of pre-DM (32.4%) and DM (28.6%) patients were similar to several other studies involving cardiac and ICCU patients . Around 9% of patients with no previous diagnosis of DM had HbA1c levels on admission in the diabetic range. Our data show that in non-selected consecutive ICCU patients the highest risk for in-hospital and overall mortality rate is among the pre-DM and DM subgroups. Surprisingly, we found that the highest mortality risk tends to be among patients with pre-DM and not in the patients with DM. Pre-DM is an intermediate stage of glycemic control with glycemic parameters above normal but below the diabetes threshold. It is a state with a high risk of conversion to overt DM (5–10% per year) and is associated with various complications of DM, including cardiovascular complications . A recent meta-analysis including more than 10-million individuals has shown that a pre-DM state is associated with an increased risk of all-cause mortality and cardiovascular disease in the general population and in patients with atherosclerotic cardiovascular disease . However, despite the association between pre-DM and adverse cardiovascular outcomes, the recommended treatment remains focused on changing lifestyles and only suggests considering pharmacotherapy with metformin. This is in contrast to the given medical treatment and growing use of newer anti-glycemic treatments, such as sodium-glucose co-transporter-2 (SGLT2) inhibitors and glucagon-like peptide-1 (GLP1) agonists in patients with overt DM. The pre-DM paradox, in which the highest mortality risk tends to be among patients with pre-DM, is consistent with data presented in a number of recent studies. Yahyavi et al. showed that among ambulatory patients, the 12-month risk of major adverse cardiovascular events was highest in subjects with HbA1c just below the diagnostic threshold for diabetes—the pre-DM patients . They found an adjusted hazard ratio of 2.53 in the pre-DM group as compared with 2.46 in the DM group. Importantly, they also found a lower cumulative incidence of initiation of cardioprotective and glucose-lowering medications among patients just below the diagnostic threshold for DM, as compared with patients with a DM diagnosis. This might explain why those patients are at increased risk for mortality. They are less likely to receive self-management and lifestyle modification education compared with DM patients. These findings will mainly affect the long-term prognosis, as in our study. However, several other trials have shown similar findings with regard to the short-term prognosis, among a diverse group of patients. Kim et al. have found that pre-DM condition, unlike DM, was a significant predictor of short-term neurological outcomes and in-hospital mortality among patients with acute ischemic stroke . It should be mentioned that in this study, the initial glucose measurement was higher in the DM group as compared with the pre-DM and normoglycemic groups. Hence, the higher complication rate cannot be attributed to stress hyperglycemia as demonstrated in previous studies . This study also shows that pre-DM patients had a significantly lower rate of preadmission statin treatment (19.3% vs. 30%) and also lower rates of antiplatelet treatment. This, again, suggests that pre-DM patients may have been alienated from appropriate medical measures despite their cardiovascular risk. 
Furthermore, a recently published study investigated the impact of pre-DM and DM on the 3-year outcome of patients treated with new-generation drug-eluting stents using post-hoc analyses of two large-scale randomized clinical trials (the BIO-RESORT and BIONYX trials). This study has shown that after treatment with new-generation drug-eluting stents, both patient groups had higher risks of ischemic and bleeding events compared with non-DM patients. Differences in major bleeding were mainly attributable to dissimilarities in baseline characteristics . Our findings further support the findings of these previous studies that a pre-DM state is a significant risk factor for cardiovascular complications and mortality. Moreover, our study population were patients admitted to a tertiary care medical center ICCU with an acute cardiovascular disease, hence, had a higher risk for cardiovascular complications and mortality during follow-up. In light of the findings, the need arises for research regarding the effects of current diabetes treatments in patients with pre-DM state and whether the treatment recommendations for primary and secondary prevention should be changed accordingly. Study limitations Our study has several limitations: First, the study was an observational study and, as such, is subjected to confounding factors. Second, it is a single-center study. Lastly, we did not have data comparing medications on admission to the different groups in the study. However, this fact is less likely to affect the external validity of the study as long as pre-DM treatment is not significantly different from what is common elsewhere. Nevertheless, our study includes a large sample size of consecutive non-selected ICCU patients and includes real-world data about patients with various cardiac diseases who required hospitalization in the ICCU, which contributes to the external validity of the study.
Pre-DM and DM are common among ICCU patients. Among these patients, an HbA1c level of ≥ 5.7 g% (pre-DM & DM) is associated with a worse prognosis. Moreover, pre-DM patients probably have the highest risk for mortality following admission to the ICCU. Further studies are needed to better understand the reasons for this pre-DM paradox.
|
Impact of regenerative procedure on the healing process following surgical root canal treatment: A systematic review and meta-analysis | 6b579e36-9f39-4ab4-9915-62f4940dfcbb | 11695025 | Dentistry[mh] | Bacterial infection can result in pulpal inflammation, ultimately leading to pulp necrosis and periapical lesions . While conventional Root Canal Treatment (RCT) is the first-line treatment, Surgical Root Canal Treatment (SRCT) is recommended in cases where non-surgical RCT is unsuccessful . The main objective of SRCT is to create an optimal environment for periapical tissue repair. This is typically achieved by eliminating infections and inaccessible areas within the root canal system and preventing future infections . Success in both RCT and SRCT relies on the absence of signs of infection and inflammation, along with radiography showing reduced periapical lesion size and normal growth of the periodontal ligament gap . The evaluation of healing after SRCT is commonly conducted using the criteria established by Rud et al. and Molven et al. on 2D imaging . On the other hand, the Modified PENN 3D criteria have been used to evaluate healing on 3D imaging. Wound healing after SRCT can result in repair or regeneration. Repair involves the development of new tissue that differs from the original cells, while regeneration involves wound healing using cells from a similar tissue . However, the potential for connective tissue to invade the bony defect can interfere with the healing process . Guided Tissue Regeneration (GTR) procedures have shown promise in periodontology and dental implants and have gained interest as a supplementary approach for SRCT to improve healing and prevent soft tissue collapse within the bony defect . Different materials for GTR can be used in SRCT, including barrier membranes, bone grafts, and Autologous Platelet Concentrates (APCs), either alone or in combination . The first commercially produced material was expanded polytetrafluoroethylene (e-PTFE), but its complete removal requires additional surgery . Resorbable membranes, like collagen membranes, were developed in the 1990s to avoid the need for surgical removal . Different types of bone grafts offer unique advantages, disadvantages, and success rates. Familiar sources include autogenous, xenograft, allograft, and alloplast . The gold standard for bone grafting is autogenous grafts for their ability to promote osteogenesis, osteoinduction, and osteoconduction characteristics . However, they have disadvantages such as longer surgical time, morbidity, and limited bone supply . Xenograft bone, taken from animals like bovines, is becoming more popular for its osteoconductive properties . Allograft bone, donated between genetically dissimilar individuals, can be osteoconductive or osteoinductive without additional surgery. Alloplasts are synthetic materials considered exclusively osteoconductive . GTR using APCs showed high amounts of cytokines and growth factors, making it a promising option for tissue regeneration . These cells are obtained from the patient’s peripheral blood, making the procedure safer, well-tolerated, and cheaper . Meanwhile, platelet-rich plasma (PRP), the first generation of APCs, is difficult to prepare and requires anticoagulants . The second generation of platelet-rich fibrin (PRF) can be obtained through a single centrifugation method . A third-generation injectable PRF (i-PRF) was developed in 2014 using a different centrifugation force and plastic tubes, reducing clotting time . 
Many studies have evaluated the efficacy of GTR procedures. However, there is still debate about their impact on improving success rates after SRCT. Therefore, a systematic review and meta-analysis of these studies are necessary to aid clinicians in making informed decisions for successful SRCT. This study aimed to evaluate the impact of GTR procedures on the healing process following SRCT.
The systematic review and meta-analysis followed PRISMA 2020 and the Cochrane Handbook for Systematic Reviews of Interventions . The systematic review process was registered in PROSPERO (CRD 42023477089). The PICOST strategy was utilized to formulate the clinically relevant question: Among individuals having Surgical Root Canal Treatment (P), will the application of a GTR procedure (I) versus not using a GTR procedure (C) have an impact on the healing process (O) in randomized clinical trials (S) after one-year follow-up (T)? Search strategy: The online search was carried out independently by three researchers (N.M., X.G., and A.S.) to locate relevant studies. The electronic databases PubMed, Embase, Scopus, Cochrane, and Web of Science were searched until February 25, 2024. Grey literature was found using Google Scholar and OpenGrey. No restrictions were placed on the publication date or language. Specific keywords were merged using Boolean operators, and the MeSH terms were incorporated into the electronic search strategy . In addition, manual searches were also conducted by checking the reference lists of relevant articles.
This systematic review compared randomized clinical trials that used GTR procedures in the intervention group to a control group that did not. The trials had to have a minimum follow-up of 12 months and focus on periapical lesions caused by endodontic problems in human patients. Clinical assessment was based on signs and symptoms, while radiographic assessment used criteria established by Molven et al. or Rud et al. for 2D imaging, and modified PENN 3D criteria by Schloss et al. for 3D imaging. Eligible patients had to be classified as American Society of Anesthesiologists (ASA) I or II. Studies were excluded if they did not have sufficient data, included patients with root fractures, resorption, or perforation, or included children under 12 or with sample sizes less than 10.
The records were imported into EndNote X21 (Clarivate Analytics, Philadelphia, PA). After the removal of duplicates, the titles and abstracts of the remaining records were screened independently for eligibility by 3 reviewers (N.M., X.G., and B.S.). The same three authors read the full texts of articles independently to determine if they met the inclusion criteria. References of all relevant research were also checked. A fourth reviewer (A.L.) was consulted to facilitate compromise in any disagreement.
Three independent reviewers (N.M., X.G., and F.A.) obtained data from every study using a standardized Excel spreadsheet. The form included the following information for each study: the first author's name, the year of publication, the age, the size or type of lesion, the regenerative techniques and materials used, the sample size, and the outcomes observed at the 12-month follow-up. Studies with missing outcome data were excluded. The clinical outcomes were evaluated by the presence or absence of signs of infection and inflammation. Radiographically, the healing assessment was determined by using the criteria established by Rud et al. or Molven et al. (complete, incomplete, uncertain, or unsatisfactory healing) for 2D imaging evaluation, whereas the modified PENN 3D criteria established by Schloss et al. (complete, limited, uncertain, or unsatisfactory healing) were used for 3D imaging evaluation. The assessment of success and failure was determined based on a comprehensive evaluation of both clinical and radiological outcomes. For statistical purposes, the outcomes were also dichotomized into success and failure. Success referred to the resolution of clinical symptoms together with signs of complete or incomplete healing on 2D imaging, or complete or limited healing on 3D imaging. Failure referred to the presence of clinical symptoms and/or the occurrence of uncertain or unsatisfactory healing on 2D or 3D imaging.
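As an illustration of the dichotomization rule above (this is our sketch, not code from the review itself), the mapping from clinical findings and radiographic healing category to success or failure could be written as:

```python
def dichotomize_outcome(clinical_symptoms: bool, healing_category: str) -> str:
    """Success requires resolution of clinical symptoms plus complete/incomplete healing
    on 2D imaging (or complete/limited healing on 3D imaging); anything else is failure."""
    success_categories = {"complete", "incomplete", "limited"}
    if not clinical_symptoms and healing_category.lower() in success_categories:
        return "success"
    return "failure"


print(dichotomize_outcome(False, "incomplete"))  # success
print(dichotomize_outcome(False, "uncertain"))   # failure
print(dichotomize_outcome(True, "complete"))     # failure (symptoms still present)
```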
N.M., B.S., and F.A. evaluated the risk of bias in each study using the Cochrane Collaboration tool for randomized trials (RoB 2). They assessed trials using RoB 2 questions and determined each study’s risk of bias using the algorithm in the RoB 2 guidance. Each study’s overall risk of bias was assessed by considering each domain’s risk. If all domains of the study were low risk, the study was evaluated as having a low risk of bias overall. If the study had some concerns in at least one domain but no high risk, the study was evaluated as having some concerns of bias overall. If the study had a high risk in one or more domains, it was considered to have a high risk of bias overall. Disagreements amongst reviewers were resolved through discussion. Otherwise, a fourth reviewer, A.L., was consulted until agreement was reached.
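The overall judgement rule stated above is essentially an algorithm, so it can be sketched directly; this is only an illustration of the rule as described in the text, not the RoB 2 tool itself, and the example judgements are hypothetical.

```python
def overall_rob2_judgement(domain_judgements):
    """Overall RoB 2 judgement from per-domain judgements, following the rule above:
    any 'high' domain -> high risk; otherwise any 'some concerns' -> some concerns;
    otherwise -> low risk."""
    domains = [d.lower() for d in domain_judgements]
    if "high" in domains:
        return "high risk of bias"
    if "some concerns" in domains:
        return "some concerns"
    return "low risk of bias"


# Hypothetical per-domain judgements for the five RoB 2 domains of one trial
print(overall_rob2_judgement(["low", "low", "some concerns", "low", "low"]))  # some concerns
```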
The Cochrane Collaboration's Review Manager 5.4 was utilized to calculate the risk ratio (RR) to compare SRCT failures with and without GTR treatments. The chi-squared test (χ²) assessed between-study heterogeneity. Since the heterogeneity was small (p > 0.1 or I² ≤ 50%), the fixed-effects model was used. Funnel plots were used to evaluate publication bias.
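The pooling itself was performed in Review Manager 5.4. Purely for illustration, a fixed-effect pooling of risk ratios with Cochran's Q and I² can be computed as below; the event counts are invented, and the inverse-variance weighting shown here is one common fixed-effect approach rather than necessarily the exact method used by the software.

```python
import math

# Hypothetical (failures, patients) in the GTR and control arms of three trials
studies = [((4, 40), (9, 38)), ((3, 35), (7, 36)), ((5, 50), (10, 48))]

log_rrs, weights = [], []
for (e1, n1), (e2, n2) in studies:
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)  # standard error of log RR
    log_rrs.append(math.log(rr))
    weights.append(1 / se ** 2)

# Fixed-effect (inverse-variance) pooled estimate and 95% CI
pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
low, high = math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled)

# Cochran's Q and I-squared quantify between-study heterogeneity
q = sum(w * (lr - pooled) ** 2 for w, lr in zip(weights, log_rrs))
i_squared = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0

print(f"Pooled RR {math.exp(pooled):.2f} (95% CI {low:.2f}-{high:.2f}), I2 = {i_squared:.0f}%")
```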
GRADEpro Guideline Development Tool software (Evidence Prime, Inc, Seattle, WA) was used to create a summary of the findings table to assess the strength of the evidence. Individual GRADE criteria were considered, and evidence certainty was calculated. The GRADE system evaluates evidence certainty as high, moderate, low, or very low .
The results are reported under the following headings: study selection; study characteristics; quality assessment; meta-analysis; subgroup analysis based on the techniques and materials; subgroup analysis based on the lesion type; publication bias; and certainty of evidence. Certainty of evidence: We applied the GRADE process to rank the confidence level of the evidence obtained through our meta-analysis that evaluated the effect of GTR procedures on SRCT . After considering the serious risk of bias domain and the non-serious indirectness, imprecision, and inconsistency domains, the success rate of GTR procedures following SRCT is considered to have a moderate grade of evidence.
EndNote X21 received 1605 records from different electronic databases. After removing the duplicate publications, a total of 1273 were excluded based on their title and abstract, resulting in 30 papers being considered for full-text review. All studies identified after excluding duplications are shown in . In parallel, 4 references were identified through manual searches. Next, 34 papers were full-text reviewed for eligibility, 18 studies were excluded for different reasons . Finally, 16 studies met the inclusion criteria and were included in the meta-analysis . Three investigators, N.M., X.G., and B.S., conducted all the search processes and examined all the studies separately. In cases of disagreement, a fourth reviewer, A.L., was consulted in order to reach an agreement.
All data extracted in primary studies is shown in . summarizes the main characteristics of each of the included studies. All papers included in the analysis were RCTs published after 2000, except for Pecora et al. . The intervention groups showed 381 lesions, whereas the control groups had 309 lesions. All studies included a control group that did not use GTR procedures and an experimental group that utilized GTR. All studies analyzed 2D radiographic healing based on the criteria defined by Molven et al. or Rud et al. , and four studies investigated 3D radiographic healing using the modified PENN 3D criteria.
The risk of bias in all trials is presented in . However, the randomization process is not well explained in seven studies . Deviations from intended interventions and missing outcome data were assessed as low risk for all studies. Outcome measurement was considered to have some concerns in six studies . Bias due to selective reporting was considered some concern in twelve studies . Overall, four studies show a low risk of bias, and the remaining twelve studies exhibit some concerns regarding risk of bias. Completed risk of bias assessments are shown in .
Due to the lack of noticeable heterogeneity within the included studies, the meta-analysis used a fixed-effects model. The meta-analysis for all included studies reported the failure rate according to 2D evaluation. The results showed that using GTR following SRCT significantly improved the healing process compared to conventional SRCT (RR: 0.50; 95% CI, 0.34–0.73; P < 0.001). Four studies reported the failure rate according to 3D evaluation. The results showed that using GTR following SRCT significantly improved the healing process compared to the conventional SRCT (RR: 0.36; 95% CI, 0.15–0.90; P < 0.001).
When e-PTFE membranes were used alone, no significant effects were seen in the healing process following SRCT (RR: 2.00; 95% CI, 0.22–18.33; P = 0.54). When resorbable collagen membranes were used alone, the results showed better outcomes but were not statistically significantly different from the control group (RR: 0.66; 95% CI, 0.29–1.52; P = 0.33). In groups that received only bone grafts or APCs, outcomes were also slightly better but not significantly different (bone grafts: RR: 0.59; 95% CI, 0.27–1.27; P = 0.17; APCs: RR: 0.75; 95% CI, 0.19–3.02; P = 0.69). When bovine bone-derived hydroxyapatite was used in combination with a collagen membrane, the success rate was significantly increased compared to the control group (RR: 0.43; 95% CI, 0.25–0.74; P < 0.001).
Six studies utilized GTR procedures on confined periapical lesions; there was a trend towards improved outcomes in the GTR groups, without a statistically significant difference (RR: 0.59; 95% CI, 0.34–1.02; P = 0.06). When GTR was used on apico-marginal defects with complete root exposure on the buccal side, there was no significant impact on the healing process after SRCT (RR: 2.00; 95% CI, 0.22–18.33; P = 0.54). When the GTR procedures were used on patients with through-and-through lesions, the results indicated a significant increase in the success rate of the GTR group compared to the control group (RR: 0.36; 95% CI, 0.19–0.68; P < 0.001).
A funnel plot was created to evaluate the presence of publication bias. The findings indicated that the funnel plot showed bilateral symmetry, indicating the absence of notable publication bias .
GTR procedures have been used to enhance bone regeneration in SRCT . However, different techniques and biomaterials lead to conflicting outcomes , which are still controversial. The current systematic review and meta-analysis include RCTs, which aim to investigate the effects of GTR procedures on the outcome of SRCT. Because there is no outcome difference between the one-year and four-year follow-ups, the studies with more than a one-year follow-up were reported, and the data was explicitly extracted based on the one-year follow-up . Still, it is essential to be careful when generalizing the study’s findings regarding the long-term surgical outcomes of SRCT. The current meta-analysis shows that GTR procedures significantly improve the healing process one year after SRCT, regardless of the evaluation method used. However, results varied in subgroup analysis. Using e-PTFE membranes alone did not improve outcomes, but using collagen membranes, bone grafts, or APCs alone may accelerate healing. Using collagen membranes and bovine bone-derived hydroxyapatite together showed significant improvements. Furthermore, results varied according to the lesion type. GTR treatment for apico-marginal lesions did not improve outcomes, but may accelerate the healing in confined lesions and significantly improve the healing in through-and-through lesions. This meta-analysis was conducted on various types of permanent teeth with different lesion sizes, focusing on lesions caused by endodontic problems. Therefore, the results may apply to patients of different ages with permanent teeth. However, they may not apply to patients with combined Endo-Perio lesions. The first barrier membrane employed in SRCT as a GTR technique was e-PTFE . This membrane prevents soft tissue growth inside defective areas following SRCT and promotes osteoblast development . However, only two studies in our analysis used this approach, and no evidence justifies its use in SRCT. In contrast, Yoshikawa et al. conducted a study on beagle dogs. They found that using the e-PTFE membrane significantly increased the formation of new cortical bone compared to the control group. Furthermore, using e-PTFE may lead to complications such as membrane exposure and bacterial infection because it is non-absorbable and requires surgical extraction, which increases patient suffering, treatment expenses, and possible issues . The e-PTFE membrane was recently withdrawn from use in SRCT. However, there are alternate materials available . Collagen membranes are the later generation of absorbable membranes and offer several advantages over non-absorbable membranes, including cost-effectiveness and reduced risk of complications . In the present study, absorbable collagen membranes were included in 3 RCTs , and no significant differences were observed compared to the control group. Our findings support using collagen membranes, with no statistically significant differences observed compared to the control group. Our finding is consistent with Dominika et al. , who also observed a higher success rate after 6 months of follow-up when using collagen membranes. However, after 12 months, there was no significant difference in success rates. Their study showed that using collagen membranes can accelerate the healing process after SRCT. The collagen membrane has varying resorption times, and it is essential for the optimal resorption time to match the time for bone regeneration. 
Cross-linked technology has been utilized to extend the resorption time and facilitate successful healing . Hence, the collagen membrane should remain in place for a long time. Using Collagen membranes alone can be unstable, which causes collapse under loads and delayed bone tissue regeneration . To address this issue, some authors have used bone grafts . Bovine bone-derived hydroxyapatite is commonly used because it is a biocompatible graft material with osteoconductive characteristics . The slow resorption rate of bovine bone-derived hydroxyapatite is a critical advantage, as it enables better integration and acts as an effective osteoconductive grafting material during the natural healing process. This can ultimately lead to successful bone healing outcomes . In this study, 6 RCTs utilized collagen membranes and bovine bone-derived hydroxyapatite grafts, and we found that this combination significantly improved healing following SRCT. Similar results were reported by Wang et al., who found a significantly better success rate after 12 months of follow-up using collagen membranes combined with bovine bone-derived hydroxyapatite graft. The bone graft is the most common GTR procedure utilized . In this study, three studies used three different types of bone grafts without additional materials, and we found that this may result in improved healing following SRCT, but without a statistically significant difference. However, Sreedevi et al. found that using freeze-dried hydroxyapatite bone graft material resulted in successful bone healing compared to the control group. The different types of bone grafts have varying success levels and potential advantages and disadvantages . Autogenous bone grafts are the only type that has osteogenesis, osteoinduction, and osteoconduction properties, with a success rate of 95% ; however, due to their drawbacks, such as postoperative pain, longer surgery time, and increased morbidity, other graft types are often preferred, such as allograft, xenograft, and alloplastic grafts . Nonetheless, some studies have shown that combining bone grafts with barrier membranes can improve clinical outcomes . Some studies have investigated using APCs to improve healing and regeneration after SRCT . The APCs provide platelets, leukocytes, and growth factors that promote tissue growth and blood flow . Our current meta-analysis found that using APCs resulted in more favorable healing outcomes without a statistically significant difference. Dhamija et al. found a significantly better success rate with PRP in a 3D assessment but no significant difference in a 2D evaluation. The i-PRF is the third generation of APCs . In this analysis, only one study used i-PRF with a collagen-based bone graft. Further research is needed to establish the efficacy of this treatment protocol. SRCT may involve dealing with many compromised conditions. Based on several clinical and experimental studies, periapical lesions have been categorized into three main types: (1) confined to periapical areas without erosion of the lingual cortex, (2) through-and-through lesions (tunnel), and (3) apico-marginal lesions . In this study, 6 clinical trials assessed several GTR procedures to promote the repair of a confined lesion in the periapical area. Our findings indicate that utilizing GTR procedures can accelerate wound healing, although there is no significant difference in situations of confined periapical lesions. Our finding is consistent with Chen & Shen . 
They found that the GTR group demonstrated a better success rate than the control group after 6 months, but the difference was not statistically significant after a year. Apico-marginal lesions can cause epithelial downgrowth across the denuded root surface after SRCT, which increases the possibility of a recurrent connection between the apical and marginal tissues. Only two studies in our analysis used GTR procedures for apico-marginal lesions with complete exposure on the buccal surface of the root. Our findings indicate that regenerative techniques may not significantly impact outcomes for these lesions. However, a 2023 case series by Baruwa et al. found that endodontic microsurgery combined with GTR can be a highly effective approach for treating apico-marginal lesions. It is important to note that proper diagnosis and procedures are crucial for achieving successful results. Furthermore, as interest in using APCs for apico-marginal defects grows, better evidence is needed to support their effectiveness, and further well-conducted trials are necessary to fully understand the potential impact of APCs in these cases. Fast soft tissue proliferation from the facial and lingual sides can hinder bone growth and lead to incomplete healing or scar tissue formation in through-and-through lesions. Furthermore, these through-and-through lesions may offer a pathway for bacterial infection. Therefore, GTR procedures not only have a regenerative function but also play a crucial role in blocking this pathway. This meta-analysis included 9 studies that evaluated the effects of GTR procedures for these lesions and found that they significantly improve wound healing following SRCT. Similar results were reported by Taschieri et al. when using collagen membrane and hydroxyapatite from bovine bone. Strengths and limitations: This systematic review and meta-analysis only contained RCTs, which provide robust and reliable evidence. Comprehensive inclusion and exclusion criteria were applied to ensure a focused research question and avoid bias in article selection. The search included multiple databases without language or geographical restrictions, increasing the potential for generalization. The study also considered the impact of lesion type on the effectiveness of different GTR procedures; subgroup analysis can provide clinicians with further insights into choosing the suitable GTR procedure for different lesion types. There are, however, several limitations to the current study. Most studies raised "some concerns" based on the Cochrane Collaboration tool. In addition, only a single study incorporated the third generation of APCs; therefore, meticulously planned clinical trials of superior quality are necessary. Also, only four studies were included that used modified PENN 3D criteria to evaluate the healing process after SRCT. In recent years, CBCT has been used in more clinical studies, albeit with different evaluation criteria. More research is needed to develop straightforward, standardized ways to evaluate the results of 3D radiographs.
This meta-analysis showed that GTR processes significantly improve healing after SRCT, regardless of evaluation methods, especially when collagen membranes and bovine bone-derived hydroxyapatite are used together. Furthermore, for through-and-through lesions, the GTR procedures significantly improved the healing after SRCT.
S1 Checklist PRISMA 2020 checklist. (DOCX) S1 Table Search strategy. (DOCX) S2 Table Studies identified after excluding duplications. (XLSX) S3 Table The excluded studies and reasons for exclusion. (DOCX) S4 Table All data extracted in primary studies. (XLSX) S5 Table Completed risk of bias assessments. (XLSX)
|
Association Between the Risk of Dental Caries and | acd181f3-7d78-492d-a62b-b4cd191006d0 | 11619898 | Dentistry[mh] | Subjects Sample Collection Data Analysis Statistical power was calculated by GPower 3.1 (Heinrich Heine University Dusseldorf, Dusseldorf, Germany). Data analysis was conducted using SPSS 22.0 (IBM; Armonk, NY, USA). Continuous data were analysed using the t-test or non-parametric test, and quantitative data were analysed with the chi-squared test or Fisher’s exact test. The association of DLX3 gene (rs11656951 and rs2278163) polymorphisms with dcaries risk was assessed by the chi-squared test or Fisher’s exact test. Then the subgroup analysis of association was assessed by logistic regression analysis for the potential risk factors. A statistically significant difference existed when the two tailed p-value was less than 0.05.
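As an illustration of the association testing described above, the following is a minimal Python sketch (not the authors' SPSS workflow) of a chi-squared genotype comparison with an odds ratio and 95% CI, plus a subgroup logistic regression. The DataFrame `df` and its column names (`group`, `genotype`, `caries`, `sex`) are hypothetical.

```python
# Hypothetical case-control table: df has columns "group" ("case"/"control"),
# "genotype" ("CC"/"CT"/"TT"), "caries" (0/1), and "sex" for subgroup analysis.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

def genotype_vs_reference(df: pd.DataFrame, alt: str, ref: str = "CC"):
    """Chi-squared test and OR (95% CI) for one genotype against the reference."""
    sub = df[df["genotype"].isin([ref, alt])]
    table = pd.crosstab(sub["genotype"], sub["group"]).loc[[alt, ref], ["case", "control"]]
    _, p, _, _ = chi2_contingency(table)
    a, b = table.loc[alt, "case"], table.loc[alt, "control"]
    c, d = table.loc[ref, "case"], table.loc[ref, "control"]
    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)        # SE of log(OR)
    ci = np.exp([log_or - 1.96 * se, log_or + 1.96 * se])
    return p, np.exp(log_or), tuple(ci)

# Subgroup analysis: logistic regression of caries status on genotype within girls.
girls = df[df["sex"] == "female"]
fit = smf.logit("caries ~ C(genotype, Treatment(reference='CC'))", data=girls).fit(disp=0)
print(fit.params.apply(np.exp))                         # odds ratios per genotype
```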
All children ages 2 to 5 years who were found healthy upon examination at Changsha Stomatological Hospital from January 2022 to December 2023 were enrolled in this study. The dental check-up was conducted following World Health Organization criteria. Children with other dental diseases, systemic diseases, chronic medication use, or who could not cooperate were eliminated from this study. Baseline data (age, gender), dietary habits (sweets intake, eating before sleep), and oral hygiene habits (age at which toothbrushing was started, brushing frequency, brushing with fluoride toothpaste, and dental visits) were gathered by questionnaire. Questionnaires were filled out by every child’s parents or guardians. Children were divided into a control group (dmft = 0) and a case group (dmft ≥ 1) based on decayed, missing, and filled teeth (dmft) scores. Cases were divided into low (dmft = 1-5), moderate (dmft = 6-9), and high (dmft ≥ 10) severity groups. The present study was approved by the Ethics Committee of Changsha Stomatological Hospital. All the parents or guardians of children gave their written informed consent.
Oral epithelial cells were collected from children using cotton swabs. DNA was isolated from the swabs using a Magnetic Swab DNA Kit (TIANGEN; Beijing, China) according to the manufacturer's instructions. The DLX3 gene rs11656951 and rs2278163 polymorphisms were amplified by PCR as described in a previous study. The products were then sequenced by Sanger sequencing on a 3500 Genetic Analyzer (Applied Biosystems; Foster City, CA, USA).
Characteristics of Participants; Association of DLX3 Polymorphisms with Caries Susceptibility; Subgroup Analysis of the Association Between DLX3 Polymorphisms and Caries Susceptibility; Association of DLX3 Polymorphisms with Caries Severity. The 217 children with caries were divided into low (dmft = 1–5), moderate (dmft = 6–9), and high (dmft ≥ 10) score groups, and we then analysed the association of DLX3 polymorphisms with caries severity. In comparison with the rs11656951 CC genotype, the rs11656951 TT genotype was statistically significantly correlated with reduced caries risk in the low dmft group (p = 0.004, OR = 0.387, 95%CI = 0.202–0.742). In addition, a negative association was discovered between the rs11656951 T allele and caries susceptibility (p = 0.006, OR = 0.662, 95%CI = 0.493–0.888) in the low dmft group. The CT and TT genotypes of rs11656951 were both correlated with caries risk in the moderate/high dmft group, and the T allele of rs11656951 was also a protective factor for caries in this group. When compared to the rs2278163 A allele, the G allele (p = 0.046, OR = 1.354, 95%CI = 1.005–1.824) was statistically significantly positively related to caries susceptibility in the low dmft group, but not in the moderate/high dmft group.
A total of 441 children (242 boys and 199 girls) aged from 2 to 5 years participated in this study. The children were divided into a case group (dmft ≥ 1) and a control group (dmft = 0). The control group contained 224 children (123 boys and 101 girls). Given a two-tailed effect size of 0.3 and α = 0.05, the statistical power was 0.882. The mean age of the case group was 3.40 ± 1.02 years, and the mean age of the control group was 3.35 ± 1.01 years. No statistically significant difference was discovered in age or gender between the case and control groups (p > 0.05). Dietary habits (sweets intake and eating before sleep) also showed no statistically significant difference (p > 0.05). Starting toothbrushing at an age over 24 months was statistically significantly more frequent in the case group (p = 0.003). A brushing frequency of more than once per day was statistically significantly lower in children with caries (p = 0.004). Brushing with fluoride toothpaste (p = 0.025) and visiting the dentist regularly (p = 0.001) were statistically significantly more frequently observed in the control group. The mean dmft score was 4.76 ± 3.41 for cases: 70.97% (154/217) of cases had low dmft scores, 22.12% (48/217) had moderate dmft scores, and the remaining 15 cases had high dmft scores.
Genotype distributions of the DLX3 gene rs11656951 and rs2278163 polymorphisms were in accordance with the Hardy-Weinberg equilibrium (HWE) test in the control group (p > 0.05). Frequencies of the rs11656951 TT genotype and T allele were statistically significantly higher in the control group than in the case group. The chi-squared test showed that the rs11656951 CT genotype (p = 0.026, OR = 0.613, 95%CI = 0.398–0.944) and TT genotype (p = 0.001, OR = 0.378, 95%CI = 0.212–0.673) were statistically significantly negatively correlated with caries susceptibility when compared with the CC genotype. The T allele was more frequently discovered in the control group and was statistically significantly associated with decreased caries susceptibility (p = 0.001, OR = 0.636, 95%CI = 0.486–0.831). The GG genotype of rs2278163 was more frequent in caries patients than in controls; however, the difference was not statistically significant. Moreover, the rs2278163 genotypes had no statistically significant association with caries susceptibility (p > 0.05). The G allele of rs2278163 was statistically significantly more frequent in caries patients (p = 0.049) and correlated with elevated caries susceptibility (OR = 1.314, 95%CI = 1.000–1.725).
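A small sketch of the Hardy-Weinberg check reported for the control group is shown below; it is illustrative only, and the genotype counts passed to the function are hypothetical.

```python
# Chi-squared goodness-of-fit test against Hardy-Weinberg expectations
# (one degree of freedom: 3 genotype classes - 1 - 1 estimated allele frequency).
from scipy.stats import chisquare

def hwe_test(n_cc: int, n_ct: int, n_tt: int):
    n = n_cc + n_ct + n_tt
    p = (2 * n_cc + n_ct) / (2 * n)        # frequency of the C allele
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    stat, p_value = chisquare([n_cc, n_ct, n_tt], f_exp=expected, ddof=1)
    return stat, p_value                    # p_value > 0.05 is consistent with HWE
```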
DLX3 Polymorphisms and Caries Susceptibility Based on Characteristics: Subgroup analysis based on characteristics was assessed by logistic regression analysis. The results are shown in . Subgroup analysis based on gender showed that, in comparison with the CC genotype, the rs11656951 CT and TT genotypes were statistically significantly correlated with decreased caries susceptibility in girls, but not in boys. When compared to the CC genotype, the CT and TT genotypes of rs11656951 were statistically significantly related to reduced caries susceptibility in children with an intake of sweets more than once per day. The TT genotype of rs11656951 was negatively correlated with caries susceptibility in children with sweets intake < 1. Subgroup analysis based on eating before sleep showed that the rs11656951 CT and TT genotypes were negatively associated with caries risk in children who ate before bed. Children with rs11656951 CT and TT genotypes demonstrated a statistically significant correlation with a reduced risk of caries compared to CC genotype carriers when brushing frequency was less than once a day. In comparison with the rs11656951 CC genotype, the TT genotype was statistically significantly related to decreased caries risk in children brushing with fluoride toothpaste. When compared with the rs11656951 CC genotype, children with the TT genotype had a statistically significantly lower caries risk with or without dental visits. p-values and OR with 95%CI are presented in . For the subgroup analysis of the association between rs2278163 genotypes and caries susceptibility, we found that rs2278163 genotypes had no statistically significant association with caries susceptibility in the gender, eating before sleep, and age at which toothbrushing was started subgroups (p > 0.05). However, the rs2278163 GG genotype was positively related to caries risk in children not eating before sleep. Furthermore, the rs2278163 AG genotype was positively related to caries risk in children with a brushing frequency of at least once a day. The rs2278163 GG genotype carriers had a statistically significantly higher caries risk than AA genotype carriers among children brushing without fluoride toothpaste. The GG genotype of the rs2278163 polymorphism was positively correlated with caries risk in children with dental visits . p-values and OR with 95%CI are presented in . In short, the rs11656951 TT genotype is a protective factor and the rs2278163 GG genotype is a risk factor for caries susceptibility in children with different oral hygiene habits.
It has been confirmed that caries is affected by oral hygiene habits. , In this study, age and gender had no statistically significant effect in the comparison of case- and control-group children. Later toothbrushing-inception age, lower brushing frequency, brushing without fluoride toothpaste and lower regular dental visits were more frequently discovered in children with caries than in control children. The trajectory of caries is impacted by the interactions among bacteria, oral hygiene habits, and genetic factors. , DLX3 impacts the mineralized dental tissues involved in the formation of teeth. Abnormal expression of DLX3 may cause individual susceptibility to risk factors. Therefore, mutations in the DLX3 gene might disrupt the formation of mineralized dental tissues, ultimately resulting in caries. In the current study, we analysed the association between DLX3 polymorphisms and caries susceptibility. We found that the CT genotype of rs11656951 was statistically significantly correlated with a 0.613-times lower caries susceptibility. The TT genotype of rs11656951 was statistically significantly related to a 0.378-times lower susceptibility of dental caries. The rs11656951 T allele was more frequently observed in the control group and was statistically significantly correlated with reduced caries risk. This is in accordance with previous study by Chisini et al , who indicated that the rs7501477 T allele combined with the rs11656951 C allele could predict a high risk of caries in Brazilian children. The G allele of rs2278163 was positively related to a 1.314-times greater caries risk, while rs2278163 genotypes had no statistically significant association with caries. This agrees with the study performed by Ohta et al , which indicated that the rs2278163 T allele was positively correlated with caries with high mutans Streptococci. However, Kastovsky et al found that rs2278163 had no statistically significant association with early childhood caries (ECC). Oral hygiene habits were obviously different between the case and control groups. However, as no previous studies focused on the association between DLX3 polymorphisms and oral hygiene habits, we conducted subgroup analyses based on them. The results indicated that rs11656951 CT and TT genotypes were statistically significantly related to 0.318- and 0.241-times lower caries susceptibility, respectively, in girls. However, rs11656951 genotypes were not related to caries risk in boys. In contrast, rs2278163 genotypes had no statistically significant association with caries in girls or boys. When compared to the CC genotype, the rs11656951 CT genotype was statistically significantly correlated with a 0.432-fold lower caries risk in children who ate/drank sweets more than once per day, while the rs11656951 TT genotype was statistically significantly related to a 0.408-fold and 0.288-fold caries risk reduction, respectively, in children with an intake sweets less than once and more than once per day. In contrast, rs2278163 genotypes were not related to the susceptibility for caries in sweets-intake subgroups. The CT and TT genotypes of rs11656951 polymorphisms were negatively related a to 0.537- and 0.305-fold lower caries risk, respectively, in children who ate before sleeping. Interestingly, the rs2278163 GG genotype was statistically significantly correlated with an 10.918-times higher caries risk in children not eating before sleep. 
Children with the rs11656951 CT genotype and TT genotype were statistically significantly related to 0.406- and 0.235-fold lower caries risk than were CC genotype carriers, when brushing frequency was less than once per day. The AG genotype of rs2278163 was statistically significantly correlated with a 1.681-times higher caries risk in children with a brushing frequency at least one time per day. The TT genotype of rs11656951 was statistically significantly correlated with reduced caries risk in children brushing with fluoride toothpaste. The rs2278163 GG genotype carriers had a 2.070-times higher caries risk in the subgroup brushing without fluoride toothpaste. Children with the rs11656951 TT genotype had a statistically significantly lower caries risk with or without regular dental visits. In contrast, the rs2278163 GG genotype was statistically significantly correlated with a 6.267-times higher caries risk in the regular dental visits subgroup. We also analysed the association of DLX3 polymorphisms with caries severity. The results showed that the rs11656951 TT genotype was statistically significantly related to lower caries risk in low and moderate/high dmft subgroups. But a positive association was discovered between the rs2278163 G allele and caries risk in the low-dmft group. This might be a result of the small sample size in the moderate/high-dmft subgroup. The current research had some limitations. First, despite 217 cases and 224 controls providing a reasonable sample size, the study may lack sufficient power to detect smaller effect sizes or associations with caries susceptibility in the high-dmtf subgroup analysis. Larger sample sizes could enhance the reliability of the findings. Second, other DLX3 gene polymorphisms and interactions between genetic and environmental factors were not explored in this study. Third, this study did not examine underlying mechanisms by which the DLX3 polymorphisms influence caries susceptibility. Finally, the effects of rs11656951 and rs2278163 on the expression and function of DLX3 were not explored in this or previous studies. Functional studies or additional molecular analyses could provide deeper insights into the observed associations.
The DLX3 gene rs11656951 TT genotype is a protective factor and the rs2278163 GG genotype is a risk factor for caries susceptibility. A statistically significant association was also discovered in gender, sweets intake, eating before sleep, brushing frequency, brushing with fluoride toothpaste, and dental visits subgroups. The DLX3 gene rs11656951 TT genotype was correlated with low and moderate/high dmft scores.
|
Analysis of prognosis and related influencing factors of different surgical approaches for early cervical cancer | 2bacad07-c240-48e8-91a8-520545cdc554 | 11872751 | Surgical Procedures, Operative[mh] | Study subject data collection Clinical data were retrospectively collected from 726 patients with early-stage CC who underwent radical surgery at Guangdong Women and Children Hospital between January 2005 and December 2017.
Inclusion criteria: (1) preoperative diagnosis of CC and first-time surgical treatment; (2) clinical stage determined by two gynecologic oncologists with senior qualifications, based on the 2009 International Federation of Gynecology and Obstetrics (FIGO) staging system (stages IA2, IB1, IB2, and IIA); (3) surgical procedures included radical hysterectomy with bilateral pelvic lymph node dissection, with or without bilateral salpingo-oophorectomy; and (4) availability of complete clinical and postoperative pathological data.
Exclusion criteria: (1) incomplete patient data; (2) pregnancy complicated by CC; and (3) coexistence of other malignant tumors. Data collection and observation indicators: Data were extracted from the hospital's electronic medical record system, including patient demographics (name, age, gravidity, and parity), laboratory test results, FIGO stage, tumor diameter, tumor differentiation, histopathological type, lymph node metastasis, depth of stromal invasion, and positive vaginal resection margins indicating infiltration or metastasis.
Survival time was calculated from the date of surgery to death, loss to follow-up, or the last follow-up, with the cutoff date set as March 2023.
5-year overall survival (OS): Time from surgery to death, last follow-up, or censoring. 5-year disease-free survival (DFS): Time from surgery to recurrence (confirmed via imaging or biopsy), death, or censoring. Loss to follow-up was defined as patients not returning for a revisit for over one year and failing to respond after at least three attempts at contact using outpatient visits, WeChat, telephone calls, or text messages. Statistical analysis A database was created using Excel, and statistical analyses were performed using SPSS version 26.0. Continuous variables: Normally distributed data were presented as mean ± standard deviation (SD), while non-normally distributed data were expressed as medians. Between-group comparisons were conducted using t-tests or non-parametric rank-sum tests, as appropriate. Categorical data: Presented as frequencies or percentages and analyzed using chi-square (χ2) tests or Fisher’s exact tests. Survival analysis: Kaplan–Meier curves were plotted, and the Log-rank test was used for comparisons. Prognostic factors: Assessed using Cox proportional hazards regression models. Statistical significance was set at P < 0.05.
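The Kaplan-Meier, log-rank, and Cox steps above could be reproduced outside SPSS roughly as in the sketch below, which uses the lifelines package; the DataFrame `df` and its columns (`time_months`, `event`, `group`, and the covariates) are hypothetical placeholders, not the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# df: one row per patient with follow-up time in months, event indicator
# (1 = death), surgical group, and candidate prognostic covariates.
km = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    km.fit(sub["time_months"], sub["event"], label=name)
    print(name, "estimated 5-year OS:", float(km.survival_function_at_times(60).iloc[0]))

open_grp = df[df["group"] == "open"]
lap_grp = df[df["group"] == "laparoscopic"]
lr = logrank_test(open_grp["time_months"], lap_grp["time_months"],
                  event_observed_A=open_grp["event"], event_observed_B=lap_grp["event"])
print("log-rank p =", lr.p_value)

# Multivariate Cox proportional hazards model for candidate prognostic factors.
cox = CoxPHFitter()
cox.fit(df[["time_months", "event", "stage", "vascular_invasion", "tumor_gt4cm"]],
        duration_col="time_months", event_col="event")
cox.print_summary()    # hazard ratios with 95% CIs and p-values
```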
General information; Univariate analysis of prognostic factors for early-stage cervical cancer. Univariate analysis revealed that seven factors were significantly associated with patient prognosis (P < 0.05, Table ). Factors such as age at diagnosis, number of deliveries, pathological type, differentiation degree, vaginal margin status, high-risk HPV infection, and surgical approach did not significantly influence prognosis (P > 0.05, Table ). Notably, through analysis of the survival curves, we found that the surgical approach was not a prognostic factor for early-stage CC (P > 0.05, Fig. ). Using the 5-year survival rate as the dependent variable, the seven factors identified as statistically significant in the univariate analysis were included in a multivariate Cox regression model. Compared with stage IB1, stage IIA was associated with a reduced death risk (HR = 0.304, 95.0% CI: 0.164–0.564, P = 0.000, Table ). Vascular invasion and tumor diameter were confirmed as independent risk factors significantly influencing prognosis (P < 0.05).
Between January 2005 and December 2017, 850 patients diagnosed with cervical cancer (CC) and treated with radical surgery at Guangdong Women and Children Hospital were initially identified. After excluding 121 cases that did not meet the staging criteria and three cases with incomplete data or comorbid malignancies, a total of 726 patients were included in this study. The average age at diagnosis was 46.34 ± 8.94 years, with a median age of 46 years. The mean follow-up duration was 53.15 ± 15.33 months, with successful follow-up in 599 cases and 127 cases lost to follow-up. The distribution of clinical stages was as follows: stage IA2 (33 cases, 4.5%), stage IB1 (430 cases, 59.2%), stage IB2 (84 cases, 11.6%), and stage IIA (179 cases, 24.7%). Histopathological types included squamous cell carcinoma (599 cases, 82.5%), adenocarcinoma (59 cases, 8.1%), adenosquamous carcinoma (47 cases, 6.5%), and other types (20 cases, 2%, Table ). Superficial interstitial infiltration was observed in 219 cases (30.1%), comprising 84 cases (24.2%) in the open surgery group and 135 cases (35.6%) in the laparoscopic group. Deep interstitial infiltration was observed in 422 cases (58.1%), with 228 cases (65.7%) in the open surgery group and 194 cases (51.6%) in the laparoscopic group. The open surgery group had significantly more cases of deep interstitial infiltration compared to the laparoscopic group ( P < 0.05). Tumor diameters ≤ 4 cm were observed in 649 cases (89.4%), while 77 cases (10.6%) had tumor diameters > 4 cm. In the open surgery group, 51 cases (14.7%) had tumors > 4 cm, compared to 26 cases (6.9%) in the laparoscopic group ( P < 0.05). No statistically significant differences were observed between the two groups regarding parity, number of deliveries, menopause, high-risk HPV infection, clinical stage, pathological type, differentiation degree, paruterine invasion, or lymph node metastasis ( P > 0.05, Table ). During the 5-year postoperative period, 66 patients died due to recurrence and metastasis, and 13 cases (2.17%, Table ) experienced recurrence. The shortest survival time after recurrence was six months. The overall survival (OS) rate was 89.0%, and the 5-year disease-free survival (DFS) rate was 86.8%. The 5-year OS rate was 87.2% for the open surgery group and 90.4% for the laparoscopic group (Fig. ). The 5-year DFS rate was 84.6% for the open surgery group and 88.6% for the laparoscopic group(Fig. ). Although the laparoscopic group demonstrated higher survival rates and lower recurrence rates, these differences were not statistically significant ( P > 0.05).
Cervical cancer (CC) remains the most prevalent malignancy of the female reproductive system and is a leading cause of mortality among women worldwide. Standardized treatment protocols for CC, particularly surgical interventions, generally achieve favorable therapeutic outcomes (Melamed et al. ; Hongladaromp et al. ; Derks et al. ). Studies report 5-year survival rates following surgery to range from 83 to 94.6%. In the present study, the 5-year overall survival (OS) rates for open and laparoscopic surgery were 87.2% and 90.4%, respectively. Univariate analysis indicated that the choice of surgical approach was not a prognostic factor for early-stage CC, aligning with findings from both domestic and international retrospective studies (Kanao et al. ). Open and laparoscopic surgeries each have distinct advantages and limitations. Before 2018, laparoscopic surgery gained popularity due to its technical advantages, such as reduced trauma, better visualization, less blood loss, and faster recovery. Evidence from multiple studies supported its safety (Wang et al. ; Nam et al. ; Conrad et al. ). However, randomized controlled trials (RCTs) and retrospective studies published in the New England Journal of Medicine revealed that laparoscopic radical hysterectomy was associated with higher recurrence rates and lower overall survival compared to traditional open surgery (Ramirez et al. ; Melamed et al. ). The underlying reasons for the increased risks associated with laparoscopic surgery remain unclear. Thus, a complete abandonment of laparoscopic surgery for CC may be premature. For instance, a South Korean multi-center study demonstrated that when the local CC lesion size was ≤ 2 cm, laparoscopic surgery did not adversely affect prognosis (Kim et al. ). Scholars attribute the higher recurrence and mortality rates in laparoscopic radical hysterectomy to three primary factors: disruption of tumor-free principles, the establishment of CO2 pneumoperitoneum, and the surgeon’s learning curve (Association of Radical Hysterectomy Surgical Volume and Survival for Early-Stage Cervical Cancer: Correction ; Chiva et al. ). Disruption of tumor-free principles CO2 Pneumoperitoneum The use of CO2 pneumoperitoneum distinguishes laparoscopic surgery from open surgery. Evidence suggests that performing vaginal incisions under pneumoperitoneum conditions may increase postoperative recurrence rates compared to surgeries without pneumoperitoneum (Loureiro and Oliva ). This could be due to the following factors: CO2 may enhance tumor cell proliferation, pneumoperitoneum pressure may dislodge cancer cells into the abdominal or pelvic cavities, and pressure fluctuations during surgery may promote cancer cell migration and dissemination (Loureiro and Oliva ; Nelson et al. ). In this study, recurrences and metastases were primarily localized to the vaginal stump, with no significant increase in distant metastases in the laparoscopic group. A prospective study on colon cancer also showed no significant difference in long-term prognosis between surgical approaches (Yang et al. ). Current findings on CO2 pneumoperitoneum effects are largely based on laboratory studies, and no clinical evidence has been reported of CO2 pneumoperitoneum promoting distant CC metastasis. The surgeon's learning curve plays a significant role in influencing patient outcomes (Pedone et al. ). Pedone et al. (Zhang et al. 
) conducted a retrospective analysis of 243 early-stage cervical cancer (CC) patients undergoing minimally invasive radical hysterectomy. Their multivariate logistic regression analysis revealed that surgeons' learning curves had a significant impact on patient outcomes: the 3-year tumor-free survival rate increased from 75.4 to 91.6% (P = 0.005) as surgeons gained sufficient experience. Another study demonstrated that survival rates of patients treated with open surgery were similar regardless of whether they were treated at university-affiliated or non-university-affiliated hospitals. However, patients in the laparoscopic group treated at university cancer centers had higher survival rates than those treated at non-university cancer centers, suggesting that the treatment center level is a critical factor for patients undergoing radical hysterectomy for early-stage CC (Association of Radical Hysterectomy Surgical Volume and Survival for Early-Stage Cervical Cancer ). This single-center study initiated laparoscopic surgery in 2008, requiring surgeries to be led by associate senior surgeons with more than five years of gynecological oncology surgical experience. Consequently, it was found that the choice of surgical approach did not affect early-stage CC prognosis. The study identified clinical stage, interstitial infiltration, paruterine invasion, lymph node metastasis, tumor diameter, and vascular invasion as significant prognostic factors for early-stage CC, which is consistent with other findings. Multivariate Cox regression analysis confirmed that clinical stage IIA, vascular invasion, and tumor diameter are independent risk factors affecting survival (P < 0.05). The main reasons for this are as follows. (1) Clinical stage remains a pivotal factor influencing CC prognosis. The literature indicates that higher clinical stages correspond to lower OS and DFS rates (Shinagare et al. ). Advanced stages are associated with larger tumor volumes, greater invasion, and poorer biological behavior of tumor cells (Shinagare et al. ; Liu et al. ). These factors increase the likelihood of invading surrounding tissue, leading to a higher risk of postoperative recurrence and poorer outcomes. (2) This study reported a 5-year survival rate of 79.6% in patients with vascular invasion, compared to 92.2% in those without vascular invasion. Patients with vascular invasion exhibited a death risk 0.483 times higher than those without vascular invasion, thereby highlighting it as a significant prognostic risk factor. Vascular invasion often signals increased tumor invasiveness and represents an early stage of metastatic progression, heightening the risk of lymph node metastasis (Balaya et al. ; Gulack et al. ). Most studies agree that vascular invasion adversely impacts prognosis in early-stage CC (Gulack et al. ). (3) Tumor diameter > 4 cm was identified as an independent risk factor for CC prognosis. Studies of malignant tumors, including lung, breast, gastric, and renal cancers, consistently demonstrate a correlation between larger tumor sizes and poorer prognosis (Gulack et al. ). In gynecological malignancies, smaller tumor diameters correlate with reduced interstitial vasculature invasion and metastasis, yielding better postoperative outcomes. For instance, Mahdi et al. [34] reported that, after adjusting for factors such as age, grade, lymph node status, and adjuvant therapy, tumor diameter remained a strong predictor of specific survival in endometrial cancer (HR = 1.13, 95% CI = 1.08–1.18, P < 0.001).
In CC, tumor size forms a primary basis for the FIGO staging system, with larger tumors indicating later stages, greater severity, and increased risks of paracervical tissue infiltration and lymph node metastasis, all of which adversely affect prognosis. Based on this study, we found that for patients with early-stage cervical cancer, when there is no pneumoperitoneum, no uterine manipulation, and a tumor diameter of less than 4 cm, it is safe practice for experienced gynecological oncologists to choose minimally invasive abdominal surgery. This study, being a retrospective analysis, has inherent limitations. The proportion of cases with deep stromal infiltration and tumor diameters > 4 cm was significantly higher in the open surgery group than in the laparoscopic group. This discrepancy may have led to greater caution among surgeons when selecting laparoscopic surgery for cervical cancer patients, potentially creating an imbalance between the conditions of the two groups. In addition, this research primarily focused on survival outcomes and relevant clinical parameters and did not follow up on patients' subjective experiences. Future follow-up studies should strengthen the assessment of patients' satisfaction with surgical outcomes, quality of life, and related aspects. Despite these confounding factors, the findings of this study provide evidence supporting the value of laparoscopy in radical surgery for early-stage cervical cancer. Under specific conditions, laparoscopic surgery remains a promising option for early-stage cervical cancer. Clinical stage, vascular invasion, and tumor diameter > 4 cm were identified as independent risk factors influencing postoperative survival in patients with early-stage cervical cancer. To ensure the safety and efficacy of laparoscopic surgery, as well as to guide the choice of surgical approaches for early-stage cervical cancer, future research should include more detailed subgroup analyses, larger multicenter datasets, and longer follow-up durations. At the same time, it is crucial to strengthen follow-up on patient satisfaction and quality of life so that a more comprehensive analysis can be conducted when choosing between surgical methods, thereby providing more personalized treatment plans for patients.
One hypothesis is that uterine manipulator use contributes to parametrial migration, lymphovascular space invasion, pelvic metastasis, and distant metastasis (Krizova et al. ; Logani et al. ; Uppal et al. ). Studies indicate that the proportion of “tissue structure fragmentation and detachment of cancer cells from the stroma” was significantly higher in pathological specimens from laparoscopic surgery groups compared to open surgery groups (45.0% vs. 12.6%) (Krizova et al. ). Uppal et al. (Janda et al. ) found no recurrence in patients who underwent laparoscopic surgery without a uterine manipulator, while recurrence rates reached 7% in patients using intrauterine manipulators and 11% in those using vaginal manipulators. Interestingly, in early-stage endometrial and ovarian cancer cases, manipulator use during laparoscopic surgery showed no significant differences in prognosis compared to open surgery (Gueli et al. ; Lin et al. ). In this study, the laparoscopic group used manipulators, yet the proportions of interstitial infiltration and paruterine invasion were not significantly higher than in the open surgery group ( P > 0.05). Moreover, the prognosis was comparable between the groups. The specific mechanism contributing tumor dissemination requires further investigation and validation through extensive trials.
|
metaExpertPro: A Computational Workflow for Metaproteomics Spectral Library Construction and Data-Independent Acquisition Mass Spectrometry Data Analysis | 91e3dd9d-4322-4a13-8572-25be26081384 | 11795700 | Biochemistry[mh] | Experimental Design and Statistical RationaleData Acquisition Methods and Batch Design for Human Fecal Samples Collected in This StudyDefinition of Factual FDR and Empirical FDRMultivariate Statistical AnalysisMetaproteomic Protein Extraction and Trypsin DigestionMG DNA Extraction, Sequencing, and Gene PredictionHigh-pH Reversed-Phase FractionationDDA Mass Spectrometry Acquisition for Library GenerationDIA-MS Acquisition for Peptide and Protein QuantificationImplementation of metaExpertPro WorkflowDatabases Construction in the Benchmark Tests In the benchmark tests with HeLa cell and human tissue datasets, the reviewed human protein database was first downloaded from UniProt (date 20211213), containing 20,375 protein sequences. The mouse gut microbiome database was downloaded from the microbiome database ( https://db.cngb.org/microbiome/genecatalog/genecatalog/?gene_name = Mouse%20Gut ), which contains approximately 2.6 million protein sequences. The 1× mouse microbiome database represents a random selection of 20,375 protein sequences from the entire mouse gut microbiome database. Similarly, the 10× mouse microbiome database consists of 203,750 randomly selected protein sequences, and the 100× database follows the same pattern. To ensure uniformity in the order of human and mouse microbiome protein sequences, we interspersed human protein sequences at regular intervals among the mouse microbiome protein sequences. The CPU dataset benchmark tests include two types of databases. One is the CPU MG database supplemented with different subsets of the human gut microbiome catalog database (IGC), and the other is the CPU MG database supplemented with varying numbers of human gut microbiota species. The CPU MG database contains 123,088 protein sequences. The 1× IGC database is a random selection of 123,088 protein sequences from the IGC database, while the 10× IGC database includes 1,230,880 randomly selected protein sequences, following the same pattern for the 100× IGC database. The databases for 5, 16, 32, and 48 human gut microbiota species contain the protein sequences from the respective number of human gut microbiota species. To ensure a clear distinction between the CPU and human gut microbiota species, there are no overlapping species between the CPU samples and the added human gut microbiota species. In this study, we utilized multiple datasets to evaluate the identification depth, reproducibility, and accuracy of the metaExpertPro software. First, we utilized 60 human fecal samples to assess the performance of metaExpertPro in analyzing human fecal sample data acquired from two different types of mass spectrometry instruments. Next, we compared the measurement results and runtime of metaExpertPro with three existing metaproteomics software tools—MetaLab, ProteoStorm, and glaDIAtor using one public dataset. Additionally, to estimate workflow accuracy, we calculated the factual FDR of protein groups, the F-score (the harmonic mean of precision and recall) forSubsequently, we examined the effects of different databases on spectral library construction and quantitative matrix generation using five major human gut microbial databases. 
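The spiked benchmark databases described above (1×, 10×, or 100× non-sample entries with sample-matched sequences interspersed at regular intervals) could be assembled along the lines of the sketch below. The file paths and the interleaving helper are hypothetical illustrations of the stated design rather than the exact metaExpertPro code.

```python
import random

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.rstrip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line, []
            else:
                seq.append(line)
    if header is not None:
        yield header, "".join(seq)

target = list(read_fasta("human_reviewed_uniprot.fasta"))      # sample-matched DB
spike_pool = list(read_fasta("mouse_gut_catalog.fasta"))       # non-sample entries

fold = 10                                                      # 1x / 10x / 100x
spike = random.sample(spike_pool, fold * len(target))

# Intersperse target sequences at regular intervals among the spiked entries.
merged, step = [], len(spike) // len(target)
for i, entry in enumerate(spike):
    merged.append(entry)
    if step and i % step == 0 and i // step < len(target):
        merged.append(target[i // step])

with open(f"benchmark_target_plus_{fold}x_spike.fasta", "w") as out:
    for header, seq in merged:
        out.write(f"{header}\n{seq}\n")
```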
Finally, we identified gut microbial proteins, functions, and taxa associated with DLP and explored the potential interaction network between microbes and the host using 62 human fecal samples from the Guangzhou Nutrition and Health Study (GNHS) cohort, including the aforementioned 60 samples. The details regarding the numbers and types of samples are as follows. Declaration of Helsinki; Human Fecal Samples in the metaExpertPro Performance Assessment and DLP-Association Analysis; Public Datasets in Software Comparison and Benchmark Tests. The Guangzhou Nutrition and Health Study was conducted in accordance with the Declaration of Helsinki, and written informed consent was obtained from all participants. A total of 62 fecal samples were collected from 31 subjects without DLP and 31 subjects (40–75 years old) with DLP from the GNHS cohort. These individuals had not received any antibiotic treatment in the 2 weeks before biomaterial collection, to avoid antibiotic effects on the gut microbiome. The fecal samples were immediately homogenized, stored on ice, and then transferred to −80 °C within 4 h. Additionally, the corresponding metadata variables, including age, gender, blood triglycerides (TG), total cholesterol (TC), low-density lipoprotein cholesterol (LDL), and high-density lipoprotein cholesterol (HDL), were collected either by questionnaire or by blood biochemical measurement. DLP was defined as an abnormal value for one or more of TG, TC, LDL, and HDL (TG ≥ 2.3 mmol/L, TC ≥ 6.2 mmol/L, LDL ≥ 4.1 mmol/L, or HDL ≤ 1.0 mmol/L) or medical treatment for DLP. In the software comparison among metaExpertPro, MetaLab, ProteoStorm, and glaDIAtor, we utilized six DDA-MS raw data files and six DIA-MS raw data files from a public human fecal sample dataset (PXD008738). Specifically, the six DDA-MS files were searched against the integrated gene catalog (IGC) database to compare peptide identification performance among the four software tools. Subsequently, the six DIA-MS files were analyzed to compare the DIA identification and quantification of metaExpertPro and glaDIAtor. All parameters and databases were kept consistent across the different software analyses. Enzyme specificity was set to "Trypsin/P" with a maximum of one missed cleavage. Precursor mass tolerance and fragment mass tolerance were set at 10 ppm and 0.02 Da, respectively. All the tests were performed on a computer with an AMD EPYC 7502 32-core processor and 512 GB RAM. In the benchmark tests, we utilized four public datasets, from PXD021928, PXD006118, IPX0001400000, and PXD008738, to assess accuracy at the protein and taxon levels. The PXD021928 dataset includes one raw data file acquired on a Q Exactive HF MS instrument using a human HeLa cell sample. The PXD006118 dataset was generated from synthetic communities consisting of 32 organisms, with samples prepared under three conditions: "equal cell number" (C), "equal protein amount" (P), and "uneven cell number or protein amount" (U). Each condition (C, P, U) includes four raw data files, acquired on a Q Exactive Plus MS instrument. The IPX0001400000 dataset consists of 155 raw data files from human thyroid tissue samples and 100 raw data files from human pancreas tissue samples, acquired using Q Exactive HF and Q Exactive MS instruments, respectively. The PXD008738 dataset includes not only the aforementioned 12 data files from human fecal samples but also six data files from a mixture of 12 different bacterial strains ("12-mix") isolated from human fecal samples.
The one raw data file from PXD021928, the four raw data files from the P condition of PXD006118, and all 255 raw data files from IPX0001400000 were utilized for factual FDR estimation at the protein level. For identification and quantification accuracy assessment at the taxon level, we used all six raw data files from the "12-mix" of PXD008738 and 12 raw data files from PXD006118. A spectral library was constructed using a mixture of peptide samples from the 62 human fecal samples. The pooled sample was fractionated using high-pH reversed-phase LC. The fractionated peptides were then spiked with iRT (Biognosys) and subjected to DDA-MS data acquisition on timsTOF Pro (Bruker) and Orbitrap Exploris 480 (Thermo Fisher Scientific Inc) mass spectrometers. Each of the 62 human fecal samples was also used for DIA-MS data acquisition. To minimize the impact of batch effects on data analysis, we randomly shuffled the samples based on age and gender and divided them into three batches. In each batch, we included two pairs of intra-batch biological replicates and two pairs of intra-batch technical replicates, as well as one pair of inter-batch biological replicates and one pair of inter-batch technical replicates. The factual FDR is utilized to estimate the 'true' FDR using MS/MS spectra data. The basis of the factual FDR calculation is having prior knowledge of all spectra or peptides in the dataset, enabling straightforward calculation using the formula: Factual FDR = False positives / (False positives + True positives). The empirical FDR is defined as the FDR in the target-decoy approach (TDA) and is calculated using the following formula: Empirical FDR = N_decoy / N_target, where N_target (N_decoy) is the number of positive target (decoy) PSMs, peptides, or proteins. The NA values in the quantitative matrices at the peptide, protein, functional, and taxonomic levels were imputed with 0.8 times the minimum intensity, and all intensity values were log10 transformed for statistical analysis. The reproducibility of the quantitative proteins, functions, and taxa in biological replicate samples was estimated by Spearman correlation. The intensity comparisons of the identified peptides and protein groups between glaDIAtor and metaExpertPro were conducted using the Wilcoxon rank-sum test. The Clusters of Orthologous Genes (COGs), KEGG Orthology (KO) terms, human proteins, and species significantly associated with DLP were determined by a general linear model (adjusting for the confounders sex, age, and Bristol Stool Scale; p-value < 0.05 and |beta coefficient| > 0.2). The differentially expressed human proteins, COGs, and species were identified by the Wilcoxon rank-sum test (p-value < 0.05). The co-expressed COGs and human proteins were identified using the Spearman correlation of their abundance in the 62 human fecal samples (|r Spearman| ≥ 0.2, Benjamini-Hochberg-adjusted p-value < 0.05). The detailed methods of sample preparation for metaproteomics and metagenomics, high-pH reversed-phase fractionation, data acquisition, and data analysis are described in the following sections. The gut microbiota was first enriched using differential centrifugation. In detail, about 200 mg of feces was resuspended in 500 μl cold PBS and centrifuged at 500 g, 4 °C for 5 min. The supernatant was then transferred into a new tube. The above process was repeated three times. All the supernatants were combined (about 1.5 ml) and centrifuged at 500 g, 4 °C for 10 min to remove debris from the fecal samples.
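The two FDR definitions above translate directly into code; the sketch below assumes identifications can be labeled as sample-derived or spiked-in/contaminant by an accession prefix, which is a hypothetical convention used only for illustration.

```python
def factual_fdr(accepted, is_false=lambda acc: acc.startswith(("SPIKE_", "CON_"))):
    """Factual FDR = false positives / (false positives + true positives)."""
    false_pos = sum(1 for acc in accepted if is_false(acc))
    true_pos = len(accepted) - false_pos
    return false_pos / (false_pos + true_pos) if accepted else 0.0

def empirical_fdr(n_target: int, n_decoy: int) -> float:
    """Target-decoy estimate: FDR = N_decoy / N_target."""
    return n_decoy / n_target if n_target else 0.0
```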
Then, the microbial cells were collected by centrifugation at 18,000 g , 4 °C for 20 min. Next, the microbial pellets were used for protein extraction . Briefly, 250 μl lysis buffer (4% w/v SDS and cOmplete Tablets (Roche) in 50 mM Tris–HCl, pH = 8.0) was added into the microbial pellets and the mixture was boiled at 95 °C for 10 min. Then, the mixture was ultrasonicated at 40 Khz (SCIENTZ) for 1 h on ice. Finally, to further discard the cell debris, the mixture was centrifuged at 18,000 g for 5 min, and the proteins were precipitated overnight at −20 °C using a 5-fold volume of acetone. Next, the in-solution digestion method was performed as follows. After purifying (washing by acetone) and re-dissolving (using 8 mM urea and 100 mM ammonium bicarbonate) the precipitated proteins, about 50 μg proteins from each sample were reduced with 10 mM tris (2-carboxyethyl) phosphine (Adamas-beta) and then alkylated with 40 mM iodoacetamide (Sigma-Aldrich). Proteins were predigested with 0.5 μg trypsin (Hualishi Tech) for 4 h at 32 °C. Then the proteins were further digested with another 0.5 μg trypsin for 16 h at 32 °C. The tryptic peptides were desalted using solid-phase extraction plates (Thermo Fisher Scientific, SOLAμ) and then freeze-dried for storage. Dried peptides were finally resuspended in a solution (2% acetonitrile, 98% water, and 0.1% formic acid [FA]) before MS acquisition. The MG raw data was derived from the previous study . Briefly, the raw sequencing reads were first filtered and trimmed with PRINSEQ (version 0.20.4) for quality control. The raw reads aligned to the human genome ( Homo sapiens , UCSC hg19) were removed using Bowtie2 (version 2.2.5) . Then, the remaining reads were used for MG assembly using MEGAHIT (version 1.2.9) and binning the contigs with MetaBAT (version 2.12.1) by default parameters. We further clustered and de-replicated the metagenome-assembled genome at an estimated species level (ANI ≥95%) using dRep (version 3.0.0) . The minimum genome completeness and maximum genome contamination were set to 75 and 25, respectively. Protein-coding sequences (CDS) for each metagenome-assembled genome were predicted and annotated with Prokka (version 1.13.3) . All the predicted protein sequences were compiled to generate the MG protein database. The public human gut gene catalog database IGC+ was donated by Xu Zhang et al , and the Unified Human Gastrointestinal Protein (UHGP) database was downloaded from https://www.ebi.ac.uk/metagenomics/genome-catalogues/human-gut-v2-0 . Cd-hit (version 4.8.1) was used for the integration of MG and IGC+ or UHGP database with the following parameters: -c 0.95 -n 5 -M 16000 -d 0 -T 32. For the 62 fecal samples, approximately 5 μg peptides were collected from each tryptic peptide sample to form a pooled sample for high-pH fractionation. The pooled sample was then fractionated using high-pH reversed-phase LC. The mobile phase of buffer A was water with 0.6% ammonia (pH = 10), and buffer B was 98% acetonitrile and 0.6% ammonia (pH = 10). Specially, about 300 μg tryptic peptides were separated using a nanoflow DIONEX Ultimate 3000 RSLC nano System (Thermo Fisher Scientific) with an XBridge Peptide BEH C18 column (300 Å, 5 μm × 4.6 mm × 250 mm) at 45 °C. A 60 min gradient from 5% to 35% buffer B with a flow rate of 1 ml/min was applied. A total of 60 fractions were collected and further combined into 30 fractions. Finally, the fraction samples were freeze-dried and re-dissolved in 2% acetonitrile with 98% water and 0.1% FA. 
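The catalog-integration step above, which merges the sample-specific MG database with IGC+ or UHGP using cd-hit at 95% identity, could be wrapped as follows; the file names are placeholders, while the cd-hit flags mirror the parameters reported in the text.

```python
import subprocess

def merge_protein_catalogs(concatenated_fasta: str, output_fasta: str) -> None:
    """Cluster the concatenated MG + public catalog at 95% identity with cd-hit."""
    subprocess.run(
        [
            "cd-hit",
            "-i", concatenated_fasta,  # MG proteins appended to IGC+ or UHGP
            "-o", output_fasta,        # non-redundant merged database
            "-c", "0.95",              # sequence identity threshold
            "-n", "5",                 # word size appropriate for c >= 0.7
            "-M", "16000",             # memory limit in MB
            "-d", "0",                 # keep full headers in the .clstr file
            "-T", "32",                # number of threads
        ],
        check=True,
    )
```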
A total of 30 fractionated samples were obtained. Each fraction was analyzed by DDA-MS acquisition with a 60 min gradient for spectral library generation. The remaining peptides from each sample were used for DIA-MS acquisition. The fractionated peptides were first spiked with iRT (Biognosys) . For the timsTOF Pro (Bruker)–based DDA mass spectrometry acquisition, two gradients of 90 min and 60 min were used, respectively. The 90 min LC gradient was linearly increased from 2% to 22% buffer B for 80 min, followed by a second linear gradient from 22% to 35% buffer B for 10 min (buffer A: 0.1% FA in water; buffer B: 0.1% FA in acetonitrile [ACN]). The 60 min LC gradient was linearly increased from 5% to 27% buffer B for 50 min, followed by a second linear gradient from 27% to 40% buffer B for 10 min. The peptides were loaded at 217.5 bar on a precolumn (5 μm, 100 Å, 5 mm × 300 μm I.D.) in 0.1% FA/water and then separated by a nanoElute UHPLC System (Bruker Daltonics) equipped with an in-house packed 15 cm analytical column (75 μm ID, 1.9 μm 120 Å C18 beads) at a flow rate of 300 nl/min. The timsTOF Pro was operated in ddaPASEF mode with 10 consecutive PASEF MS/MS scans after a full scan in a total cycle. The capillary voltage was set to 1400 V. The MS and MS/MS spectra were acquired from 100 to 1700 m/z. The TIMS section was operated with a 100 ms ramp time and a scan range of 0.6 to 1.6 V ·s/cm 2 . A polygon filter was used to filter out singly charged ions. For all experiments, the quadrupole isolation width was set to 2 Th for m/z < 700 and 3 Th for m/z > 800. The collision energy was ramped linearly as a function of mobility from 20 eV at 1/K 0 = 0.6 V ·s/cm 2 to 59 eV at 1/K 0 = 1.60 V ·s/cm 2 . For the Orbitrap Exploris 480 mass spectrometer (Thermo Fisher Scientific Inc.) based DDA mass spectrometry acquisition, the fractionated peptides spiked with iRT were loaded onto a pre-column (3 μm, 100 Å, 20 mm × 75 mm i.d., Thermo Fisher Scientific) using a Thermo Scientific UltiMate 3000 RSLCnano LC a U3000 LC system. The peptides were then separated at a flow rate of 300 nl/min using a 60 min LC gradient on an in-house packed 15 cm analytical column (75 μm ID, 1.9 μm, C18 beads) with a linear gradient from 5% to 28% buffer B for 60 min. Next, the column was washed with 80% buffer B. The mobile phase B consisted of 0.1% FA in MS-grade ACN, while the mobile phase A consisted of 0.1% FA in 2% ACN and 98% MS-grade water. The eluted peptides were analyzed by an Exploris 480 MS with the FAIMS Pro (High field asymmetric waveform ion mobility spectrometry) interfacing in standard DDA acquisition mode. Compensation voltage was set at two different CVs, −42 and −62 V, respectively. Gas flow was applied with 4 L/min with a spray voltage set to 2.1 kV. The DDA was performed using the following parameters. MS1 resolution was set at 60,000 at m/z 200 with a normalized automatic gain control (AGC) target of 300%, and the maximum injection time was set to 20 ms. The scan range of MS1 ranged from 350 to 1200 m/z. For MS2, the resolution was set to 15,000 with a normalized AGC target of 200%. The maximum injection time was set as 20 ms for MS1. Dynamic exclusion was set at 30 s. Mass tolerance of ±10 ppm was allowed, and the precursor intensity threshold was set at 2e4. The cycle time was 1 s, and the top-abundance precursors (charge state 2−6) within an isolation window of 1.6 m/z were considered for MS/MS analysis. For precursor fragmentation in HCD mode, a normalized collision energy of 30% was used. 
All data were acquired in centroid mode with positive polarity, and peptide match and isotope exclusion were turned on. We obtained a total of 90 DDA-MS raw data profiles. These included 30 profiles from the timsTOF Pro MS instrument with a 60 min gradient, 30 profiles from the timsTOF Pro MS instrument with a 90 min gradient, and 30 profiles from the Exploris 480 MS instrument with a 60 min gradient. For the timsTOF Pro-based DIA-MS acquisition, 300 ng peptides were trapped at 217.5 bar on the precolumn and then separated using the same 60 min LC gradient as the ddaPASEF acquisition described above. The ion mobility range was limited to 0.7 to 1.3 V·s/cm². Four precursor isolation windows were applied to each 100 ms diaPASEF scan. Fourteen of these scans covered the doubly and triply charged peptides' diagonal scan line in the m/z ion mobility plane. The precursor mass range 384 to 1087 m/z was covered by 28 m/z narrow windows with a 3 m/z overlap between adjacent ones. Other parameters were the same as the settings in the ddaPASEF acquisition. For the Exploris 480-based DIA-MS acquisition, 500 ng peptides were separated by the LC methods with a slight modification from the DDA-MS LC methods: the initial phase B of the gradient was increased from 5% to 7% to allow more effective separation time. The spray voltage of FAIMS was set to 2.2 kV, and the other FAIMS settings were consistent with those of the DDA-MS acquisition. In DIA mode, the full MS resolution was set to 60,000 at m/z 200, and the full MS AGC target was 300% with an injection time of 50 ms. The mass range was set to 390 to 1010. The AGC target value for fragment spectra was set at 2000%. Fifteen isolation windows of 15 Da were used for the −62 V compensation voltage with an overlap of 1 Da, and 19 isolation windows of 20 Da were used for the −42 V compensation voltage with an overlap of 1 Da. The resolution was set to 15,000 and the injection time to 54 ms. The normalized collision energy was set at 32%. Overall, 62 diaPASEF raw data profiles and 60 DIA-MS (Exploris 480) raw data profiles were obtained for the human fecal samples. Stage 1: DDA-MS-Based Spectral Library Generation; Stage 2: DIA-MS-Based Peptide and Protein Quantification; Stage 3: Functional and Taxonomic Annotation; Stage 4: Quantitative Matrix Generation. The peptide and protein quantitative matrices, as well as the correspondences between peptides and proteins, were generated by DIA-NN (version 1.8). The quantitative peptides or protein groups were classified as either microbial-derived or human-derived. Only the peptides or protein groups specifically belonging to the microbiota or to the human host were retained for the microbial or human peptide quantification ( C ). For the microbial taxonomic quantification, the quantitative microbial peptides were annotated to taxa, which were further filtered by the number of corresponding peptides (i.e., 2, 3, or 5). For each taxon, the abundance was calculated by summing the intensities of all corresponding peptides. For the microbial functional quantification, the quantitative microbial proteins were annotated to COGs or KOs. The abundance of each COG or KO was determined by summing the abundance of the corresponding protein groups. The COGs or KOs could then be further annotated to COG or KO categories, and the abundance of each COG or KO category was the sum of the abundance of the corresponding COGs or KOs.
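A pandas sketch of the taxon- and function-level roll-ups just described is given below. The long-format input tables (`pep` with peptide, sample, intensity, and taxon columns; `prot` with protein group, sample, intensity, and COG columns) are hypothetical stand-ins for the annotated DIA-NN output.

```python
import pandas as pd

def taxon_matrix(pep: pd.DataFrame, min_peptides: int = 3) -> pd.DataFrame:
    """Taxon abundance = sum of peptide intensities, keeping taxa with enough peptides."""
    support = pep.groupby("taxon")["peptide"].nunique()
    kept = support[support >= min_peptides].index
    filtered = pep[pep["taxon"].isin(kept)]
    return filtered.pivot_table(index="taxon", columns="sample",
                                values="intensity", aggfunc="sum", fill_value=0)

def function_matrix(prot: pd.DataFrame, level: str = "cog") -> pd.DataFrame:
    """COG/KO abundance = sum of the corresponding protein-group abundances."""
    return prot.pivot_table(index=level, columns="sample",
                            values="intensity", aggfunc="sum", fill_value=0)
```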
As a result, nine quantitative matrices including microbial peptide, protein group, taxon, COG, KO, COG category, and KO category matrices, as well as human peptide and protein group matrices were generated. Finally, each matrix was further divided into sample and quality control matrices according to the sample types. FragPipe (version 20.0), equiped with MSFragger (version 3.8) , Philosopher (version 5.0.0) , and EasyPQP (version 0.1.35) , was employed to conduct the database search. Specially, reference database was formed by merging the human gut microbial database, the human protein database (Swiss-Prot, dated 20211213), and the contaminated protein database (MaxQuant, dated 20160325). Then, decoy (reversed) sequences were added to the reference database using the Philosopher module of FragPipe. To optimize computational memory usage, the database was partitioned into multiple smaller databases utilizing the database split function of MSFragger (msfragger.misc.slice-db = 20). The default number of split databases was set as 20 based on memory requirement tests conducted with a DDA-MS raw data obtained on the timsTOF MS instrument searching against the IGC + database. Users have the flexibility to adjust this parameter via the metaExpertPro command line depending on the size of the reference database. The mass calibration process of MSFragger was disabled to mitigate its substantial memory requirements (msfragger.calibrate_mass = 0). PSM validation was performed using PeptideProphet (peptide-prophet.run-peptide-prophet = true; percolator.run-percolator = false; msbooster.run-msbooster = false). The precursor and fragment tolerance were set to 20 ppm. The fragment ion series was set to b and y ions. Enzyme specificity was set to ‘trypsin,’ allowing cleavages before proline. The isotope error was set to 0/1/2. The peptide length was set from 7 to 50. The carbamidomethylation of cysteine (57.021464 Da) was set as a fixed modification. The oxidation of methionine (15.99490 Da) and acetyl (Protein N-term) (42.010600 Da) were set as variable modifications. The remaining parameters were configured according to the default settings of FragPipe's spectral library generation workflow. The DIA-MS–based peptide and protein quantification was carried out using DIA-NN (version 1.8) . Specially, all the DIA-MS raw data files were searched against the final spectral library, with maximum mass accuracy tolerances set to 10 ppm for both MS1 and MS2 spectra. The quantification mode was set to “Robust LC (high precision).” The protein inference in DIA-NN was disabled to use the protein groups in the final spectral library. The oxidation of methionine was set as a variable modification and all other settings were left at their default values. For the functional annotation, protein sequences were input into the eggnog-mapper (version 2.1.5) and GhostKOALA (version 2.2) for COG and KO annotation, respectively ( D ). For the COG annotation, the narrowest COG, KOG (Eukaryotic clusters of orthologous genes), and arCOG (Archaeal clusters of orthologous genes) of each protein were extracted. And the KO annotation results were further filtered by a bit-score threshold of 60. For proteins in a protein group, the annotated results were retained when all proteins were annotated to the same COG or KO. The taxonomic annotation was carried out using the peptide-centric taxonomic annotation software Unipept (version 3.1.0). 
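The rule that a protein group inherits a COG or KO only when all of its member proteins carry the same annotation (with KO hits first filtered at a bit score of 60) can be sketched as below; the two input tables and their column names are hypothetical.

```python
import pandas as pd

def protein_group_ko(groups: pd.DataFrame, hits: pd.DataFrame,
                     min_bitscore: float = 60.0) -> dict:
    """groups: [protein, protein_group]; hits: [protein, ko, bit_score]."""
    hits = hits[hits["bit_score"] >= min_bitscore]
    ko_by_protein = hits.set_index("protein")["ko"]     # assumes one hit per protein
    assigned = {}
    for pg, members in groups.groupby("protein_group")["protein"]:
        kos = {ko_by_protein.get(p) for p in members}
        if len(kos) == 1 and None not in kos:           # every member shares one KO
            assigned[pg] = kos.pop()
    return assigned
```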
Because missed cleavages cannot be matched with the Unipept database directly, we first performed an in silico digestion of the peptides using the "Advanced missed cleavage handling" rule in Unipept. Then we filtered out the peptides with fewer than 5 or more than 50 amino acids before annotating them using Unipept with the "Equal I and L" rule. For peptides corresponding to more than one filtered peptide, with the filtered peptides annotated to different taxa, we first determined whether the taxa belong to the same branch. If the filtered peptides were annotated to the same branch, we retained the narrowest taxa. Otherwise, we retained the widest taxa ( , A and B ).
Overview of metaExpertPro Workflow
In this study, we proposed a metaproteomics data analysis workflow called metaExpertPro for the measurement of peptides, protein groups, functions, and taxa of gut microbes as well as host proteins based on DDA-MS and DIA-MS data from either Thermo Fisher Orbitrap (.mzML format) or Bruker (.d format) mass spectrometers. Briefly, the workflow includes four stages: DDA-MS–based spectral library generation, DIA-MS–based peptide and protein quantification, functional and taxonomic annotation, as well as quantitative matrix generation. The implementation of the metaExpertPro workflow is shown in A with more details explained below. In the first stage, we applied FragPipe (version 20.0) software for spectral library generation ( A ). To minimize computational memory demands, the original database (e.g., the IGC database of the human gut microbiome, consisting of 10,352,085 protein sequences, and UHGP, consisting of 13,811,247 protein sequences) was divided into multiple databases utilizing the database split parameter of MSFragger. The more the database is split, the less memory is required, but the longer the runtime. Therefore, users need to judiciously choose the number of database splits based on the quantity of protein sequences contained in the database. It is recommended to split the IGC or UHGP-90 database, which contains approximately 10 million protein sequences, into 10 subsets for analyzing a raw data file from timsTOF or Orbitrap Exploris 480 using metaExpertPro on a computer with 128 GB of RAM. If the number of raw data files increases, the required memory will increase accordingly. Then, each DDA-MS raw data file was searched against each split database, generating a pepXML and a pin file. All the pepXML and pin files for each DDA-MS raw data file were aggregated for PSM validation using either PeptideProphet or MSBooster-Percolator. To determine the appropriate PSM validation method, we conducted two benchmark tests. Both utilized the factual FDR to estimate the 'true' FDR of protein groups. Unlike the empirical FDR commonly calculated by the target-decoy approach (TDA) , the factual FDR calculates the ratio of FP identifications to all identifications based on previously known PSMs, peptides, or proteins in the dataset .
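As a concrete illustration of this factual-FDR definition before the benchmark details below: the calculation reduces to counting identifications that cannot originate from the sample and dividing by all identifications. The accession prefixes used here to mark entrapment and contaminant entries are assumptions for the sketch.

```python
# Sketch of the factual FDR: non-sample-matched (entrapment/contaminant) hits
# divided by all identified hits in a target-only result list.
def factual_fdr(protein_ids, entrapment_prefixes=("IGC_", "CONTAM_")):
    """protein_ids: list of accession strings from the filtered identification result."""
    false_hits = sum(
        1 for acc in protein_ids
        if any(acc.startswith(tag) for tag in entrapment_prefixes)
    )
    return false_hits / len(protein_ids) if protein_ids else 0.0

# Example: 3 entrapment/contaminant hits among 100 identifications -> 0.03
ids = ["sp|P12345|HUMAN_PROT"] * 97 + ["IGC_000001", "IGC_000002", "CONTAM_trypsin"]
print(round(factual_fdr(ids), 3))
```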
Specifically, we first selected standard samples of known content, as knowledge of the sample composition allows for the accurate discrimination between false and true positive (TP) assignments. Next, we added varying proportions of nonsample-derived protein sequences to the sample-matched reference database. Then, the MS data files acquired from standard samples were searched against the database using the TDA. Finally, the factual FDR was calculated by dividing the number of nonsample-matched protein identifications by the total number of protein identifications. The benchmark tests utilized the public dataset (PXD006118) from a synthetic community of 32 organisms, searching against a sample-matched MG database, which was supplemented with either a subset of the IGC database, containing 1×, 3×, 5×, and 10× the number of proteins in the MG database ( A ), or with 5, 16, 32, and 48 human gut microbial species ( B ). FPs included contaminant proteins, IGC proteins, or proteins from the added microbial species. Both benchmark tests showed a lower factual FDR using the PeptideProphet method (0–0.058 using the MG_IGC database and 0–0.037 using the MG_species database) compared to the MSBooster-Percolator method (0–0.091 using the MG_IGC database and 0–0.048 using the MG_species database), despite the MSBooster-Percolator method achieving 8.7 to 13.3% higher protein group identifications than the PeptideProphet method ( , C and D ). To maintain a relatively low factual FDR, we selected PeptideProphet as the default PSM validation method in metaExpertPro. More details of the spectral library generation step are described in Experimental procedures. In the second stage, we applied DIA-NN software to identify and quantify peptides and proteins from each DIA-MS data file ( A ). In the third stage, we performed taxonomic and functional annotation for the spectral library generated in stage one. Annotating the spectral library instead of the quantitative matrices facilitates its repeated use in different DIA-MS quantification experiments using the same spectral library. The taxonomic annotation was performed using the peptide-centric taxonomic annotation software Unipept , which has been shown to exhibit more accurate and precise taxonomic annotation compared to Kraken2 and Diamond . Because Unipept only indexes perfectly cleaved tryptic peptides that are 5 to 50 amino acids long (inclusive) , we in silico digested the peptides and filtered the peptide length before the Unipept taxonomic annotation. If the digested and filtered peptides were annotated as multiple taxa by Unipept, the annotation of the peptide was determined as follows: if the multiple taxa belong to the same branch, the narrowest taxon was selected; otherwise, the widest taxon was chosen. If the multiple taxa did not have common ancestors (belonging to different kingdoms), the taxon annotation results were removed ( , A and B ). The metaproteomic functional annotation tools eggnog-mapper and GhostKOALA were integrated into the pipeline to process functional annotation ( A ). In the fourth stage, we generated quantitative matrices at nine levels including human peptide, microbial peptide, human protein group, microbial protein group, COG, KO, COG category, KO category, and taxonomy ( C ). The peptides and protein groups were first classified as either microbial or human in origin to separately generate the human peptide, microbial peptide, human protein group, and microbial protein group matrices.
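One possible reading of the taxon-consensus rule just described is sketched below, with each candidate taxon represented by its lineage from kingdom down to the assigned rank. Whether "widest taxon" means the broadest annotated taxon or the lowest common ancestor is not fully specified in the text; this sketch implements the former and notes the alternative in a comment.

```python
# One interpretation of the consensus rule: taxa on one branch (a nested chain)
# resolve to the deepest (narrowest) taxon; taxa on different branches resolve
# to the widest annotated taxon; taxa without a shared kingdom are discarded.
def is_chain(lineages):
    longest = max(lineages, key=len)
    return all(lin == longest[:len(lin)] for lin in lineages)

def resolve_peptide_taxa(lineages):
    """lineages: list of lineages, each like ['Bacteria', 'Bacillota', ..., 'Blautia']."""
    if not lineages:
        return None
    if len({lin[0] for lin in lineages}) > 1:   # different kingdoms -> drop peptide
        return None
    if is_chain(lineages):                      # same branch -> narrowest taxon
        return max(lineages, key=len)[-1]
    # Different branches -> widest annotated taxon.
    # An alternative reading would return the lowest common ancestor instead.
    return min(lineages, key=len)[-1]

print(resolve_peptide_taxa([["Bacteria", "Bacillota"],
                            ["Bacteria", "Bacillota", "Clostridia", "Blautia"]]))  # 'Blautia'
print(resolve_peptide_taxa([["Bacteria", "Bacillota", "Clostridia"],
                            ["Bacteria", "Bacteroidota"]]))                        # 'Bacteroidota'
```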
Peptides and protein groups of both human and microbial origin were removed from the quantitative results to avoid assignment ambiguity. Next, the taxonomic annotation results generated in the third stage were used to annotate the microbial peptide quantification matrix. We then calculated the number of peptides corresponding to each taxon and filtered for reliable taxa with more than 1, 3, 5, 10, 15, and 20 peptides. Taxon abundance was determined using the sum of the abundances of the corresponding peptides, and the microbial taxon quantification matrices were generated accordingly. For the functional quantification matrices, the microbial protein groups were first annotated using the results from the third stage. The COG or KO abundance was determined by summing the abundances of the corresponding protein groups, and the COG or KO category abundance was determined by summing the abundances of the corresponding COGs or KOs. In summary, the metaExpertPro pipeline integrates high-performance proteomic analysis tools (FragPipe and DIA-NN) along with functional and taxonomic annotation software tools, employing rigorous filter criteria to provide a one-stop platform for large-scale DIA metaproteomics analysis. The pipeline and its required environment have been packaged into a Docker image based on CentOS 8 for convenient usage ( https://github.com/guomics-lab/metaExpertPro ). Subsequently, to assess the performance of metaExpertPro in analyzing human gut microbial sample data, we conducted tests to evaluate identification depth and result reproducibility using two MS instruments. Additionally, for workflow accuracy estimation, we computed the factual FDR of protein groups, the F-score (the harmonic mean of the precision and recall) of taxa, and the correlation between measured taxa and true protein amounts in multiple benchmark tests. Furthermore, we examined the effects of databases on spectral libraries and quantitative matrices using five mainstream human gut microbial databases. Finally, we applied metaExpertPro in the metaproteomic analysis of DLP patients to explore potential associations between human gut microbial functions and taxa related to DLP ( B ). Detailed descriptions of all tests are provided below.
In-Depth Identification and High Reproducibility of metaExpertPro Workflow in Human Fecal Samples
To demonstrate the benefits of metaExpertPro, we applied it to the metaproteomic analysis of 62 human fecal samples from 62 middle-aged and elderly volunteers of the GNHS . Specifically, we first mixed the 62 tryptic peptide samples to create a pooled sample. The pooled sample was then subjected to high-pH fractionation, generating 30 fraction samples for DDA-MS data acquisition. For DIA-MS data acquisition, 60 samples were analyzed using both timsTOF Pro (Bruker) and Orbitrap Exploris 480 (Thermo Fisher Scientific) MS instruments. Consequently, 30 ddaPASEF MS raw data and 60 diaPASEF raw data were acquired using timsTOF Pro, while 30 DDA-MS raw data and 60 DIA-MS raw data were acquired using Orbitrap 480 ( A ). A total of 220,365 peptides and 58,952 protein groups, including 57,862 microbial protein groups and 1065 human protein groups, were identified in the spectral library derived from timsTOF Pro ( B ). Using Exploris 480, 189,808 peptides and 51,269 protein groups, including 50,218 microbial protein groups and 1024 human protein groups, were characterized ( C ).
The average identification rate of the acquired MS spectra was 32.2% and 29.3% for the spectral libraries derived from timsTOF Pro and Exploris 480, respectively ( , D and E and ). For each sample, we quantified 43,194 ± 11,704 (mean ± SD) microbial peptides corresponding to 15,501 ± 3880 microbial protein groups and 2453 ± 398 human peptides corresponding to 537 ± 91 human protein groups on timsTOF Pro. On Exploris 480, we quantified 22,460 ± 4964 microbial peptides corresponding to 11,301 ± 2172 microbial protein groups and 1374 ± 246 human peptides corresponding to 414 ± 69 human protein groups ( , F and G and ). Due to the in-depth identification of peptides and protein groups, we also quantified an average of 90 to 92 microbial species, 68 to 71 genera, 1406–1511 COGs, and 1350–1475 KOs per human fecal sample ( , F and G and ). Another major benefit of DIA methods is the high degree of quantitative consistency. Thus, we next investigated the reproducibility of the quantified protein groups, functions, and taxa in five pairs of technical replicate samples and six pairs of biological replicate samples. As expected, PCA shows that all paired biological replicates or technical replicate samples cluster together ( , H and I ). High correlation was observed in all pairs of technical or biological replicates at each level on the two MS instruments ( , A and B ). In addition, the Bray-Curtis distance between all pairs of technical and biological replicates was low, and no statistically significant differences were observed between the first and the second repeat MS acquisition (PERMANOVA p = 0.89–1) ( , A and B ). This indicates a high level of reproducibility in the proteins, functions, and taxa quantified by metaExpertPro. In summary, metaExpertPro offers comprehensive identification and quantification capability for metaproteomics analysis of human fecal samples, utilizing MS raw data from either timsTOF or Exploris 480 instruments. Notably, integrating FragPipe and DIA-NN indicates remarkable reproducibility across replicate samples and MS instruments, ensuring reliable and consistent results.
Comparison of metaExpertPro with Other Metaproteomics Software Tools
We next compared the application scenarios and the performance of metaExpertPro with the existing metaproteomics software tools. Among them, MetaLab , MPA , and ProteoStorm are DDA-MS–based metaproteomics analysis tools. They are all compatible with Q Exactive and Orbitrap Exploris MS instruments. Additionally, ProteoStorm is also compatible with low-res LCQ/LTQ instruments ( A ). Both MetaLab and MPA can perform DDA-MS–based peptide and protein quantification in metaproteomics analysis. Furthermore, MetaLab provides additional functionalities for function and taxonomic annotation, as well as quantification. glaDIAtor is the next generation of diatools . diatools and glaDIAtor are currently the only published analysis tools available for DIA-MS metaproteomics. However, it is important to note that neither glaDIAtor nor diatools is compatible with PASEF MS instruments. metaExpertPro is the exclusive DDA-assisted DIA-based metaproteomics analysis tool that is compatible with the timsTOF MS instrument. It provides a comprehensive solution encompassing DDA-MS–based spectral library generation, DIA-MS–based peptide and protein quantification, as well as function and taxonomic annotation and quantification, all in one platform ( A ). To compare the performance of these software tools, we reanalyzed the Orbitrap-acquired DDA-MS and DIA-MS datasets from six human fecal samples published by the Elo team .
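Returning briefly to the replicate checks described above, pairwise Spearman correlation and Bray-Curtis distance between two quantification profiles of the same sample can be computed with SciPy alone; the PERMANOVA step would require an additional package such as scikit-bio. The toy profiles below are invented for illustration.

```python
# Sketch of per-pair replicate agreement: Spearman correlation and Bray-Curtis
# dissimilarity on the features quantified in both runs.
import numpy as np
from scipy.stats import spearmanr

def bray_curtis(u: np.ndarray, v: np.ndarray) -> float:
    return np.abs(u - v).sum() / (u + v).sum()

def replicate_agreement(profile_a, profile_b):
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    keep = ~(np.isnan(a) | np.isnan(b))          # drop features missing in either run
    rho, _ = spearmanr(a[keep], b[keep])
    return rho, bray_curtis(a[keep], b[keep])

rho, bc = replicate_agreement([10, 5, 0, 2, np.nan], [12, 4, 1, 2, 7])
print(f"Spearman rho = {rho:.2f}, Bray-Curtis = {bc:.2f}")
```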
For the DDA-MS–based software tools MetaLab and ProteoStorm, six DDA-MS raw data files were used for peptide and protein quantification or identification. On the other hand, in the case of DIA-MS–based software tools glaDIAtor and metaExpertPro, these same six DDA-MS data sets were employed for spectral library generation. Subsequently, peptide and protein quantification were performed using DIA-MS raw data and the generated spectral library ( B ). We compared DDA-MS–based peptide identifications among metaExpertPro, glaDIAtor, MetaLab, and ProteoStorm. For the MetaLab software, we used version 2.3.0, selecting the MaxQuant workflow. metaExpertPro exhibited the highest peptide identifications in the spectral library (30,155) among the compared tools, surpassing glaDIAtor (19,371 peptides), MetaLab (24,557 peptides), and ProteoStorm (11,226 peptides) in the spectral library. Despite the variations in peptide identification counts, metaExpertPro exhibited substantial overlap with other tools. It identified 16,580 peptides shared with glaDIAtor, 20,415 peptides shared with MetaLab, and 9384 peptides shared with ProteoStorm. These shared peptides accounted for 85.6%, 83.1%, and 83.6% of the total peptides identified by glaDIAtor, MetaLab, and ProteoStorm, respectively ( C and ). Additionally, metaExpertPro exclusively identified 5368 peptides in the spectral library. Next, we compared the DIA-MS–based quantification of metaExpertPro and glaDIAtor. To ensure a fair comparison, both software tools were set to DDA-assisted DIA mode, guaranteeing identical raw data input for the analysis. Using metaExpertPro, we measured more than two-fold peptides (mean ± SD = 16,971 ± 3315 vs 6918 ± 1456) and six-fold protein groups (mean ± SD = 5368 ± 885 vs 812 ± 218) compared to glaDIAtor ( D and ). Over half of all the peptides (59%) and protein groups (80%) were only detected by metaExpertPro. 32% of the peptides and 16% of the protein groups were quantified by both workflows. Only 8% of the peptides and 4% of the protein groups were quantified by glaDIAtor only ( E ). In the comparison of peptide and protein abundance between the two workflows, we observed a relatively high correlation in the abundance of peptides and protein groups quantified by both metaExpertPro and glaDIAtor (median r Spearman = 0.79 and 0.63) ( F ). Furthermore, the abundance of peptides and protein groups exclusively detected by metaExpertPro was significantly lower than those identified by both workflows ( G and ). These findings suggest that integrating FragPipe and DIA-NN excels in identifying low-abundance peptides and protein groups in metaproteomics analysis. To further verify the confidence of the peptides quantified only by metaExpertPro compared to glaDIAtor, we inspected the probability, the number of fragments, the b/y ion intensity ratio, and the spectra of these peptides. Among the 30,155 peptides identified in the metaExpertPro spectral library, 13,575 peptides were uniquely identified by metaExpertPro, while 2791 peptides were uniquely identified by glaDIAtor. We firstly evaluated the accuracy of the 13,575 peptides in the metaExpertPro spectral library, confirming their reliability. Remarkably, all these peptides exhibited peptide probability values of 0.9963 ± 0.12 (median ± SD). The probability values were posterior probability scores determined by PeptideProphet, with higher values indicating greater confidence . Therefore, our results indicated the high confidence in the peptide-spectrum matches of metaExpertPro. 
The median number of fragments matched for unique peptides identified by metaExpertPro was 14 ( H ). Notably, among the metaExpertPro unique peptides, 99.6% displayed two-sided fragment types, while only 54 peptides were identified as one-sided. Furthermore, in ion trap mass spectrometry, the intensities of y-ions are typically approximately twice those of their corresponding b-ions . The median ratio of intensities between y-ions and their corresponding b-ions was 1.6 ( H ). To visually showcase the qualitative accuracy of the peptide identifications in the metaExpertPro spectral library, we obtained the DDA MS/MS spectra of the top 20 lowest-abundance peptides. All 20 peptide spectra can be identified with at least eight fragments containing both y ions and b ions. Most of the high-intensity peaks in the spectra can be matched to fragments, and there was a large dynamic range between high- and low-intensity fragments. In addition, the intensity of y ions is higher than that of b ions . These criteria are in line with the manual assessment of high-quality peptide segments , which indicated the reliability and precision of the identified peptides in the spectral library . Collectively, these findings strongly support the high quality and reliability of the peptides exhibiting relatively low abundance. Next, we conducted a comparison of the running times for metaExpertPro and glaDIAtor on an AMD EPYC 7502 32-Core Processor with 512 GB RAM using the PXD008738 dataset. With 10 threads, glaDIAtor took approximately 21.1 h for DDA-MS analysis, while metaExpertPro required approximately 17.4 h when the database was split into 20 subset databases. For DIA-MS analysis, glaDIAtor took around 23 min per file, while metaExpertPro completed the analysis in just 1 min per file ( I ). Considering that the number of DDA-MS raw data files is usually less than 100, while a high-throughput project may involve thousands of DIA-MS raw data files, metaExpertPro is well-suited for high-throughput metaproteomic analysis by leveraging the speed advantages of FragPipe and DIA-NN software.
Benchmark Test of Protein Group Identifications of metaExpertPro, glaDIAtor, MetaLab
We further investigated the accuracy of protein groups identified by metaExpertPro, glaDIAtor, and MetaLab using benchmark tests. We initially assessed the factual FDR of protein groups in the spectral library using the published dataset of HeLa cells . Briefly, the DDA-MS data of the HeLa cells were searched against the human protein database (Swiss-Prot, dated 20211213) supplemented with 0×, 1×, 10×, 100×, and the entire mouse microbiome catalog sequences (∼2.6 million proteins), respectively ( A ). The factual FDR is defined as the bacterial and contaminant hits divided by all the identified hits. Using metaExpertPro, the factual FDR was extremely low (0.017) when searching against the human protein database only (benchmark standard) ( B and ). The count of human protein groups reached 5417 ( B and ), surpassing the originally published result of approximately 5000 human protein groups based on a single-step search using MaxQuant software . When increasing the microbial sequences in the human protein database, the factual FDRs remained well-controlled (FDRs = 0.024–0.031), and the count of true human protein groups showed a slight decrease compared to the benchmark result (5068 with the database supplemented with all bacterial sequences vs. 5417 with the human protein database only) ( B and ).
glaDIAtor identified a comparable number of protein groups (5395 human protein groups in the benchmark standard) as metaExpertPro, but metaExpertPro had a lower factual FDR (<3.5% in metaExpertPro vs. <5% in glaDIAtor) ( B ). MetaLab achieved a comparable factual FDR with metaExpertPro (∼3% with the database supplemented with all bacterial sequences), but it identified about 11% fewer protein groups (4742 human protein groups in the benchmark standard) than metaExpertPro. To evaluate the ability of metaExpertPro to maintain a low protein-level FDR with larger sample sizes, we extended the number of DDA-MS raw data files to 255, including 100 pancreas tissue samples and 155 thyroid tissue samples (IPX0001400000) . These raw data were then searched against the human protein database supplemented with 0×, 1×, 10×, and 100× mouse microbiome catalog sequences ( A ). The factual protein group FDRs remained below 5% when adding 0×, 1×, or 10× mouse protein sequences (∼2.6 million proteins) ( B and ). However, when searching against 100× mouse protein sequences, the protein group FDR reached 5.4%. This suggests that controlling the factual protein group FDR becomes challenging when both the sample size and the unmatched protein sequences in the database increase in metaExpertPro. To gain insights into real-life scenarios of metaproteomics studies, we conducted two additional benchmark tests to identify FP microbial proteins from microbiota mixtures. In the first test, we used the "equal protein amount" (P) dataset ( PXD006118 ) and searched it against an MG database supplemented with varying numbers of human gut microbiota species protein databases (5, 16, 32, 48) using metaExpertPro, glaDIAtor, and MetaLab ( C ). The results showed that metaExpertPro identified the most protein groups in all database searches. When fewer protein sequences from human gut microbial species (≤16 species) were added to the database, all software achieved a low factual FDR (<3%). However, as the number of protein sequences from other species increased, only metaExpertPro maintained a factual FDR of less than 4%, while the factual FDR of the other software tools increased significantly ( D and ). In another microbiota benchmark test, we added 0×, 1×, 5×, and 10× IGC+ protein sequences (10,352,085) to the MG database ( C ). Remarkably, we consistently achieved factual protein group FDRs below 5%, except for the 10× IGC+ benchmark test, which had a factual FDR of 5.8% ( D and ). These results indicate the robustness of metaExpertPro in maintaining a low protein-level FDR in challenging scenarios. In conclusion, by integrating FragPipe, the metaExpertPro workflow effectively maintains both a low factual FDR and high-sensitivity identification at the protein group level during spectral library building.
Taxonomic Accuracy Estimation of metaExpertPro
Determination of taxonomic annotation and biomass contributions is another challenge due to the large number of homologous protein or peptide sequences derived from hundreds of closely related species. Thus, we next estimated the taxonomic accuracy at genus and species levels using two artificial bacterial community datasets. The first dataset, referred to as "12-mix data," consists of a mixture of 12 different bacterial strains isolated from fecal samples of three human donors, as published by Pietilä et al . ( A ). The second dataset, called "CPU data," was generated from synthetic communities comprising 32 organisms with "equal cell number" (C), "equal protein amount" (P), and "uneven" (U) compositions, as published by Kleiner et al . ( B ).
We first searched the 12-mix data against the IGC database of the human gut microbiome and the CPU MS data against the matched MG database using the metaExpertPro workflow. Then, we calculated the TP, FP, false negative (FN), and F-score (the harmonic mean of precision and recall) at genus and species levels. When filtering out the taxa annotated by only one peptide, we obtained a relatively high true positive rate (TPR) (8/10) and a low false negative rate (FNR) (2/10) at the genus level using the 12-mix dataset. But we also obtained a high FDR (10/18–11/19) and thus a relatively low F-score (average of 0.56) at the genus level . At the species level, because of the decrease of TPR and increase of FNR and FDR, the F-score further decreased to 0.26 . The average F-scores of the CPU data were 0.73 and 0.40 at the genus and species level, respectively, outperforming the 12-mix data. Interestingly, the numbers of FP taxa in "uneven" samples were extremely low (4–5), resulting in high F-scores (0.84–0.86) at the genus level . Overall, however, these F-scores were relatively low. Thus, we next investigated the impacts of the spectral count of peptides, the peptide length, and the number of peptides corresponding to the taxa on the TP and FP identifications at both genus and species levels ( , A and B ). The data showed that, while all these three factors exhibited significant differences between the TP and FP identifications, the number of peptides corresponding to the taxa displayed the highest difference ( , A and B ). After checking the peptide count distribution of TP and FP taxa ( , C and D ), we filtered the number of peptides corresponding to taxa at the thresholds of 1, 2, 3, 5, 10, 15, and 20, respectively, and recalculated the TP, FP, FN, and F-score. The data showed that filtering the taxa with at least five peptides led to the highest F-scores (C: 0.90; P: 0.85; U: 0.90) at the genus level ( , C and D and ) in the C, P, and U datasets. This resulted in a high TPR (C: 15/17; P: 15/17; U: 17.25/20), low FNR (C: 2/17; P: 2/17; U: 2.75/20), and low FDR (C: 1.5/16.5; P: 3.5/18.5; U: 1/18.25). However, in the 12-mix dataset, filtering at least three peptides led to the highest F-score (0.73) at the genus level. At the species level, we also obtained the highest F-score with the threshold of five peptides. But at the species level, the F-scores were still relatively low in the two datasets (0.44–0.55) ( , E and F and ). The true quantitative information of the microorganisms in the CPU dataset allowed us to investigate the accuracy of the relative abundance of the taxa calculated by the metaExpertPro workflow. With a threshold of five peptides, a relatively high correlation between the true protein biomass of genera and the metaExpertPro results was observed (r Spearman = 0.8, 0.73, and 0.82) in the C, P, and U datasets ( E ). As expected, the correlations with the true cell numbers of taxa were relatively low (r Spearman = 0.63, 0.58, and 0.52 for the C, P, and U datasets, respectively) . The consistency between the true protein biomass of taxa and the metaExpertPro results at the species level was relatively low (r Spearman = 0.2, 0.27, and 0.35) in the C, P, and U datasets ( G and ). Taken together, we found that filtering the taxa with at least three to five peptides led to the highest F-score at genus and species levels, and metaExpertPro achieved high accuracy in both diversity and biomass at the genus level. The relatively low accuracy at the species level might be due to the use of the Unipept-based taxonomy annotation.
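The threshold-and-F-score evaluation described above can be sketched as follows, assuming the known community composition is available as a set of taxa and the measured result as a taxon-to-peptide-count mapping (the names and counts below are invented for illustration).

```python
# Sketch: filter detected taxa by peptide support, then compute TP/FP/FN and the
# F-score (harmonic mean of precision and recall) against the known composition.
def fscore_at_threshold(peptides_per_taxon: dict, truth: set, min_peptides: int = 5):
    detected = {t for t, n in peptides_per_taxon.items() if n >= min_peptides}
    tp = len(detected & truth)
    fp = len(detected - truth)
    fn = len(truth - detected)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return tp, fp, fn, round(f, 2)

truth = {"Bacteroides", "Blautia", "Faecalibacterium"}
counts = {"Bacteroides": 40, "Blautia": 6, "Faecalibacterium": 3, "Escherichia": 9}
for threshold in (1, 3, 5):
    print(threshold, fscore_at_threshold(counts, truth, threshold))
```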
As a peptide-centric taxonomic annotation software, Unipept depends on taxon-specific peptides to identify taxa. However, the number of taxon-specific peptide sequences steadily decreases from higher to lower taxonomic rankings, with a particularly large drop between genus and species levels . In addition, we found that a species named Pseudomonas pseudoalcaligenes is not present in the NCBI taxonomy database ( E and G , marked in gray), which may lead to FN taxa. Nevertheless, Unipept is still the preferred software for taxonomy annotation in the absence of matched MG data according to the previous study . Here, we showed that metaExpertPro integrated with Unipept can achieve high accuracy in the relative abundance estimation of genera ( E and ).
Negligible Effects of Public Gut Microbial Gene Catalog Databases on DIA-MS-Based Proteome Measurements
Three types of protein databases are commonly used in gut microbiota metaproteomic studies: well-annotated public gut microbial gene catalog databases (e.g., the IGC of the human gut microbiome and the UHGP catalog ), protein sequences predicted from metagenome data of matched samples, and databases merging the above two types. To evaluate the impacts of databases on the peptide identifications in spectral library generation, we compared the peptide numbers in the five spectral libraries based on IGC+ , UHGP-90 (90% protein identity), the matched MG protein catalog database, and their merged databases (MG_IGC+ and MG_UHGP-90), using 90 min gradient DDA-MS acquisition on timsTOF Pro of the 62 human fecal samples mentioned above ( A ). To trim redundant protein sequences from the combined databases, we employed cd-hit (version 4.8.1) to eliminate sequences with ≥95% identity and ≤5 amino acids. Because the IGC, UHGP, and MG databases contain some protein sequences with identity greater than 95%, after merging with cd-hit, the sizes of the IGC and UHGP databases are smaller than their original sizes. The data showed that the spectral library based on the IGC+ database identified the most peptides (284,681), followed by the MG_IGC+ database (273,779), MG_UHGP-90 (273,338), UHGP-90 (271,751), and MG (261,986) ( B and ). More specifically, 57.0% (194,485) of the peptides were commonly identified by all the spectral libraries. The spectral library based on MG contained the most unique peptides (21,296) ( C and ). This may be because the MG database has not undergone the removal of redundant protein sequences using cd-hit. These redundant sequences might lead to FP identifications, ultimately resulting in the identification of the highest number of unique peptides. The identification rate of the IGC+ spectral library was significantly higher than that of the other four databases. The average of the identification rates based on the five databases was 30.6 to 31.8% ( D and ). Overall, we found that in the spectral library generation step of metaExpertPro, public gene catalog databases outperformed the matched metagenome database in terms of peptide identification. A similar conclusion has been proposed by Zhang et al. using MetaPro-IQ . We further investigated the impacts of different public gene catalog databases on 60 min DIA-MS–based proteome measurements using two public gut microbial gene catalog databases (IGC+ and UHGP-90). High mapping ratios were obtained at the COG (median of 95.3% and 95.5%), KO (median of 76.1% and 76.7%), and taxonomy (median of 87.5% and 87.6%) levels with the two databases . The mapping ratio at the phylum level was comparable to the results of six human fecal data files analyzed by glaDIAtor (∼70%).
But the mapping ratio was less than that of glaDIAtor at the genus level (∼18% vs. ∼40%), which may be because we used a stringent taxonomy filtering criterion of at least five peptides per taxonomy to ensure the accuracy of identification. Next, we compared the richness per sample at eight levels and observed no significant differences between the two databases at all levels ( A and ). At the peptide, COG, and KO levels, we also observed a high proportion of overlapping features (77–92%) between the two databases ( B ). 84% of the genera and 86% of the species were identified by both databases, showing a high degree of consistency. The taxonomic and functional profiles identified by the two databases were also highly similar ( C and ). In detail, at the taxonomic level, most of the peptides (99.4%) were assigned to the four major phyla of human gut microorganisms characterized by MG data , namely Bacillota (∼60%), Bacteroidota (∼30%), Actinomycetota (∼9%), and Pseudomonadota (∼1%). Also, the profiles of taxa were highly similar to those obtained by glaDIAtor (∼60% Bacillota, ∼10% Bacteroidetes, ∼7% Actinomycetota, and ∼0.5% Pseudomonadota). At the functional level, the largest functional categories included G 'carbohydrate metabolism' (∼18%), J 'translation' (∼16%), and C 'energy metabolism' (∼10%), which was in line with previous studies of human fecal metaproteomes ( C and ). The abundance of human protein groups, microbial functions, and taxa also showed high correlation (median of pairwise Spearman correlation coefficients = 0.95–0.97) between the two databases ( D and ). Taken together, these results suggest negligible effects of public gut microbial gene catalog databases on DIA-MS–based quantification at the peptide, functional, or taxonomic levels. Therefore, for the analysis of human gut microbial samples, matched MG sequencing may not be required for metaExpertPro, and the results generated by metaExpertPro based on public databases could be directly comparable.
metaExpertPro Analysis Revealed the Functions Associated with DLP and the Potential Interactions Between the Microbiota and the Host
Dyslipidemia (DLP) is a disorder of lipid metabolism characterized by high levels of LDL-cholesterol and/or triglycerides and low HDL-cholesterol levels, which is considered a high-risk factor for cardiovascular disease . However, the real functions of the microbiota associated with DLP are still unclear. The 62 GNHS subjects mentioned above included 31 subjects without DLP and 31 subjects with DLP. Here, we performed metaproteomic analysis on the fecal samples from these subjects to characterize the changes of microbial taxa, functions, and human protein groups in DLP. In total, we quantified 55,573 microbial protein groups and 993 human protein groups. The microbial protein groups were annotated as 2347 COGs and 2469 KOs. The microbial peptides were annotated as 106 genera and 172 species. About 87 to 97% of the identified protein groups, functions, and taxa were present in both the non-DLP and DLP groups ( A ). Two ( Olsenella and Cloacibacillus ) of the six genera uniquely identified in the DLP group have previously been reported to show a positive association with serum lipids or obesity in mice and metabolically unhealthy obese human individuals. Among the eight genera uniquely identified in the non-DLP group, three ( Enterococcus , Lactococcus , and Turicibacter ) have been reported to exhibit a negative association with DLP and obesity in mice. Enterococcus , a well-known probiotic, has been shown to alleviate obesity-associated DLP in mice .
Lactococcus , a potential antihyperlipidemic probiotic , is also linked to insulin resistance and systemic inflammation, exerting an antiobesity effect . Turicibacter is markedly reduced in mice fed a high-fat diet. A total of 56 COGs, 3 species, and 18 human proteins were significantly associated with DLP using a general linear model ( p -value <0.05 and | beta coefficient | > 0.2) . The Wilcoxon rank sum test was used to further verify the associations. The data showed that 30 of the associated microbial COGs were significantly differentially expressed between the two groups (Wilcoxon rank sum test, p < 0.05) . Functions related to "Energy production and conversion" (two COGs in category C), "Coenzyme transport and metabolism" (one COG in category H), "Lipid transport and metabolism" (two COGs in category I), "Transcription" (two COGs in category K), "Replication, recombination and repair" (one COG in category L), "Posttranslational modification, protein turnover, chaperones" (one COG in category O), "Intracellular trafficking, secretion, and vesicular transport" (one COG in category U), and "Mobilome: prophages, transposons" (one COG in category X) were significantly increased in the DLP group, while functions related to "Cell cycle control, cell division, chromosome partitioning" (one COG in category D), "Amino acid transport and metabolism" (two COGs in category E), "Lipid transport and metabolism" (one COG in category I), "Inorganic ion transport and metabolism" (two COGs in category P), "Signal transduction mechanisms" (two COGs in category T), "Intracellular trafficking, secretion, and vesicular transport" (one COG in category U), and "Defense mechanisms" (one COG in category V) were significantly decreased in the DLP group ( B and ). The results indicated an enhancement in energy production, conversion, lipid transport, and metabolism functionality in the gut microbiota of DLP patients. The increase of the functions in DNA repair pathways, such as uracil-DNA glycosylase functions, was consistent with the metaproteomic results in pediatric inflammatory bowel disease patients . Defects in human amino acid transporters are linked to inherited metabolic disorders . In this study, we observed a reduction in amino acid transport and metabolism within the human gut microbiota. This finding suggests that potential drug targets could be focused on microbial proteins related to amino acid transport. We also found that the functions related to bacteria-secreted protein toxins, such as the biopolymer transport protein ExbD and the WXG100 family proteins YukE and EsxA, were downregulated in the DLP group ( B and ). One species, Blautia luti , was significantly altered in DLP ( C ). Both the species and the corresponding genus have been reported to be associated with metabolic disorders including obesity , type 2 diabetes, or hypercholesterolemia . One benefit of metaproteomic analysis is the ability to explore the interactions between host proteins and the microbiota. Thus, we analyzed differentially expressed human proteins between the DLP and non-DLP groups using the Wilcoxon rank sum test. We identified six significantly differentially expressed human proteins ( p < 0.05). Only one human protein, the fibrinogen gamma chain (FIBG), was downregulated in the DLP group. All other human proteins were upregulated in the DLP group ( D and ).
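A minimal sketch of the two-step screen described above, a linear-model association (reporting the beta coefficient and p-value) followed by a Wilcoxon rank-sum verification, is shown below using statsmodels and SciPy. The toy abundances are roughly standardized so that the |beta| > 0.2 cutoff is meaningful; the covariate handling in the original analysis may differ.

```python
# Sketch: per-feature association with DLP status via a linear model, verified
# with a Mann-Whitney (Wilcoxon rank-sum) test between groups. Toy data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import mannwhitneyu

def screen_feature(abundance: pd.Series, dlp: pd.Series):
    """abundance: one COG/KO/taxon per call; dlp: 0/1 group labels per sample."""
    X = sm.add_constant(dlp.astype(float))
    res = sm.OLS(abundance.astype(float), X).fit()
    slope, p_lm = res.params.iloc[1], res.pvalues.iloc[1]
    _, p_rank = mannwhitneyu(abundance[dlp == 1], abundance[dlp == 0])
    hit = (p_lm < 0.05) and (abs(slope) > 0.2) and (p_rank < 0.05)
    return slope, p_lm, p_rank, hit

rng = np.random.default_rng(0)
dlp = pd.Series([0] * 31 + [1] * 31, name="dlp")
cog = pd.Series(rng.normal(0, 1, 62) + 0.8 * dlp, name="COG_example")  # standardized-like
print(screen_feature(cog, dlp))
```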
Four human proteins including transthyretin, heat shock protein HSP 90-alpha (HS90A), small ribosomal subunit protein (RACK1), and peroxiredoxin-4 (PRDX4) have been reported to be related to obesity, diabetes, and hyperlipidemia based on serum or tissue samples . However, it has not been reported that the dysregulation of these human proteins in human feces is also associated with DLP. Next, we analyzed the co-expression between the six human proteins and the 30 differentially expressed COGs. With a threshold of | r Spearman | ≥ 0.2 and Benjamini-Hochberg–adjusted p -value <0.05, we screened out 22 co-expressed proteins and COGs ( E and ). The human protein TTR exhibited the strongest correlation with microbial COGs. Three positively correlated COGs were COG1595 (related to transcription), COG3516 (component TssA of the type VI protein secretion system), and COG0644 (Dehydrogenase (flavoprotein)). The other eight negatively correlated COGs were COG0600 (ABC-type nitrate/sulfonate/bicarbonate transport system), COG3428 (membrane protein YdbT), COG3706 (Two-component response regulator, PleD family), COG4842 (secreted virulence factor YukE/EsxA, WXG100 family), COG4991 (uncharacterized conserved protein YraI), COG3887 (Cyclic di-AMP phosphodiesterase GdpP), COG2203 (GAF domain), and COG0710 (3-dehydroquinate dehydratase). Notably, the microbial function COG4842, a secreted virulence factor YukE/EsxA of the WXG100 family, exhibited negative correlations with two upregulated human proteins (PRDX4 and TTHY), indicating its significant role in the interaction with human proteins in the context of DLP. Taken together, the metaExpertPro-based metaproteomic analysis on DLP patients uncovered the alterations of microbial functions in DLP and the potential interactions between the microbiota and the host. Due to the complexity of the samples, metaproteomic data analysis has inherent limitations of high dependency on databases, low efficiency of peptide identification rate, the relatively low resolution of taxonomic identification, and large computer memory consumption. In this study, to solve the problems of low-efficiency identification rate and memory consumption, we used a library-based database search strategy in metaExpertPro; therefore, our approach cannot eliminate the database dependency. FDR control poses another challenge in metaproteomics analysis due to large number of homologous bacterial sequences in the databases. In this study, benchmark tests using HeLa cell and bacteria mixture samples showed a low factual FDR (<6%). However, as the sample size and unmatched protein sequences in the database increase, controlling the factual protein group FDR becomes more challenging. Therefore, there is still a need for algorithms that can efficiently distinguish TP spectra from highly similar spectra and employ stricter FDR filtering methods to ensure more accurate identifications. Additionally, synthetic microbiome peptide datasets can provide more reliable “ground truth” for accurate FDR estimation. Therefore, synthesizing peptides from dominant human gut microbial species to construct a robust "ground truth" dataset is necessary for this purpose. Although our data showed negligible effects on the metaproteomic results based on two public gut microbial gene catalog databases and 62 human fecal samples, one cannot assume similar results can also be obtained with other gene catalog databases or other types of metaproteomic samples, such as soil microbiota and marine microbiota. 
Moreover, the Unipept-based taxonomic annotation still limits the resolution of accurate taxonomy identification at the species level due to the limited number of taxonomy-unique peptides. If matched MG data are available, integrating MG taxonomic information with Unipept has the potential to increase the number of taxonomy-unique peptides. This integration limits the potential species to those specific to the samples, leading to a higher count of taxonomy-unique peptides than considering all species from the NCBI taxonomy database. Thus, a novel taxonomic annotation software tool integrating MG taxonomic information and Unipept has the potential to enhance the resolution of accurate taxonomy identification. Additionally, it is important to note that we did not observe any significantly associated microbial taxa, functions, or human proteins after correcting for multiple testing. This can be attributed to the limited number of samples used in our study, which consisted of 31 samples from individuals with DLP and 31 samples from non-DLP individuals. In order to obtain more accurate and reliable results, a larger sample size is required for future studies. Finally, this study and most published metaproteomic studies only focus on the proteins expressed by the host and microbiota; however, proteins from foods and the environment may also play important roles in the host's health and the metabolism of the microbiota. Therefore, despite these research advances, there is still much to discover in the metaproteome of the human gut. In summary, the metaExpertPro workflow provides a computational pipeline for metaproteomic analysis and shows a high degree of accuracy, reproducibility, and proteome coverage in the quantification of peptides, protein groups, functions, and taxa in the human gut microbiota. The workflow is established by integrating high-performance proteomic analysis tools and stringent filter criteria to ensure both in-depth and highly accurate measurements. The negligible effects of databases on the measurement of peptides, functions, and taxa indicate that matched MG databases are not indispensable for metaExpertPro-based metaproteomic analysis, thus enabling direct comparison of metaproteomic data generated by metaExpertPro based on different public databases. The custom code and step-by-step instructions are provided at https://github.com/guomics-lab/metaExpertPro . T. G. is a shareholder of Westlake Omics Inc. The remaining authors declare no competing interests.
Histology and Immunohistochemistry of Radial Arteries Are Suggestive of an Interaction between Calcification and Early Atherosclerotic Lesions in Chronic Kidney Disease | 435c6fee-ab1b-48b8-898a-1448c29243cc | 8619577 | Anatomy[mh] | The development of atherosclerosis, arteriosclerosis, and vascular calcification can represent the most important complications of chronic kidney disease and the main reason for the dramatic increase in cardiovascular mortality [ , , ]. Compared to the general population without CKD, patients with CKD have a 20-fold higher prevalence of early arterial atherosclerosis . Studies show that both accelerated atherosclerosis and increased incidence of cardiovascular disease are associated with a reduction in the glomerular filtration rate . The pathogenesis of their development and progression has been the subject of extensive research for more than 150 years. The theory of minor endothelial damage leading to the onset of atherosclerosis was initially developed as early as 1852, while the role of chronic inflammation and LDL oxidation was emphasized some years later . After extensive research, the prevailing pathogenic mechanisms of atherosclerosis are anticipated to be instigated by the activation of endothelial cells, including the expression of adhesion molecules and the accumulation of platelets and monocytes, which migrate to the subendothelial region and transform into macrophages [ , , ]. Early histologic lesions in atherosclerosis are characterized by endothelial cell damage, followed by the migration of vascular smooth muscle cells to the intima, where they proliferate and produce extracellular matrix proteins, leading to diffuse intimal thickening. The transformation of vascular smooth muscle cells into osteoblasts, chondrocytes, and adipocytes, and the infiltration by activated macrophages and lymphocytes, are two parameters which result in the hypertrophy of the media . Although atherosclerosis progresses gradually with age and is, thus, often described as a “normal” phenomenon, there are chronic diseases which can accelerate its progression, and chronic kidney disease is the most common condition, followed by accelerated atherosclerosis [ , , ]. Chronic inflammation and immunological and metabolic disorders in association with physical factors, including hypertension and shear stress, seem to aggravate endothelial damage and force its progression . Meanwhile, however, vascular calcification, another phenomenon directly driven by metabolic factors in CKD, seems to take place simultaneously. Vascular calcification, as a result of mineral deposition on the vascular wall, is not a passive consequence of aging, but, rather, it seems to progress as an active phenomenon determined by metabolic disturbances, hemodynamics, and also by genetic factors . Metabolic disorders common in CKD, including hyperparathyroidism or calcium and phosphate abnormalities, act in collaboration with inflammatory and immunological alterations in these patients, resulting in a progressive process of vascular calcification. Previous studies have defined two distinct types of vascular calcification based on their location: in the first type, minerals are deposited in the intima which is associated with the development of atherosclerotic lesions, leading to atherosclerotic calcification, while the second type, more common in CKD patients, is characterized by the calcification of the medial layer [ , , ]. 
The pathogenesis of calcification in CKD patients resembles that of atherosclerosis. Both phenomena are consequences of renal dysfunction, the accumulation of uremic toxins, oxidative stress, proinflammatory cytokines, and the activation of several pathways. Vascular smooth muscle cells are stratified, as well as lymphocytes and macrophages, leading to endothelial damage and the progressive development of secondary fibrosis and calcification [ , , ]. Several factors are responsible for the regulation of the course of vascular calcification, such as the receptor activator of nuclear factor-kB (RANK) and its ligand (RANKL) acting as accelerators, or the matrix carboxyglutamic acid protein (MGP) and osteoprotegerin (OPG) behaving as inhibitors [ , , , , , , , , , , , , , ]. In the present study, we describe the histological changes to the vascular wall of radial arteries in CKD patients, including the assessment of cell proliferation and vascular calcification. Furthermore, we evaluate the local mechanisms which seem to participate in this process, regarding the expression of calcification regulators and the activation of immune mechanisms and pro-inflammatory pathways.
Patients with CKD stage V, undergoing formation of radiocephalic arteriovenous fistula (RC-AVF), were included in the study. Patients were divided into two groups: group A included pre-dialysis CKD stage V patients being prepared to start on hemodialysis (HD), and group B included CKD patients who had already been on HD for at least 2 years. Inclusion criteria were: all patients were Greek-Caucasians, age 25–80 years, with gradual deterioration of renal function up to stage V or under dialysis for more than 2 years. All patients should have been under close follow up for at least 3 years prior to enrolment, with adequate control of diabetes, hypertension, dyslipidemia, secondary hyperparathyroidism, and anemia. The vessels used for RC-AVF creation should be intact, meaning that patients should not have had a previous attempt at a RC-AVF on the same vessels or the same limb. Exclusion criteria were: active infection, malignancy, autoimmune or chronic inflammatory disease, previous corticosteroid or immunosuppressive treatment during the last 12 months, and, finally, a prior attempt at RC- or brachiocephalic (BC)-AVF creation on the same limb. The control group included healthy volunteers of similar age, sex, and ethnicity, who agreed to have a biopsy from the radial artery during an orthopedic procedure. The control group had no past medical history, or any other surgery in the same limb. All patients were informed and signed the consent form. The histological characteristics, inflammatory activation, and immunophenotypic alterations of the radial artery wall were estimated, and their association with the severity of calcification and atherosclerosis was studied.
2.1. Histology of the Radial Artery
Radial artery biopsy, obtained during the RC-AVF creation or an orthopedic operation, should include a cross-section of the vessel. The biopsy sample was fixed in 10% formaldehyde, dehydrated in alcohol and xylene solutions and embedded in a paraffin block until evaluation. For the optical microscopy evaluation, the paraffin block was cut into 4 μm sections, deparaffinized, dehydrated, and stained with hematoxylin-eosin (HE) and GT for the assessment of general morphology, evaluation of cell structures and morphometric analysis, and with von Kossa and Verhoff's elastic methods for the assessment of calcifications. The stained sections were examined under a ZEISS Axiolab 5 microscope with an Axiocam 208 color microscope camera. Vascular calcification: The degree of mineralization was classified semi-quantitatively, on a scale of 0 to 2, where 0 represented no mineral deposition, 1: dispersed concretions, and 2: granular diffuse mineral deposits. Immunohistochemical analysis: Immunohistochemical analysis was performed to evaluate (i) inflammatory infiltration by monocytes-macrophages, T and B lymphocytes, (ii) phenotypic changes of endothelial cells and smooth muscle cells, and (iii) expression of factors associated with calcification on the vascular wall.
The monoclonal antibodies used were: CD3, DAKO monoclonal mouse antibody, clone F7.2.38, code number M7254, dilution 1/100; CD20, THERMO monoclonal mouse antibody L26, dilution 1/100; CD68, THERMO monoclonal mouse antibody KP1, dilution 1/100; CD34, LEICA monoclonal antibody, dilution 1/100; ASMA, DAKO mAb clone 1A4 and promoters, dilution 1/200; receptor activator of nuclear factor-kB ligand (RANKL), TRANCE/TNFSF/RANKL antibody (12A668), monoclonal antibody, Novus Biologicals, dilution 1/50; matrix carboxyglutamic acid protein (MGP) antibody (OTI11G6), monoclonal antibody, Novus Biologicals, dilution 1/50; osteoprotegerin (OPG)/TNFRSF11B antibody (98A1071), monoclonal antibody, Novus Biologicals, dilution 1/100. Reagents were applied by using a semi-automated system. Primary antibodies were diluted in 1% BSA. Dilutions were arranged and determined after testing a range of dilutions. Horseradish peroxidase (HRP) conjugated antibodies were applied, slides were incubated in 0.3% H2O2 in TBS for 15 min, and, finally, the reaction was developed in diaminobenzidine (DAB). For each antibody there was a positive and a negative control. Morphometric analysis: Morphometric analysis was performed using the ImageJ software (National Institutes of Health) for Windows. On radial artery cross-sections, the lumen (L), intimal (I), and medial (M) areas were estimated, and the luminal/intimal (L/I), luminal/medial (L/M), and intimal/medial (I/M) area ratios were calculated. All measurements were performed on sections stained with GT. Clinical and laboratory assessment: Patients' history, primary disease and comorbid conditions, medication, and clinical examination were recorded based on hospital outpatients' files. Prior to the scheduled day of RC-AVF creation, all patients underwent laboratory examination, including hematological and serum biochemical analyses.
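The morphometric ratios described above follow directly from the ImageJ area measurements; a small sketch with assumed column names and invented values is shown below.

```python
# Sketch: luminal/intimal, luminal/medial, and intimal/medial ratios computed
# from per-section area measurements (toy values, arbitrary units).
import pandas as pd

areas = pd.DataFrame({
    "lumen_area":  [0.90, 1.10],
    "intima_area": [0.35, 0.20],
    "media_area":  [1.60, 1.55],
}, index=["patient_01", "control_01"])

areas["L_I"] = areas["lumen_area"] / areas["intima_area"]   # luminal/intimal
areas["L_M"] = areas["lumen_area"] / areas["media_area"]    # luminal/medial
areas["I_M"] = areas["intima_area"] / areas["media_area"]   # intimal/medial
print(areas.round(2))
```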
2.2. Carotid Artery Intima-Media Thickness (IMT) Assessment
Presence and severity of atherosclerotic lesions in CKD patients were assessed based on the measurement of the intima-media thickness (IMT) of the common and internal carotid arteries on both sides. The measurements were made using an Aloka Sonos SSDE-1700 ultrasound scanner and a 7.5 MHz high-resolution head and were performed by the same radiologist, who was aware of the patients' clinical and laboratory data. The intima-media thickness corresponded to an ultrasound gray zone which does not project into the arterial lumen. The determination of IMT was performed at six points, 0.5, 1, and 2 cm above the bifurcation of the common carotid artery (in the internal carotid artery) and 0.1, 1, and 2 cm below the bifurcation of the common carotid artery, on both sides, in areas without lesions, and the average value of the above measurements was set as the IMT. In patients undergoing dialysis, measurements were performed on a day between dialysis sessions.
2.3. Life Expectancy of RC-AVF
The patients were followed for one year with clinical and laboratory testing every three months. During that period, the clinical and laboratory indices of atherosclerosis and cardiovascular diseases were evaluated and the maturation and complications of arteriovenous fistulas were recorded.
2.4. Statistics
The Statistical Package for Social Sciences (SPSS Inc., Chicago, IL, USA) for Windows, version 25.0, was used for the statistical analysis. p values < 0.05 (two-tailed) were considered statistically significant for all comparisons. Shapiro–Wilk and/or Kolmogorov–Smirnov tests were applied to determine the normality of variables. Normally distributed continuous variables were expressed as mean ± standard deviation, while data from non-parametric variables were expressed as medians and range. Differences between groups were estimated by Student's t-test for independent samples for normally distributed variables, and the Mann–Whitney U test, Wilcoxon signed ranks test, and Kruskal–Wallis H test were used for non-parametric variables. Multiple regression analysis was performed to estimate independent variables correlated with a dependent parameter.
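The test-selection logic described above (normality check first, then a parametric or non-parametric comparison) can be sketched as follows; the IMT values are invented for illustration and do not come from the study data.

```python
# Sketch: Shapiro-Wilk normality check, then independent-samples t-test for
# normally distributed data or Mann-Whitney U test otherwise.
from scipy.stats import shapiro, ttest_ind, mannwhitneyu

def compare_groups(a, b, alpha=0.05):
    _, p_a = shapiro(a)
    _, p_b = shapiro(b)
    if p_a > alpha and p_b > alpha:          # both groups look normal
        _, p = ttest_ind(a, b)
        return "t-test", p
    _, p = mannwhitneyu(a, b, alternative="two-sided")
    return "Mann-Whitney U", p

imt_ckd = [0.60, 0.55, 0.72, 0.48, 0.66, 0.81, 0.59, 0.63]   # toy values (mm)
imt_ctl = [0.26, 0.31, 0.22, 0.29, 0.35, 0.27, 0.30, 0.24]
print(compare_groups(imt_ckd, imt_ctl))
```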
3.5. AVF Survival
At the end of the one-year follow up, 14 (28%) of the AVFs performed had failed. Patients with failed AVF had no significant differences in the frequency of comorbid conditions, such as hypertension, diabetes mellitus, and cardiovascular disease, and also no differences in the severity of inflammatory infiltration, calcification, and calcification regulators. The only parameter which was significantly increased in patients in whom the AVF failed was the I/M ratio, 0.38 ± 0.22 vs. 0.26 ± 0.12, p = 0.04, suggesting that intimal thickening was the main factor regulating the survival of the AVF and implicated in its failure.
3.1. Patients' Characteristics
Fifty patients with chronic kidney disease (CKD) stage V, either pre-dialysis (n = 25) (group A) or on hemodialysis (HD) (n = 25) (group B), were included in the study. Clinical and laboratory characteristics of the patients at the time of AVF formation are depicted on and , respectively. There were no significant differences in age, sex, race, and also in the frequency of dyslipidemia, diabetes mellitus, or smoking habits between patients and controls. IMT was significantly increased in patients compared to controls, 0.6 (0.22–1.2) mm vs. 0.26 (0.18–0.51) mm, respectively, p = 0.003.
Vascular infiltration by CD3(+) and CD20(+) cells was slightly increased in the whole cohort of CKD patients compared to controls, mean rank of 22.49 vs. 19 and 23 vs. 15.837, respectively, but these differences did not reach statistical significance. Similarly, the severity of α-SMA expression was increased in CKD patients, but not significantly, mean rank of 22.97 vs. 16, p = NS. The intensity of CD68(+) cell infiltration was significantly increased in CKD patients compared to healthy controls, mean rank of 23.3 vs. 13.42, p = 0.02. Comparing the expression of the above parameters between group A, group B, and healthy controls, there was a trend toward a gradually increasing concentration of inflammatory cells, though it did not reach statistical significance. summarizes the differences in the severity of infiltrating cells between the three groups.
3.3.1. Differences in the Severity of VC between CKD Patients and Controls
3.3.2. Association between Calcification Regulators, Inflammation and Cellular Activation with VC Intensity
Severity of VC was evident on hematoxylin staining; however, specific evaluation was performed by applying von Kossa and Verhoff's Elastic staining ( ). Thirty-two patients (64%) had positive von Kossa staining, compared to 13 (65%) of controls, p = NS; however, the severity of calcification differed significantly between patients and controls: 16 (32%), 23 (46%), and 11 (22%) of CKD patients were classified as grade 0, 1, and 2, respectively, versus 7 (35%), 13 (65%), and 0 of controls, Chi-square 7.9, p = 0.01. Similarly, based on Verhoff's Elastic staining, 13 (26%), 32 (64%), and 5 (10%) of CKD patients had grade 0, 1, and 2 calcification, respectively, compared to 10 (50%), 10 (50%), and 0 of controls, Chi-square 5.9, p = 0.01. The severity of RANKL, MGP, and OPG expression was estimated in the two groups of patients, and the results were compared to controls. There were significant differences between the whole cohort of patients and controls regarding the expression of RANKL (mean rank 21.56 vs. 8.5, p = 0.006), MGP (mean rank 21.47 vs. 9, p = 0.01), and OPG (mean rank 21.56 vs. 8.5, p = 0.006). Differences in the expression of these parameters between the three groups were also significant and are shown in and .
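As an illustration of the grade-distribution comparison reported above, the following sketch cross-tabulates the von Kossa grade counts given in the text and applies a chi-square test. Python/SciPy is used here purely for illustration; the reported statistic may differ slightly depending on the software and correction settings actually used.

```python
# Sketch of the chi-square comparison of calcification grade distributions
# (von Kossa grade counts taken from the text; illustration only).
from scipy.stats import chi2_contingency

#                 grade 0  grade 1  grade 2
von_kossa = [[16, 23, 11],   # CKD patients
             [7, 13, 0]]     # healthy controls

chi2, p, dof, expected = chi2_contingency(von_kossa)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```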
Expression of the calcification regulators RANKL, MGP, and OPG correlated significantly with the degree of infiltration by CD3(+), CD20(+), and CD68(+) cells, as well as with the expression of CD34(+) and α-SMA(+) cells, as depicted in . A significant correlation was also found between inflammatory infiltration (expression of CD3(+), CD20(+), and CD68(+) cells), cellular activation (CD34(+) and α-SMA(+) cells), and calcification regulators (MGP, RANKL, OPG) and the degree of vascular calcification, as estimated and classified based on Verhoff's elastic and von Kossa staining ( ). In multiple regression analysis, the independent variables for the severity of vascular calcification were the intensity of CD34(+) cells (B = 0.595, p < 0.0001), α-SMA(+) cells (B = 0.454, p = 0.004), and OPG (B = −223, p = 0.01), R² = 0.76, p < 0.0001.
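The multiple regression step reported above can be sketched as follows with statsmodels; the per-patient staining scores are random placeholders, so the coefficients will not reproduce the reported values.

```python
# Illustrative multiple regression: vascular calcification grade regressed on CD34,
# a-SMA and OPG staining intensities (hypothetical placeholder data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
cd34 = rng.integers(0, 4, n)        # semi-quantitative staining scores (0-3)
asma = rng.integers(0, 4, n)
opg = rng.integers(0, 4, n)
vc_grade = np.clip(cd34 + asma - opg + rng.normal(0, 1, n), 0, 2).round()

X = sm.add_constant(np.column_stack([cd34, asma, opg]))
fit = sm.OLS(vc_grade, X).fit()
print(fit.params)                   # unstandardized B coefficients
print(fit.rsquared, fit.pvalues)    # R-squared and per-variable p values
```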
3.4.1. Morphometric Changes of the Radial Artery
3.4.2. Correlation of IMT with the Severity of Inflammation, Endothelial Activation, and Vascular Calcification
Significant differences between patients and controls were observed in the intimal/medial (I/M) and luminal/intimal (L/I) ratios, strongly indicating that intimal thickening predominated in CKD patients, resulting in luminal stenosis ( ). The I/M ratio was significantly increased in CKD patients compared to controls, 0.3534 ± 0.20 vs. 0.1520 ± 0.865, p = 0.003. Accordingly, the L/I ratio in CKD patients and controls was 2.1709 ± 1.568 vs. 4.9958 ± 3.2975, respectively, p = 0.03, whereas the luminal/medial (L/M) ratio was similar, 0.5310 ± 0.2417 vs. 0.7830 ± 0.2044, respectively, p = NS. No significant differences in I/M, L/I, or L/M were evident between group A and group B patients.
CKD patients had significantly elevated IMT compared to controls, 0.6 (0.22–1.2) vs. 0.26 (0.18–0.51), mean rank 21.7 vs. 7.75, respectively, p = 0.003. Significant changes in IMT were evident between controls 0.26 (0.18–0.51), Group A (pre-HD) patients 0.6 (0.22–1.2) and Group B (HD) patients 0.62 (0.32–1.1) ( ). IMT had significant positive correlation with the intensity of aSMA(+) cells, r = 0.3, p = 0.03, CD68(+) cells, r = 0.3, p = 0.03, expression of MGP, r = 0.4, p = 0.007, RANKL, r = 0.4, p = 0.002, OPG, r = 0.5, p = 0.004, and, also, with iPTH levels, r = 0.4, p = 0.02 and triglyceride serum levels, r = 0.5, p = 0.002. The association of IMT levels with the severity of inflammatory infiltration, expression of calcification regulators, and calcification severity is shown on and . In multiple regression analysis however, the only independent factor correlated with IMT levels was the severity of RANKL expression, B = 0.198, R 2 = 0.3, p < 0.0001.
Vascular calcification, as well as the development of atherosclerosis, is a common consequence of chronic kidney disease . Although metabolic and hemodynamic disorders are considered to predominate in the pathogenesis of both conditions, recent accumulating evidence emphasizes the causative role of immune mechanisms. Immunological disorders in CKD are usually the result of impaired renal function per se, which seems to affect both cellular and humoral immunity, leading to alterations in T and B cell phenotype and activation of inflammatory cells, phenomena implicated in the development of cardiovascular disease . In the present study, we evaluated the severity of vascular calcification, as evident in the wall of medium-sized arteries in CKD patients, either at the pre-dialysis stage or after being on hemodialysis for more than three years. We also performed morphometric analysis of the radial arterial wall biopsies to estimate the degree of lumen stenosis and the expansion of the intimal and medial areas. We evaluated the possible implication of local inflammatory and immunological mechanism activation in the atherosclerotic changes and calcification of the vascular walls, and, finally, we assessed the correlation between the atherosclerotic and calcification pathways. The severity of vascular calcification, as estimated by both the von Kossa and VE methods, was significantly increased in CKD patients compared to controls, with no difference between pre-dialysis and hemodialysis patients. Similarly, expression of the calcification regulators RANKL, MGP, and OPG was significantly increased in CKD patients, and the severity of their staining correlated positively with the degree of vascular calcification and with the intensity of macrophage, myofibroblast, T and B cell infiltration, and endothelial cell activation. Interestingly, however, although RANKL, MGP, and OPG were significantly increased in CKD, there were no significant differences in the expression of inflammatory and activation indices, suggesting either that even a trivial activation of macrophages and lymphocytes can stimulate the production of calcification regulators, or that other factors influence this process. The role of macrophages in the progression of vascular calcification is rather complicated, as they can either promote the process, through the release of reactive oxygen species, matrix vesicles, and pro-inflammatory cytokines, or suppress it, through the production of anti-inflammatory factors and differentiation into osteoclast-like cells . Migration of monocytes to the sub-endothelial space seems to be the critical step, followed by their differentiation into dendritic cells and macrophages, which interact with vascular wall cells through the release of pro-inflammatory factors (TNF, IL-1, IL-6). Osteogenic genes, such as runt-related transcription factor 2 (Runx2), OPN, and bone morphogenetic protein 2 (BMP-2), are also expressed and released by macrophages and stimulate the osteogenic process [ , , ]. The significant correlation between RANKL, OPG, and MGP expression and macrophage infiltration described in our study supports this role of macrophages; furthermore, the positive correlation with α-SMA expression indicates a synergistic activity of the RANK/RANKL/OPG pathway with myofibroblast formation and smooth muscle cell calcification. The description of the RANKL/RANK/OPG pathway particularly elucidates the progression of bone metabolism, as well as the vascular wall changes occurring during calcification .
RANKL is a transmembrane protein expressed on T lymphocytes, osteoblasts, endothelial cells, and also on vascular smooth muscle cells at calcification sites . Its soluble form binds to its receptor, RANK, a transmembrane protein expressed on dendritic cells and osteoclasts . Only calcified arteries express RANKL and RANK, stained in areas of calcification, whereas normal tissues express OPG and MGP, a protein protective against calcification that is mainly expressed by vascular smooth muscle cells . OPG is a secreted protein, an "atypical" member of the TNF family, with no transmembrane region and no direct signal transduction properties. OPG acts as a soluble RANKL receptor, inhibiting the RANK–RANKL reaction and thereby inhibiting the calcification process . Interestingly, recent experimental and clinical studies have highlighted the multifactorial effect of OPG, as elevated serum levels appear to inhibit calcification while at the same time being associated with hyperlipidemia, atherosclerosis, and increased cardiovascular risk . In our study, the degree of OPG staining, together with the degree of CD34 and α-SMA expression, were the only independent parameters correlating with the degree of vascular calcification, indicating that endothelial cell activation, myofibroblast formation, and the OPG pathway are mainly implicated in the development of vascular calcification in CKD patients. We also described a close correlation between calcification and atherosclerotic changes in our patients. In most studies, early atherosclerotic lesions are missed, as assessment of atherosclerotic changes is almost exclusively based on imaging methods, which illustrate merely advanced vascular lesions. In the present study, we performed histo-morphometric analysis to assess early atherosclerotic changes, and we also estimated the IMT in order to evaluate the severity of established atherosclerotic lesions. Both the morphometric analysis and the IMT calculation showed a significant expansion of the intimal area, which, according to the increased I/M and reduced L/I ratios described in the morphometric measurements, seems to lead to lumen stenosis and, finally, to the failure of the AVF. Our findings confirm previous studies which showed neointimal hyperplasia as the dominant histologic finding in CKD patients . We did not find any correlation between I/M and IMT. This could be attributed to different pathogenetic pathways being activated in middle-sized arteries, such as the radial artery, where the morphometric analysis was performed, and in large arteries, such as the carotid artery. However, we rather believe that additional parameters may be implicated during the progression of CKD. IMT levels correlated with most inflammatory indices and calcification regulators, the most important factors being the severity of RANKL staining, α-SMA, and CD68 infiltration; however, in multiple regression analysis, the intensity of RANKL was the only independent parameter correlated with IMT, and this finding further supports the correspondence between the calcification and atherosclerotic pathways in our patients. The effect of lymphocytes on the appearance and development of atherosclerosis, although initially disputed, has begun to gain particular interest in recent years . Recent studies have indicated that immune mechanisms, such as activation of complement or of specific T and B lymphocyte subpopulations, play a significant role in the atherosclerotic process .
Different subpopulations of T lymphocytes have different effects on atherosclerosis: Th1 cells, Th17 cells, natural killer T (NKT) cells, and CD28-null cells promote the atherosclerotic process, and some of them are expressed in atherosclerotic plaques . In contrast, Th2 cells, FoxP3+ nTregs, and Bregs exert a protective effect, possibly through the secretion of IL-10, TGF-β, and antibodies against oxidized LDL (anti-oxLDL antibodies) .
In the present study, we performed histo-morphometric analysis to describe and evaluate early vascular lesions in CKD patients, and we subsequently compared the results with those of established imaging methods assessing advanced atherosclerosis. We demonstrated a close relationship between the histopathological pathways of atherosclerosis and vascular calcification in CKD patients. Vascular infiltration by lymphocytes and macrophages, activation of endothelial cells, myofibroblast formation, and activation of the RANKL/RANK/OPG pathway are important to both phenomena, which seem to have a parallel and interactive progression.
The role of the microbiota and metabolites in the treatment of pulmonary fibrosis with UC-MSCs: Integrating fecal metabolomics and 16S rDNA analysis | 06e0fdf4-b378-4a61-bd95-e8d57087956c | 11717254 | Biochemistry[mh] | Pulmonary fibrosis (PF) is a debilitating respiratory condition characterized by progressive lung damage and scarring. Idiopathic pulmonary fibrosis (IPF) is the most prevalent form of PF, and is characterized by irreversible destruction of alveoli, widespread restructuring of lung tissue, and accumulation of the extracellular matrix. The pathogenesis of pulmonary fibrosis involves various factors, including inflammation, epithelial–mesenchymal transition (EMT), and oxidative stress. While pirfenidone and nintedanib have been authorized for IPF treatment, their efficacy has not been significantly impactful. Currently, lung transplantation remains the only viable treatment option for this condition . Consequently, there is a pressing need to explore and develop alternative therapies. Mesenchymal stem cells (MSCs) represent a distinct cellular population characterized by diverse differentiation and self-renewal capacities, and are derived from various tissues including bone marrow, umbilical cord, and adipose tissue . Notably, umbilical cord-derived MSCs (UC-MSCs) exhibit superior expansion and differentiation potential compared with bone marrow-derived MSCs (BM-MSCs) . Existing research has established a correlation between the progression of IPF and the inflammatory response . MSCs have been shown to repair cells and tissues by exerting anti-inflammatory effects and modulating immune cell activities . Additional research has indicated that MSCs play a role in reducing fibrosis in lung fibrosis models by mitigating inflammatory responses and decreasing collagen deposition . Nevertheless, the precise molecular mechanisms through which MSCs alleviate pulmonary fibrosis remain unclear. The gut microbiota, which is the most extensive and populous microbial community within the human body, serves a crucial function in maintaining the intricate equilibrium of the host’s metabolic and immune functions . Recent studies have revealed a correlation between dysbiosis of the gut microbiota and the advancement of IPF . The gut microbiota compartment forms the foundation of immunodynamics and governs the reaction of the lung to inflammation primarily by modulating innate immune cells, adjusting inflammatory cytokine reactions, and reshaping adaptive immunity, thereby potentially disrupting this delicate balance . An increasing body of research has revealed a correlation between lung disease and intestinal disease, leading to the conceptualization of the "gut–lung axis". While previous studies have demonstrated the impact of factors such as diet , on the progression of IPF through the modulation of this axis , the specific mechanism by which UC-MSCs influence intestinal microbial organisms remains unreported. Thus, the aim of this study was to further investigate the relationship between gut microbial organisms and IPF. Numerous histological and microbiological analyses have been widely used in the examination of the gut microbiota in diverse diseases . Nevertheless, the integration of the gut microbiota and metabolomic analysis in the context of IPF remains unexplored in the literature. Thus, the present study aimed to investigate the anti-inflammatory properties of UC-MSCs in a murine model of pulmonary fibrosis, while also evaluating their impact on the gut microbiota and metabolites. 
This research seeks to offer novel perspectives on therapeutic approaches for IPF.
2.1 UC-MSCs administration attenuates symptoms in BLM-induced IPF model mice
2.2 Administration of UC-MSCs abrogates lung injury in BLM-induced model mice
2.3 UC-MSCs suppress inflammatory cytokine release in BLM-induced model mice
2.4 UC-MSCs changed the profiles of GM in BLM-induced model mice
2.5 Composition of the GM is modulated by UC-MSCs in BLM-induced model mice
2.6 Analysis of differences in microorganisms among the three groups
2.7 Identification and screening of differentially abundant metabolites
2.8 Enrichment analysis of differentially abundant metabolite pathways
2.9 Correlation analysis
On the seventh day following BLM administration, the average body weight of the mice in the BLM group was significantly lower than that of the control group . On the ninth day after BLM administration, the weights of the mice in the UC-MSCs intervention groups began to increase, resulting in significantly greater average weights than those of the Model group at the end of the study period. Additionally, survival probability was used to evaluate the impact of UC-MSCs treatment on BLM-induced IPF model mice; Kaplan–Meier survival analysis indicated that intravenous administration of UC-MSCs increased overall survival rates and prolonged median survival in mice with pulmonary fibrosis . Collectively, these findings suggest that supplementation with UC-MSCs effectively mitigated the severity of symptoms associated with BLM-induced injury in mice. At the end of the experiment, the numbers of surviving mice in the Control, Model, UC-MSCs-L, UC-MSCs-M, and UC-MSCs-H groups were 15, 4, 7, 10, and 8, respectively. The assessment of hydroxyproline levels, a widely accepted indicator of pulmonary fibrosis severity, was conducted across the five experimental groups. The results showed a significant increase in hydroxyproline content in mice treated with BLM compared to the Control group, whereas administration of UC-MSCs reduced hydroxyproline levels in some animals . Importantly, the improvement in survival indicators did not exhibit a linear relationship with increasing doses of UC-MSCs, and the effectiveness of the moderate dose of UC-MSCs surpassed that of both the low and high doses. These findings collectively suggest that supplementation with UC-MSCs effectively mitigates the severity of symptoms associated with pulmonary fibrosis in mice.
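The Kaplan–Meier comparison described above can be sketched as follows with the lifelines package. The per-mouse follow-up times (days) and event flags below are hypothetical placeholders and do not reproduce the study's data.

```python
# Illustrative Kaplan-Meier survival comparison between two groups
# (event flag: 1 = death, 0 = alive/censored at day 21; data are placeholders).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

model = pd.DataFrame({"time":  [6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 21, 21, 21, 21],
                      "event": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]})
treated = pd.DataFrame({"time":  [9, 12, 14, 16, 18, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21],
                        "event": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]})

kmf = KaplanMeierFitter()
for label, grp in (("Model", model), ("UC-MSCs-M", treated)):
    kmf.fit(grp["time"], grp["event"], label=label)
    print(label, "median survival time:", kmf.median_survival_time_)

res = logrank_test(model["time"], treated["time"], model["event"], treated["event"])
print("log-rank p value:", res.p_value)
```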
To evaluate the effect of UC-MSCs on BLM-induced lung damage, morphological changes in the lungs of mice were examined. As observed in the HE-stained lung sections , mice in the Control group had intact alveoli with transparent alveolar walls and no obvious inflammatory response. In contrast, tracheal instillation of BLM resulted in lung injury and inflammation, inflammatory cells and fibroblasts infiltrated the alveolar spaces, and the alveoli were disrupted, resulting in enlarged alveolar spaces and thickened alveolar walls. The administration of UC-MSCs effectively reversed BLM-induced pulmonary fibrosis in mice, as indicated by a decrease in the inflammatory cell count in lung tissues and the presence of a smaller area of injured tissue surrounded by larger areas with a typical alveolar structure . Additionally, the Ashcroft score was significantly lower in lung tissues from the UC-MSCs-M and UC-MSCs-H groups, than in those from the Model group . Furthermore, our findings suggest that compared with both the high-dose and low-dose treatment groups, the medium-dose treatment group exhibited greater alleviation of IPF. These results highlight the protective effects of UC-MSCs on the development of BLM-induced pulmonary fibrosis.
The effect of UC-MSCs treatment on inflammatory activity was subsequently assessed by quantifying cytokine levels in the lungs of the experimental mice. As depicted in , the concentrations of the proinflammatory cytokines IL-1β, IL-6, and TNF-α and of the pro-fibrotic factor TGF-β1 in the bronchoalveolar lavage fluid (BALF) of the Model group were markedly elevated compared with those in the Control group. The administration of UC-MSCs resulted in a decrease in cytokine levels compared with those found in the Model group. Furthermore, the reduction in inflammatory factors was notably more pronounced in the medium-dose treatment group than in the low-dose and high-dose groups. These findings indicate that UC-MSCs have the potential to mitigate BLM-induced IPF by inhibiting cytokine production.
The GM is considered a major environmental factor that plays a significant role in the development of IPF and lung damage. In this study, we investigated the effect of MSCs on the GM by multiplex sequencing of 16S rDNA in BLM-induced IPF model mice. The Chao1 index tended to increase in BLM-injured mice in the Model group and was significantly decreased in the UC-MSCs-treated UC-MSCs-M group . Moreover, BLM increased the ACE diversity index, whereas MSCs tended to decrease this index . Additionally, to better compare bacterial community similarities among the three groups, principal coordinate analysis (PCoA) was performed on the OTU abundances obtained from the samples of the three groups. As shown in , the PCoA plot indicated apparent clustering of the microbial composition of each group. There was a significant difference in the microbial composition between the Model and Control groups, whereas there was a partial overlap between the microbial compositions of the UC-MSCs-M group and the Model group. A Venn diagram revealed that all of the groups had unique and shared operational taxonomic units (OTUs) . In the clustering analysis , samples from the Model group clustered far away from the Control group. In contrast, samples from the Control and UC-MSCs-M groups clustered within the same cluster, further supporting the restoration of the GM in the UC-MSCs-M group by treatment of the BLM-injured mice with MSCs. Collectively, these data indicate that UC-MSCs treatment can alleviate the BLM-induced disturbances in the GM profile of mice.
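For reference, the Chao1 richness index reported above can be computed from a single sample's OTU count vector as in the following sketch (bias-corrected formula). In the study the index was obtained with QIIME; the ACE index is computed analogously but is omitted here, and the count vector is a hypothetical placeholder.

```python
# Bias-corrected Chao1 richness estimator for one sample's OTU counts.
import numpy as np

def chao1(counts):
    counts = np.asarray(counts)
    counts = counts[counts > 0]
    s_obs = counts.size                 # observed OTUs
    f1 = int(np.sum(counts == 1))       # singleton OTUs
    f2 = int(np.sum(counts == 2))       # doubleton OTUs
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

otu_counts = [120, 45, 3, 1, 1, 2, 1, 7, 0, 30]   # hypothetical OTU table column
print("Chao1 =", round(chao1(otu_counts), 2))
```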
The changes in the composition of the intestinal microbiome at the phylum and genus levels in our experimental mice were significant. At the phylum level, the most evidently affected phylotypes of the GM were Bacteroidota, Firmicutes, Patescibacteria, Proteobacteria, and Campilobacterota. The proportion of Bacteroidota, the richest phylotype in the Control group , increased most markedly, from 44.74% in the Control group to 56.74% in the Model group . However, treatment with UC-MSCs in the UC-MSCs-M group decreased the proportion of Bacteroidota in BLM-induced IPF model mice. In contrast, Firmicutes, the second richest phylotype in the Control group , showed the opposite pattern of change. BLM-induced IPF model mice in the Model group exhibited a slight decrease in the proportion of Patescibacteria, whereas treatment of BLM-induced IPF model mice with UC-MSCs counteracted this change in the Patescibacteria proportion . Dubosiella and Actinobacteriota showed similar patterns of change . Moreover, the richness of Campilobacterota did not differ significantly among the groups . At the genus level, compared with the Control group, the Model group had lower abundances of Lactobacillus, Allobaculum, Alistipes, Helicobacter, Dubosiella, Lachnospiraceae_NK4A136_group, and Candidatus_Saccharimonas . The heatmap of cluster stacking revealed that UC-MSCs treatment switched the abundances of the following bacteria to levels similar to those in the Control group: Desulfovibrio, Streptococcus, Bacteroides, Prevotellaceae_UCG-001, Alloprevotella, and Escherichia-Shigella . The histogram also revealed that the relative abundances of these bacteria were in accordance with the above results . Thus, UC-MSCs treatment partly counteracted the influence of BLM on the abundance profile of the above genera. Taken together, these results suggest that UC-MSCs regulate the bacterial composition in BLM-injured mice, which may be beneficial for ameliorating pulmonary fibrosis and lung damage.
To gain better insight into the impact of the UC-MSCs intervention on the gut microbiota of mice, we conducted a linear discriminant analysis effect size (LEfSe) analysis. As shown in , biomarkers with LDA values > 3 and p < 0.05 were screened. The relative abundance of genera between groups was used to assess the impact of the significantly different genera between groups. Bacilli, Firmicutes, Erysipelotrichaceae, Erysipelotrichales, and Dubosiella were enriched in the Control group. The taxa Bacteroidia, Prevotellaceae, Clostridia_UCG_014, Prevotellaceae_UCG_001, and Enterobacteriaceae were enriched in the Model group. The UC-MSCs-M group exhibited increased relative abundances of Lactobacillales, Desulfovibrio, Pseudomonadales, Moraxellaceae, and Acinetobacter.
On the basis of biochemical markers and histopathological findings, the medium dosage of UC-MSCs was chosen for the metabolomics investigation. Metabolomic analysis of the mouse fecal samples was conducted using UPLC-Q-TOF/MS, with a single quality control (QC) sample included for every 10 samples analyzed to monitor consistency during the injection process. Evaluation of the total ion chromatograms of the QC samples in both positive and negative ion modes revealed overlapping curves, supporting the reliability of the detection system. PLS-DA, a multivariate technique for supervised pattern recognition, was used in the analysis. This statistical method identifies the variables most pertinent to the grouping factor while mitigating the impact of extraneous factors. The PLS-DA score plots displayed in clearly distinguish between the Model and Control groups and between the Model and UC-MSCs-M groups. Subsequent permutation testing revealed that the model's R2 and Q2 values were lower than the initial values from left to right , indicating that the model exhibited good predictive capacity without overfitting. To further screen the differentially abundant metabolites, we analyzed them using PLS-DA and volcano plots , and selected the metabolites that differed significantly between the Model group and the Control group, and between the Model group and the UC-MSCs-M group, applying the criteria VIP > 1.5, P < 0.05, and FC > 2 or FC < 0.5 (log2FC > 1 or log2FC < -1); the differentially abundant metabolites were then validated against the HMDB database. A total of 149 eligible differentially abundant metabolites were identified in the Model group compared with the Control group, of which 59 were upregulated and 90 were downregulated, whereas a total of 81 eligible differentially abundant metabolites were identified in the Model group compared with the UC-MSCs-M group, of which 49 were upregulated and 32 were downregulated. We selected the top 20 differentially abundant metabolites ranked by VIP value for heatmap analysis; the results revealed that, compared with the Control group, 15 differentially abundant metabolites were significantly downregulated and 5 were upregulated in the Model group, and, compared with the UC-MSCs-M group, 2 were significantly downregulated and 18 were upregulated. These metabolite classes included lipids and lipid-like molecules, organoheterocyclic compounds, organic acids and derivatives, and benzenoids .
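The screening criteria above (VIP > 1.5, P < 0.05, |log2FC| > 1) can be sketched as follows. The intensity matrix, group labels, and the VIP computation from a scikit-learn PLS model are illustrative assumptions and do not reproduce the software actually used in the study.

```python
# Illustrative screening of differentially abundant metabolites on a hypothetical
# log2-transformed intensity matrix (12 samples x 200 features).
import numpy as np
from scipy import stats
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """Variable importance in projection for a fitted PLSRegression model."""
    t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p = w.shape[0]
    ss = np.sum(t ** 2, axis=0) * np.sum(q ** 2, axis=0)        # SS explained per component
    w_norm_sq = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (w_norm_sq @ ss) / ss.sum())

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 200))                  # placeholder log2 intensities
y = np.repeat([0, 1], 6)                        # 0 = Model, 1 = UC-MSCs-M

pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls)
log2_fc = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)       # difference of log2 means
p_values = stats.ttest_ind(X[y == 1], X[y == 0], axis=0).pvalue

selected = (vip > 1.5) & (p_values < 0.05) & (np.abs(log2_fc) > 1)
print("number of differential features:", int(selected.sum()))
```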
To further explore the biological functions of the differentially abundant metabolites, we used the MetaboAnalystR package to perform MSEA (Metabolite Set Enrichment Analysis) against The Small Molecule Pathway Database (SMPDB) for all metabolites identified in each comparison group, which helped to identify and interpret the important biological pathways underlying the pattern of changes in metabolite concentrations and provided information on the pathways significantly enriched in these metabolites. Compared with the Control group, the enriched pathways of the differentially abundant metabolites in the Model group were focused mainly on vitamin B6 metabolism, fatty acid oxidation, and alkaloid metabolism, whereas compared with the UC-MSCs-M group, the enriched pathways of the differentially abundant metabolites were focused mainly on bile acid metabolism, biotin metabolism, and the biosynthesis of lipids and amino acids .
Spearman correlation analysis was used to study the functional relationships among local biomarkers, differentially abundant bacteria, and biochemical parameters. As shown in , Prevotellaceae_UCG-001, Bacteroides, Escherichia-Shigella, and Alloprevotella were all positively correlated with 16-hydroxyhexadecanoic acid, beta-zearalenol, DL-tryptophan, Betaine, and Pentadecanoic acid, and negatively correlated with Flusilazole, Adouetine x, Dethiobiotin, Glycodeoxycholic acid, and Cholic acid. Lactobacillus and Dubosiella were positively correlated with Pyruvate, 4-pyridoxic acid, Methylphosphonic acid, D-mannose 6-phosphate, D-glucarate, Porphobilinogen, Adouetine x, 16-hydroxyhexadecanoic acid, and beta-zearalenol.
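The correlation analysis described above can be sketched as follows; the genus and metabolite tables below are hypothetical placeholders for the matched 16S and metabolomics data.

```python
# Illustrative Spearman correlation between genus-level relative abundances and
# metabolite intensities across matched samples (placeholder data and names).
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 12
genera = pd.DataFrame(rng.random((n, 3)),
                      columns=["Prevotellaceae_UCG-001", "Bacteroides", "Lactobacillus"])
metabolites = pd.DataFrame(rng.random((n, 2)),
                           columns=["Cholic acid", "16-hydroxyhexadecanoic acid"])

rows = []
for g in genera.columns:
    for m in metabolites.columns:
        rho, p = spearmanr(genera[g], metabolites[m])
        rows.append({"genus": g, "metabolite": m, "rho": round(rho, 2), "p": round(p, 3)})
print(pd.DataFrame(rows))
```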
Bleomycin induces pathological changes in the endobronchial buds, collagenous walls, and alveolar spaces that closely resemble the histological characteristics observed in patients with IPF. The use of bleomycin models offers several advantages, such as ease of manipulation, widespread availability, reproducibility, and alignment with key criteria for an effective animal model . Therefore, bleomycin is often used to mimic the characteristics of human IPF lung injury. Stem cell therapies present a promising therapeutic approach for pulmonary fibrosis, and advancements in technology continue to expand the scope of potential treatments. MSCs are pivotal in the process of tissue regeneration and repair through their ability to differentiate into various cell types, exhibit immunomodulatory properties, and secrete anti-inflammatory factors to facilitate tissue healing . Furthermore, clinical studies have shown no adverse effects in the treatment of IPF via the use of MSCs derived from umbilical cords , which are easily isolated and expanded for further research on their therapeutic potential. The deposition of the extracellular matrix plays a crucial role in the pathogenesis of IPF, with collagen serving as a key component of the extracellular matrix. Hydroxyproline, a constituent of collagen, also plays a significant role in this process. Our study revealed a decrease in hydroxyproline content in mice treated with UC-MSCs compared with those with BLM-induced injury. Furthermore, UC-MSCs treatment led to improved survival rates and reduced lung fibrosis in the mice. These findings suggest that UC-MSCs may represent a promising therapeutic approach for treating IPF. Persistent inflammation is a key characteristic of IPF, in which the compromised alveolar epithelium releases various cytokines and growth factors, such as TGF-β, leading to the transformation of fibroblasts into contractile myofibroblasts capable of generating ECM . Conversely, the secretion of inflammatory mediators by activated fibroblasts/myofibroblasts facilitates fibrogenesis and attracts immune cells, thereby intensifying chronic inflammation . The findings from the ELISA analyses indicated increased levels of the BALF proinflammatory factors IL-1, TNF-α, and IL-6, as well as the profibrotic factor TGF-β1, in mice with BLM injury, which is consistent with prior research . The findings of our study indicate that UC-MSCs attenuate the inflammatory response in BLM-challenged mice. Interestingly, our results suggest that the efficacy of UC-MSCs does not exhibit a dose-dependent relationship, in line with previous research demonstrating that low-dose MSC treatment yields nonsignificant effects while high-dose treatment may exacerbate the risk of pulmonary embolism and inflammation, as well as interfere with macrophage phagocytosis and diminish treatment efficacy ; however, the exact mechanism remains unclear. Our research offers valuable insights into determining the optimal dosage of MSC therapy; however, further investigations and explorations are necessary to fully understand the potential of MSCs in human therapeutic applications. Previous studies have demonstrated that the gut microbiota plays a role in IPF and is associated with inflammation . Therefore, the gut microbiota has emerged as a new target for IPF treatment. Our study used 16S rDNA sequencing analysis to determine the effects of medium-dose UC-MSCs on the composition and structure of the gut microbiota in IPF mice.
In this study, Bacteroidota and Firmicutes were the most abundant phyla of the gut microbiota in the Control group. The relative abundance of Bacteroidota was significantly increased in IPF model mice, whereas that of Firmicutes was significantly decreased. Treatment with UC-MSCs resulted in a significant decrease in the relative abundance of Bacteroidota, whereas the relative abundances of Patescibacteria, Dubosiella, and Actinobacteriota were significantly increased. At the genus level, the relative abundance of Dubosiella was significantly downregulated in the Model group, whereas the relative abundances of Prevotellaceae_UCG-001 and Allobaculum were significantly upregulated. Treatment with UC-MSCs resulted in a significant downregulation of the relative abundances of Prevotellaceae_UCG-001 and Allobaculum, and the relative abundances of Lactobacillus and Desulfobacterota were significantly upregulated. Lactobacillus, a probiotic classified within the phylum Firmicutes and commonly found in the gastrointestinal tract, serves an immunomodulatory function in the management of individuals with chronic respiratory conditions . Previous research has demonstrated that Lactobacillus can translocate from the intestines to the lungs through the mesenteric lymphatic system or oropharyngeal reflux , stimulating lung macrophages and natural killer cells and facilitating the recruitment of Treg cells to the lungs, thereby eliciting anti-inflammatory responses . Lactobacillus can secrete various short-chain fatty acids, including propionate, butyrate, and acetate , which are known for their potent anti-inflammatory properties and play a role in regulating the immune function of the host's lungs . Our findings revealed a decrease in the abundance of Lactobacillus in the Model group, which was subsequently increased following treatment with UC-MSCs. However, further investigations are needed to fully elucidate the underlying mechanism involved. Desulfovibrio, a sulfate-reducing bacterium, has been implicated in the pathogenesis of various diseases, including Parkinson's disease, inflammatory bowel disease, and bacteremia . Desulfovibrio is known to contribute to various biological processes, such as inflammation and oxidative stress, predominantly by generating the noxious gas hydrogen sulfide (H2S), and is generally regarded as a pathogenic bacterium . However, our results showed that the abundance of Desulfovibrio instead decreased in the Model group of mice, which may be due to differences in climate, diet, and sampling techniques. A previous study revealed that Desulfovibrio is not always associated with adverse health effects ; thus, the specific role Desulfovibrio plays in IPF needs to be further explored. Prevotellaceae is widespread in all parts of the body and plays an important role in maintaining human health; however, its association with disease is unclear . Some studies have shown that Prevotellaceae can slow the progression of Parkinson's disease through the production of short-chain fatty acids , whereas others have shown that Prevotellaceae can induce a Th2 immune response that exacerbates asthma . Our results revealed that the abundance of Prevotellaceae was significantly increased in the Model group of mice and significantly decreased after treatment, which, although the mechanism is not known, may provide some insight into the roles it plays in different diseases.
Allobaculum is a gram-negative bacillus that induces the expansion of inflammatory intestinal T-helper 17 cells, compromises the intestinal barrier, and contributes to the development of autoimmune hepatitis via the gut–liver axis . Previous studies have also shown that Allobaculum may exacerbate leaky gut, leading to increased LPS production, which enters the brain via the gut–brain axis and exacerbates neuroinflammation in Alzheimer's disease (AD) by increasing the production of Aβ in neurons and the activation of glial cells . In our study, the abundance of Allobaculum increased in the Model group but decreased in the UC-MSCs group, which is consistent with the findings of previous studies showing that Allobaculum may promote the progression of inflammatory disease. However, follow-up studies will need to delve deeper into the precise function of Allobaculum in the context of lung disease. In summary, UC-MSCs can effectively inhibit inflammatory cell infiltration and pulmonary fibrosis by regulating the gut microbiota. The findings of this study suggest a correlation between alterations in the gut microbiota and metabolite levels in response to UC-MSC treatment in IPF model mice. Bile acids, known for their crucial role in intestinal nutrient absorption and lipid secretion, are also involved in maintaining homeostasis and exerting anti-inflammatory effects. Previous research has shown that bile acids can activate farnesoid X receptors (FXRs), thereby suppressing inflammation and fibrosis in various organs including the liver, kidneys, and intestines . FXR activation has also been shown to inhibit the inflammatory response and to promote lung repair after lung injury . The results of the present study revealed that the levels of bile acids and their metabolites deoxycholic acid, glycodeoxycholic acid, and 3-dehydrocholic acid were downregulated in the Model group and increased after treatment, suggesting that bile acids and their metabolites may play a protective role in the treatment of IPF via the administration of UC-MSCs, which is in line with previous findings . In addition, according to the pathway enrichment results, the biotin synthesis pathway plays a key role in the treatment of IPF with UC-MSCs. Biotin is a water-soluble B-type vitamin that is involved in a variety of cellular metabolic pathways, such as gluconeogenesis, fatty acid synthesis, and fatty acid oxidation. Biotin deficiency may lead to increased expression of proinflammatory factors, immune dysfunction, and activation of related inflammatory pathways . Previous studies have shown that biotin synthesis has an inhibitory effect on Mycobacterium abscessus in the lungs and affects the growth of Mycobacterium abscessus by influencing pH through supporting fatty acid remodeling and envelope mobility ; furthermore, the aggregation of inflammatory cells such as eosinophils, foam cells, and macrophages has been observed in the lungs of rats fed a biotin-deficient diet , suggesting that biotin has an impact on inflammatory responses. Our research offers insights into the biotin synthesis pathway as a potential treatment for IPF, yet further investigation is needed to fully elucidate the underlying mechanism involved. In recent years, the role of lipid metabolism in the progression of lung diseases, such as the alteration of phospholipids and their metabolites in the plasma of patients with IPF, has received much attention .
Phosphatidylcholine and phosphatidylethanolamine, metabolites of glycerophospholipids, have previously been shown to promote pulmonary fibrosis by mediating microvascular injury and promoting fibroblast migration and proliferation . In our study, the biosynthesis of these two substances was also shown to play a key role in the treatment of PF by UC-MSCs, which is consistent with previous findings . Taken together, the improved survival of IPF mice treated with UC-MSCs may be attributed to changes in metabolites; however, the mechanism underlying these metabolite changes remains to be investigated. In recent years, it has become increasingly clear that the gut plays a crucial role in directing immune responses outside the local environment, including in the lungs, and that small-molecule compounds play a role in connecting the lung-gut axis . In our research, we performed a Spearman correlation analysis of the altered gut microbiota and potential metabolic markers. UC-MSCs have the ability to regulate the abundance of Prevotellaceae_UCG-001, Bacteroides, Escherichia-Shigella, Alloprevotella, and Dubosiella, as well as to modulate 16-hydroxyhexadecanoic acid, glycodeoxycholic acid, and cholic acid, thereby regulating bile acid metabolic pathways and protecting lung tissues from inflammatory factors. However, further studies are needed to confirm these hypotheses. In conclusion, we generated a murine model of IPF using BLM and observed that UC-MSCs exhibit therapeutic efficacy in attenuating lung injury and decreasing the levels of inflammatory mediators in vivo. Furthermore, we investigated alterations in the gut microbiota and fecal metabolites in conjunction with omics approaches to elucidate the underlying mechanisms and identify potential biomarkers for UC-MSCs therapy in IPF. Nevertheless, our study has certain limitations, including: (1) the small number of animals used and the need for further validation through additional animal experiments and clinical studies; (2) the lack of an evaluation of the efficacy of a Control+UC-MSCs group; and (3) the lack of direct evidence that UC-MSC-mediated protection against pulmonary fibrosis is attributable to alterations in the gut microbiome and its metabolites. Furthermore, the absence of germ-free mice and fecal microbiota transplantation experiments hinders the confirmation of a causal relationship between the microbiota and pulmonary fibrosis, highlighting the need for a more sophisticated experimental design to elucidate the mechanism of UC-MSCs in treating IPF.
In summary, our study used a multifaceted approach, including analysis of altered inflammatory responses, 16S rDNA sequencing, and metabolomics, to explore the effects of UC-MSCs on pulmonary fibrosis, providing a basis for further exploration of the pathology of IPF and the development of new treatment approaches. The results of this study suggest that UC-MSCs treatment attenuates the progression of PF via a mechanism that may prevent inflammation by restoring the diversity of the gut microbiota and its metabolites. However, we did not investigate in this study how UC-MSCs modulate the interaction between the microenvironmental microbiota and metabolites, which will be further explored in the future.
5.1 UC-MSCs preparation
5.2 Animal
5.3 Animal experimental treatment and UC-MSCs administration
5.4 Bioluminescence imaging of Fluc-labeled UC-MSCs
5.5 Lung morphology analysis and hydroxyproline assay
5.6 Bronchoalveolar lavage fluid (BALF) collection and measurement of cytokine levels via enzyme-linked immunosorbent assay (ELISA)
5.7 Analysis of the intestinal microbiota
5.8 Untargeted metabolome LC-MS/MS experimental methods and data analysis
5.9 Statistical analysis
The neonatal umbilical cords used for the isolation of hUC-MSCs were obtained from healthy donors according to standard protocols. Each donor signed an informed consent form, and the ethics committee of the Second Affiliated Hospital of Fujian Medical University approved the tissue harvesting protocol (Approval No. 2021395). The recruitment period ran from October 1, 2021 to December 1, 2022. Cells at passage 3 (P3) were analyzed by flow cytometry with the following antibodies: CD11b/c (MA1-80091, Thermo Fisher Scientific), CD45 (ab10558, Abcam), HLA-DR (SAB4700731, Sigma-Aldrich), CD29 (10587-MM06, Sino Biological), CD44 (MA4400, Thermo Fisher Scientific), CD73 (10904-MM07, Sino Biological), CD90 (MA5-16671, Thermo Fisher Scientific), and CD105 (10149-R103, Sino Biological). Trilineage differentiation of the isolated cells was assessed for adipogenic, chondrogenic, and osteogenic potential using differentiation kits (Cyagen) according to the manufacturer's instructions. The UC-MSCs used in this study were CD29+, CD44+, CD73+, CD90+, and CD105+, were negative for CD11b/c, CD45, and HLA-DR, and displayed adipogenic, chondrogenic, and osteogenic differentiation potential .
Female 8–10-week-old C57BL/6 mice were purchased from Shanghai SLAC Laboratory Animal Co., Ltd. (Shanghai, China). All the mice were housed in an approved animal facility and cared for by a licensed veterinarian and supervised staff under a 12-hour light/12-hour dark cycle with access to food and water ad libitum. All experimental protocols were approved by the Institutional Animal Care and Use Committee of the Second Affiliated Hospital of Fujian Medical University (Approval No. 2021395). The animal experiments were conducted from April 2023 to June 2023.
Mice (n = 75) were randomized into five groups. Group 1 was the normal control group (Control); the mice in this group received 100 μL of sterile saline. Group 2 was the model group (Model), in which the mice were anesthetized with pentobarbital sodium and intubated with a 21G atraumatic cannula for intratracheal instillation of BLM (2 U/kg). Groups 3–5 were the UC-MSC low-, medium-, and high-dose treatment groups. BLM-induced IPF mice were intravenously administered UC-MSCs on Day 7; the numbers of UC-MSCs injected through the vein were 2.5×10⁵, 5×10⁵, and 1×10⁶ cells in Groups 3, 4, and 5, respectively. The mice were maintained until euthanasia on Day 21. Cecal content samples were then randomly collected from each group (n = 6 per group) as soon as possible and immediately stored in liquid nitrogen for further analyses. The body weights of the mice were measured every other day. At the endpoint (Day 21), the mice were sacrificed with 3% sodium pentobarbital, and all efforts were made to minimize discomfort and pain.
For bioluminescence imaging (BLI), firefly luciferase (Fluc)-labeled MSCs were used. For intravital imaging, IPF model mice were established via intratracheal injection of BLM. Next, the IPF model mice received intravenous injections of Fluc-labeled UC-MSCs in a volume of 100 μL. IPF model mice administered Fluc-labeled UC-MSCs were imaged using the AniView Kirin Imaging System (Guangzhou Biolight Biotechnology Co., Ltd.) after intraperitoneal injection of the D-luciferin substrate (150 mg/kg; Biosynth International, USA). The BLI results revealed that the fluorescence signal was strongest in the lungs and peaked on Day 1. Thereafter, the signal became weaker and disappeared by Day 5; these results indicated that UC-MSCs migrated mainly to the lungs .
The mice were sacrificed on the 21st day after stimulation with BLM or saline, and the right lung tissues were dissected. First, the lungs were lavaged three times with 500 μL of phosphate-buffered saline (PBS) and fixed by injecting 500 μL of 4% paraformaldehyde. After fixation in 4% paraformaldehyde for 24 h and gradient dehydration with alcohol, the lung tissues were embedded in paraffin. They were cut into 5 μm sections from the frontal, middle, and posterior coronal planes. HE and MT staining were performed according to previously described methods for histological analysis. To determine fibrosis severity, tissue sections were scanned using a digital pathological image scanner. Histologic quantification was conducted in a blinded manner using the Ashcroft score based on the H&E and Masson staining . Hydroxyproline is a significant component of collagen in lung tissue. To determine the hydroxyproline content, the mice were dissected and the surface blood was drained. We then weighed 50 mg of wet right lung tissue and determined the hydroxyproline content according to the assay kit instructions (Jiancheng Bioengineering Institute, Nanjing, China).
After euthanasia, the thorax of each mouse was opened to expose the trachea, and the right trachea and lung lobes were ligated. The left lung was lavaged three times with 0.9 ml of PBS. The BALF was centrifuged at 12,000×g for 5 min, after which the supernatant was collected and stored at -80°C. The protein concentration was detected via the BCA method. Fifty micrograms of protein were measured to evaluate the levels of TNF-α, IL-1β, IL-6, and TGF-β1 via ELISA kits (BOSTER, Wuhan, China) following the manufacturer’s instructions.
Microbial DNA was extracted using the HiPure Soil DNA Extraction Kit (Magen, Guangzhou, China) following the manufacturer's instructions. The 16S rDNA V3–V4 region was amplified by PCR (95°C for 5 min; then 30 cycles of 95°C for 1 min, 60°C for 1 min, and 72°C for 1 min; and finally 72°C for 7 min) using the following primers: 341F: CCTACGGGNGGCWGCAG; 806R: GGACTACHVGGGTATCTAAT. Amplicons were analyzed using the ABI StepOnePlus Real-Time PCR System (Life Technologies, Foster City, CA, USA). The purified amplicons were subjected to paired-end sequencing on the Illumina platform according to standard procedures (PE250). The raw reads from the Illumina platform were filtered using FASTP (version 0.18.0). Clean reads were merged into tags using FLASH (version 1.2.11) with a minimum overlap of 10 bp and a maximum mismatch rate of 2%. The raw tags were then quality filtered, and chimeric sequences were removed to obtain effective tags, which were clustered into operational taxonomic units (OTUs) at a ≥97% identity cutoff using UPARSE software (version 9.2.64). PCA was performed with the R Vegan package (version 2.5.3). The Chao1 and ACE richness indices were determined with QIIME software (version 1.9.1, University of Colorado, Denver, CO, USA). The dominant bacteria were analyzed mainly at the phylum and genus levels in R. Venn analysis was performed with the R VennDiagram package (version 1.6.16) and the R UpSetR package (version 1.3.3) to identify shared and endemic species/OTUs/ASVs between groups. Species comparisons among the different groups were conducted via Welch's t-test in the R Vegan package (version 2.5.3). The heatmap of cluster stacking was calculated using the R package and generated via OmicsMart (Gene Denovo Biotechnology Co. Ltd., Guangzhou, China), a dynamic real-time interactive platform for data analysis.
5.8.1 UHPLC-Q-TOF
5.8.2 Metabolomics data analysis
Samples were selected as described above. The samples were separated on an Agilent 1290 Infinity LC ultrahigh-performance liquid chromatography (UHPLC) system with a HILIC column; the column temperature was 25°C, the flow rate was 0.5 mL/min, and the injection volume was 2 μL. The mobile phases consisted of A: water + 25 mM ammonium acetate + 25 mM ammonia, and B: acetonitrile. The gradient elution program was as follows: 0–0.5 min, 95% B; 0.5–7 min, B varied linearly from 95% to 65%; 7–8 min, B varied linearly from 65% to 40%; 8–9 min, B was maintained at 40%; 9–9.1 min, B varied linearly from 40% to 95%; 9.1–12 min, B was maintained at 95%. The samples were kept in a 4°C autosampler throughout the analysis. To avoid the influence of fluctuations in the instrumental detection signal, the samples were analyzed continuously in random order, and QC samples were inserted into the sample queue to monitor and evaluate the stability of the system and the reliability of the experimental data. An AB Triple TOF 6600 mass spectrometer was used to acquire the primary and secondary spectra of the samples. The ESI source conditions after HILIC chromatographic separation were as follows: Ion Source Gas1 (Gas1): 60, Ion Source Gas2 (Gas2): 60, curtain gas (CUR): 30, source temperature: 600°C, IonSpray Voltage Floating (ISVF); product ion scan accumulation time: 0.05 s/spectrum. The secondary mass spectra were acquired by information-dependent acquisition (IDA) in high-sensitivity mode, with a declustering potential (DP) of ±60 V (positive and negative modes) and a collision energy of 35 ± 15 eV. The IDA settings were as follows: exclude isotopes within 4 Da; candidate ions to monitor per cycle: 10.
The raw MS data were converted to mzXML files using ProteoWizard MSConvert before being imported into the freely available XCMS software. The data obtained from XCMS extraction were first checked for completeness: metabolites with more than 50% missing values within a group were removed from subsequent analysis, remaining null values were filled by KNN imputation, extreme values were removed, and finally the data were normalized to the total peak area to ensure comparability between samples and between metabolites. Orthogonal partial least squares discriminant analysis (OPLS-DA) was used to assess the reliability and stability of the model for the fecal sample data, and partial least squares discriminant analysis (PLS-DA) was used to assess the differences between groups. We combined the multivariate PLS-DA VIP values with univariate t-test P values to screen for significantly different metabolites between the comparison groups. The thresholds for significant differences were a VIP ≥ 1.5 in the PLS-DA model and a t-test P < 0.05. After the metabolites were identified, pathway enrichment analysis of the differentially abundant metabolites was performed via SMPDB, and pathways with a Q value ≤ 0.05 after correction for multiple testing were defined as significantly enriched in the differentially abundant metabolites. Significant pathway enrichment allows identification of the most important biochemical metabolic and signaling pathways associated with the differentially abundant metabolites.
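The preprocessing steps above can be sketched as follows; the peak table is a hypothetical placeholder, and for brevity the >50% missing-value filter is applied across all samples rather than within each group as in the study.

```python
# Illustrative preprocessing: drop mostly-missing features, impute with KNN,
# then normalize each sample to its total peak area.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

rng = np.random.default_rng(4)
data = rng.random((8, 5)) * 1e5
data[rng.random(data.shape) < 0.2] = np.nan            # simulate missing peaks
peaks = pd.DataFrame(data, columns=[f"met_{i}" for i in range(5)])

peaks = peaks.loc[:, peaks.isna().mean() <= 0.5]       # 1) drop features >50% missing
imputed = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(peaks),
                       columns=peaks.columns)          # 2) KNN imputation
normalized = imputed.div(imputed.sum(axis=1), axis=0)  # 3) total-peak-area normalization
print(normalized.round(3))
```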
The data are presented as the means ± SDs. The significance of the differences was determined by analysis of variance (ANOVA). Histograms were created using Graph Pad Prism 9 software (San Diego, CA, USA). Bioinformatics analysis, including species taxonomy, richness and diversity analyses, was performed via OmicsMart. A p-value < 0.05 indicated a statistically significant difference.
S1 Fig Identification of UC-MSCs. (A) Cells were analyzed by flow cytometry with the following antibodies: anti-CD11b/c, anti-CD45, anti-HLA-DR, anti-CD29, anti-CD44, anti-CD73, anti-CD90, and anti-CD105. (B) Identification of the potential of UC-MSCs for lipogenic, chondrogenic and osteogenic differentiation. (TIF) S2 Fig Evaluation of UC-MSC delivery through intravenous administration. Fluorescence imaging of different ex vivo organs (brain, lung, heart, liver, spleen and kidney) 24 h after the administration of UC-MSCs. (TIF) S1 Data (ZIP)
Evaluation of the Comprehensive Geriatric Assessment (CGA) tool as a predictor of postoperative complications following major oncological abdominal surgery in geriatric patients | 5b1c8948-517b-418e-aa1c-b5b1bdaca58f | 8893608 | Internal Medicine[mh] | The age distribution of a population greatly affects its burden of disease and disability, including cancer incidence, morbidity, and mortality . With the sharp increase in life expectancy observed in both men and women, virtually every country in the world is experiencing growth in the size and proportion of older people in their population. Over the next three decades, the global number of older people is expected to more than double, reaching over 1.5 billion by 2050, with up to 16% of the world’s population being 65 years and above . Although the relationship between ageing and cancer is complex and far from understood, the incidence of cancer increases with age, as seen in humans and in animal experimental models . As global epidemiologic and demographic transitions continue, they signal an ever-growing cancer burden over the next few decades, with over 20 million new cases expected annually as of the year 2025. With surgical intervention being the main curative treatment for many solid tumors, the number of older patients undergoing surgery as part of their cancer therapy regimen is also expected to rise . These older patients are often considered at increased risk for complications after major surgery but chronological age alone is not a reliable predictor of postoperative complications, as it cannot on its own, capture the physiologic heterogeneity prevalent in this population . The concept of frailty extends beyond chronological age and is one of the most serious global health challenges to be faced in the coming century. Frailty can be defined as “a complex clinical condition characterized by a decline in physiological capacity and reserve across several organ systems, with a resultant increased susceptibility to stressors” . Older patients considered fit for surgery might do as well as younger patients but frail and vulnerable patients are at an increased risk of adverse postoperative outcomes . The usual method to identify frailty is to evaluate an older patient’s general condition and risk of adverse outcomes using the evidence-based process of comprehensive geriatric assessment (CGA). Through a battery of standardized and validated assessment instruments, CGA evaluates nutrition, cognition, functional status, comorbidities, and geriatric syndromes to identify at-risk patients and to possibly guide management, treatment, and follow-up . However, performing a full CGA is time-consuming, therefore the International Society of Geriatric Oncology (SIOG) recommends a “two-step approach”. This strategy starts with the use of a screening tool to identify patients in need of further evaluation by CGA. Screening can be done with the Geriatric 8 (G8), a screening tool that includes seven items from the Mini Nutritional Assessment (MNA) scoring system, and an age-related component (<80, 80–85, or >85 years). Final scores range from 0 to 17, with a score below 14 indicating a geriatric risk profile . 
Some components of the CGA have been found to be consistently associated with adverse postoperative outcomes; however, based on current evidence, it is not possible to reach a consensus as to how an optimal geriatric assessment (GA) should be conducted and recommendations vary based on patient age, type of surgery (i.e., minor versus major), and other risk factors . Moreover, geriatric screening using the G8 assessment tool has been reported to be a powerful outcome predictor in surgical oncogeriatric patients in terms of hospital stay, rate of postoperative delirium, and 1-year mortality rates . The purpose of this study was to identify independent predictors of postoperative complications in oncogeriatric patients, based on components of the comprehensive geriatric assessment.
Study design & objectives
This is a monocentric retrospective study including patients ≥70 years old operated on with curative intent for an abdominal malignancy at Institut Jules Bordet (IJB) between January 2016 and December 2019. The study protocol was reviewed and approved by the Ethics Committee of IJB (CE3277). All patients’ data were retrieved from the Institute’s electronic medical records software (Oribase). The primary objective of the study was to identify whether some components of the CGA are independent predictors of postoperative complications in oncogeriatric patients. The secondary objective was to evaluate the predictive value of the G8 for postoperative complications, comparing patients identified as frail by an impaired G8 score with those with a normal G8 score.
Surgical interventions and patients’ characteristics; Outcomes; Geriatric assessment; G8 screening tool; Statistical analysis
Major surgical interventions were defined based on the extent of dissection: major body cavity opened (e.g. peritoneal cavity) and as oncological surgeries with major-to-severe tissue trauma (e.g. resection of an organ or part of it and/or digestive anastomosis). Inclusion criteria comprised patients 70 years or older, scheduled for major (open or laparoscopic) oncological abdominal surgery, such as: hepatectomy, colectomy, abdomino-perineal resection, cytoreductive surgery with or without hyperthermic intraperitoneal chemotherapy (HIPEC), low anterior resection, small bowel resection, exploratory laparotomy, esophagectomy, and gastrectomy. Further inclusion criteria among those patients were: pre-operative geriatric 8 (G8) screening, and in patients scoring below or equal to 14 when evaluated by the G8 screening tool, a comprehensive geriatric assessment score (CGA), performed at least 3 months prior to surgical intervention. All other patients were excluded (e.g. emergency surgeries). The institutional electronic medical record system (Oribase) was used as the data source. Among different components of the CGA, we retained a set of seven validated scores and questionnaires: Katz’s activities of daily living (ADL) and Lawton’s instrumental activities of daily living (IADL) scales to assess functional status; Mini-Mental State Examination (MMSE) to assess cognitive status; Geriatric Depression Scale (GDS) to assess depression status; Mini-Nutritional Assessment Short Form (MNA-SF) to assess nutrition status; and Hospital Anxiety and Depression Scale (HADS-A, HADS-D) to assess the symptom severity of anxiety disorders and depression . Patients’ clinical and demographic characteristics were retrieved as well: age, gender, cancer type and histology, type of intervention, G8/ CGA results, and complications.
Postoperative complications obtained from medical records were defined as any event requiring treatment occurring in a 90-day period after the intervention. Severity of complications were classified by the primary investigator according to the Clavien-Dindo (CD) classification scale (grades I to V) . Grades I and II were considered minor complications whereas grades IIIa to IVb were recorded as major complications. Grade V meant death of the patient. For complications requiring more than one treatment method, the highest severity grade was noted. In case of multiple complications in a patient, each was recorded and graded separately and, additionally, the Comprehensive Complication Index (CCI) was calculated and reported on an interval scale from 0 to 100 to summarize all (minor and major) complications . Postoperative mortality was defined as death within 90 days.
In the subgroup of patients scoring below or equal to 14 on the G8 survey and having received a CGA, patients were divided into two groups based on the median values of continuous variables for each score and questionnaire. Dichotomized outcome variables for 90-day postoperative complications were created according to the CD classification: no complications versus any complications. The impact of each collected variable (assessment tool of the CGA) upon the dependent variable of interest (postoperative complications) was explored by univariate analysis and by fitting logistic regression models using the six subscales of the geriatric assessment.
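A minimal sketch of how this median-split and modeling step could be implemented is shown below (Python); the file name, the column names, and the exact set of six subscales are illustrative assumptions, not the authors' actual code or data.
# --- Illustrative sketch (Python): median-split of CGA subscales and logistic regression ---
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cga_cohort.csv")                             # one row per patient (hypothetical file)
subscales = ["adl", "iadl", "mmse", "gds", "hads", "mna_sf"]   # hypothetical column names

# Dichotomize each continuous subscale at the cohort median (1 = above the median)
for col in subscales:
    df[col + "_hi"] = (df[col] > df[col].median()).astype(int)

X = sm.add_constant(df[[col + "_hi" for col in subscales]])
y = df["complication_any"]                                     # 1 = any 90-day complication, 0 = none

model = sm.Logit(y, X).fit()
print(model.summary())                                         # exponentiate coefficients for odds ratios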
Development of complications was compared between patients with a G8 ≤14 and a G8 >14. This study investigated postoperative outcomes using binary measures: 1. Any morbidity (grade I-V) within 90 days of surgery, 2. Major morbidity (CD grade ≥ IIIa) within 90 days of surgery, 3. Major CCI score (CCI > 50) 4. Death (CD grade V) within 90 days of surgery.
A descriptive analysis of clinical and demographic variables was performed. To test the significance of each variable in relation to the outcomes, univariate analyses were performed using the chi-square test or Fisher’s exact test. Statistical information was encoded anonymously into a database using Microsoft Excel spreadsheets. A value of p < 0.05 was considered to be statistically significant. All analyses were done using the Statistical Package for the Social Sciences (SPSS, version 27.0, New York, US).
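The univariate testing described above could look roughly like the following sketch (Python); the 2 × 2 counts are placeholders rather than study data, and the original analysis was run in SPSS, not in code.
# --- Illustrative sketch (Python): chi-square and Fisher's exact test on a 2 x 2 table ---
from scipy.stats import chi2_contingency, fisher_exact

# rows: predictor impaired / normal; columns: complication yes / no (placeholder counts)
table = [[30, 20],
         [10, 25]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)     # preferred when expected cell counts are small

print(f"chi-square p = {p_chi2:.3f}; Fisher exact p = {p_fisher:.3f}")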
Baseline patient characteristics; Comprehensive geriatric assessment—CGA; Postoperative outcomes; Association between CGA and complications
Association between G8 and complications
An overview of the univariate analysis for predictors of complications, higher CD score, higher CCI score, and mortality is provided in . A G8 score below or equal to 14 was significantly associated with the development of complications within 90 days of surgery. Patients who screened positive for frailty had significantly more complications than patients who screened negative, 52% versus 17%, respectively (p < 0.05). However, impaired G8 was not associated with the occurrence of major complications or a higher CCI score. Additionally, no association was reported between impaired G8 and 90-day mortality, but it was significantly associated with a higher 1-year mortality (p = 0.01).
A total of 1377 patients underwent an elective surgery at our institute during the study period. 806 were ≥ 70 years old, out of whom 112 patients matched the inclusion criteria ( ). Non-oncological surgeries (e.g. cholecystectomy, umbilical or inguinal hernias) and oncological surgeries with a minimal or low amount of tissue trauma and limited extent of dissection (e.g. digestive stoma), were excluded (1156). Furthermore, patients younger than 70 years (571), at the time of surgery, were excluded as well. From the 221 interventions for oncological abdominal surgery remaining, 69 were further excluded because no G8 screening was performed. Of the remaining 152, 40 were excluded for not having an appropriate G8 (performed more than 3 months prior to the intervention, or, after the intervention). Amongst the 112 patients included, the median age was 74 years, 46 were males (41.1%) and 68 females (58.9%). The most common tumors were colorectal (52.7%) and ovarian (13.4%), and the most common histological type was adenocarcinoma in 84.8% of cases. The majority of surgeries (71.4%) were open surgeries and the most frequent types of interventions were colectomy (25.9%) and cytoreductive/ debulking surgeries (17.9%) ± HIPEC. Malignancies classified as “other” included unknown primaries, endometrial tumors, pancreatobiliary cancers, retroperitoneal cancers and breast, lung, and skin cancers metastatic to the abdomen. The majority of malignancies (37.5%) were grade II. Details of the baseline clinical variables are summarized in . G8 was impaired in 76 patients (67.9%) and normal in 36 patients (32.1%). When we compared the characteristics of the two groups according to sex, cancer type, tumor histology, type of intervention, and surgical approach, we did not find significant differences except for age and cancer grade ( p = 0 . 04 ).
Among the 76 patients with an impaired G8, the median age was 76 years (70–92 years). Ten of these patients were not evaluated by CGA at all. Preoperative CGA scores, among the remaining 66 patients, were ADL > 6 in 33.3%, IADL ≤ 7 in 50.8%, MMSE ≤ 28 in 57.9%, GDS > 3 in 50.0%, HADS-A > 6 in 42.9%, HADS-D > 4 in 46.4%, and MNA-SF ≤ 9 in 39.4%.
Out of all 112 patients, 77 developed at least one complication. Forty-three patients (38.4%) experienced a grade II complication and 15 patients (13.4%) developed a major complication. Sixteen patients (14.3%) had a CCI score > 50. Twelve patients (10.7%) died within the 90-day follow-up period from all causes and from postoperative complications. Postoperative Clavien-Dindo scores are displayed in .
We analyzed the impact of each component of the CGA upon the development of complications. In univariate analysis, the short form of the Mini-Nutritional Assessment was the sole prognostic factor for postoperative complications (p = 0.03). There were no associations between the other assessment tools and the emergence of complications. Results are shown in .
The primary aim of our study was to identify independent predictors of postoperative complications based on components of the CGA. In our population, 67.9% of the patients were identified by an impaired G8 as being at risk for frailty. Among those patients, 26 scored below or equal to 9 when evaluated by the short form of the Mini-Nutritional Assessment tools. Our main finding was the strong association between this nutrition-based evaluation tool and the occurrence of any complications during the 90-day postoperative course. In this group of patients with an abnormal G8 score, no association was found between other components of the CGA and postoperative complications. This relationship has been increasingly investigated in the field of surgical oncology but with inconsistent results. In patients with colorectal cancer, ADL, IADL, MMSE, GDS, and MNA were inconsistently associated with postoperative complications . Huisman et al. found “impaired nutrition” to be associated with major postoperative complications, but the authors evaluated the nutritional status using the Nutritional Risk Screening (NRS) in their study . For patients undergoing gastric cancer resection, other malnutrition screening tools have been validated and found to be associated with peri-operative and postoperative morbidity and complications . Our findings indicate the importance of preoperative nutritional assessment in major oncological abdominal surgery. This tool provides valuable patient information, to assess the surgical risk-benefit ratio, and to possibly tailor an individualized nutritional therapy by a multidisciplinary team, even though the impact of normalization of potentially reversible factors on postoperative complications is still under investigation . However, despite recommendations on geriatric assessment in the oncogeriatric patients’ population, published in 2005 by the SIOG, there is still no consensus regarding which specific instruments should be included in a CGA . Inclusion of nutritional impairment screening is of high interest as this reversible factor seems to be an indicator of increased postoperative morbidity. Heterogeneity between patient selection and surgical intervention makes comparison difficult and indicates the need for further studies, with larger populations, using identical screening tools and cut-off values. Another important finding in our study was the significant difference in postoperative outcomes between patients with a G8 ≤ 14 and patients with a G8 > 14. We identified an association between impaired G8 screening and the development of at least one complication. Several studies investigating this relationship have provided contradictory results. A recent study on preoperative frailty assessment in 114 patients found the G8 tool not to be significantly associated with the risk of adverse events . Another study found similar results in 139 older patients treated surgically for colorectal cancer. The authors did not find isolated G8 to be of any predictive value on postoperative outcomes. They did, however, find that a combination of the G8 and the Identification of Seniors at Risk-Hospitalized Patients (ISAR-HP) screening tool resulted in a high predictive value for postoperative complications . Another combination recently investigated by Bessems et al. indicated that frailty screening by G8 in association with the 4-meter gait speed test predicts postoperative complications in colorectal cancer patients undergoing elective surgery . 
Additionally, in another study, de Vries et al. found G8 to be a strong predictor of postoperative complications in a population of patients undergoing surgery for cutaneous head and neck cancer . Our study did not find a relation between impaired G8 and major complications (defined as Clavien-Dindo grade > II). When investigating this same relationship in 143 patients of the same age group, requiring surgery for a suspected solid malignancy, Bruijnen et al., found no difference in the occurrence of major 30-day complications . However, three studies did describe an association between G8 and the occurrence of major Clavien-Dindo complications. Studies including grade II complications found this association in 78 patients who underwent surgery for colorectal cancer , 184 patients who underwent emergency abdominal surgery (including non-oncological patients) and 71 patients treated for hepatocellular carcinoma , thus confirming these findings. In these studies, no association was found between impaired G8 and a higher 1-year mortality rate. We found impaired G8 to not be predictive of 90-day mortality in our population. It was, however, associated with an increased 1-year mortality. This may be explained by the significantly higher cancer grades in this group. The predictive value of the G8 on postoperative complications in surgical oncology and practice remains unclear and further trials on larger populations are needed to complete our understanding of this screening tool and its place in diagnostic algorithms. This is in contrast to studies done in non-surgical oncological patients, in whom impaired G8 has been shown to provide helpful information through prediction of complications in patients receiving chemotherapy and/or radiotherapy . Furthermore, it is imperative to mention that the G8 was initially developed as a frailty screening tool for predicting deficits in the CGA. It was not intended to be used as a prognostic tool. Direct comparison between these study results and ours should be approached with caution. The above-mentioned studies focused mostly on homogenous populations with a single malignancy or surgical intervention whereas our study included a variety of tumor types and surgeries. Heterogeneity remains among patient characteristics between studies, definitions of frailty, cut-off values for screening and assessment tools but also in the definition of adverse postoperative outcomes. The present study had several limitations. First, although the inclusion criteria limited our population to patients undergoing ‘major’ oncological abdominal surgery, heterogeneity affecting risk of complications remained. For example, Kothari et al. recently showed that within each of their frailty cohorts, laparoscopic colectomy provided better outcomes when compared to an open approach in all domains, including cardiac/vascular, pulmonary, and wound complications . Additionally, this study did not account for differences in tumor characteristics, such as stage, treatment course, or intensity. Another limitation is the monocentric nature of our study since variations in postoperative complications are influenced by technical skill scores of surgeons and quality of care by medical staff . Other limitations lie in the fact that postoperative outcomes were collected retrospectively. As a result, some minor complications may have been underreported. 
Despite these limitations, our study was unique in including only geriatric patients admitted for major oncological abdominal surgery, while not being centered around a single malignancy or surgical approach.
To conclude, this study suggests that the MNA-SF is a valuable asset in preoperative risk assessment for postoperative complications that have the potential to impede recovery in oncogeriatric patients undergoing major abdominal surgery. Evaluation of nutritional status as part of the comprehensive geriatric assessment in patients identified as frail seems essential, as nutrition is a potentially modifiable factor influencing preoperative management and treatment modalities. The G8, besides its role as a screening tool for impairment in the CGA, shows potential as a predictor of postoperative complications. This finding accords well with the MNA’s predictive potential, since all of the G8 questions (except for age) are derived from the MNA.
|
The significance of preoperative immunohistochemistry in patients with endometrial carcinoma – which parameters are decisive? | aad56bce-f7f8-4b40-b69d-6016d4754594 | 8604832 | Anatomy[mh] | The genomic classification of endometrial carcinoma has made it clear that the division into type I and type II, together with clinical factors such as myometrial invasion, grading, lymphatic vessel invasion (L), blood vessel invasion (V), tumor size and patient age, does not adequately describe the biology of the tumor. The result is insufficient risk stratifications (“low risk, intermediate, high intermediate, high risk”), on the basis of which treatment decisions have been made for thousands of patients and, in part, continue to be made. Numerous clinically important questions on the value of adjuvant therapy, such as external beam radiotherapy (EBRT) vs. brachytherapy (BT); radiochemotherapy (RCT) vs. radiotherapy; RCT vs. chemotherapy + BT, etc. [ – ], all led to inconclusive results, because patient selection on the basis of the classical risk classifications listed above describes the biology of the disease incompletely [ – ]. It is the genomic rather than the clinical factors that are decisive. The classification into (1) POLE mutations, (2) tumors without a specific molecular profile (“non-specific molecular profile” [NSMP]), (3) microsatellite-instable tumors (MSI = “mismatch repair deficiency” [MMRd]) and (4) p53-mutated tumors has opened up a new, previously unknown world. Consequently, the PORTEC group set about reanalyzing its trial results, determined this molecular profile in the patients treated within the trials and correlated it with oncological outcome . This showed that POLE mutations are associated with a very favorable oncological outcome, that NSMP and MSI patients have an intermediate prognosis, and that p53-mutated patients represent a clearly less favorable group. There are overlaps with the classical clinical prognostic factors, but there are also many patients who would not have required any adjuvant therapy (POLE mutations), and patients with p53 mutations who in the past tended to be at risk of undertreatment. On the basis of these data, the European guideline group has already adapted its risk classification, including the treatment recommendations . For the German guideline, this is still pending . Since no level I or level II evidence is available to date, the logical consequence is now trials that use the patient’s genomic profile as the basis for treatment decisions, such as the ongoing PORTEC-4a trial (ClinicalTrials.gov identifier: NCT03469674). The study commented on below discusses the extent to which additional immunohistochemical (IHC) parameters, such as the adhesion molecule L1CAM and p53 (which has been incorporated into the ESGO/ESTRO guidelines), as well as “old acquaintances” such as hormone receptor status, can contribute to a further refinement of the prognostic groups and thus of the treatment recommendations.
Data from >760 patients from 10 European hospitals were analyzed. The mean follow-up was 5.5 years. 71% of the patients were diagnosed preoperatively as G1/2, and 89% had endometrioid histology.
Preoperative immunohistochemistry showed p53abn findings in 112 (14.7%), L1CAM+ in 79 (10.4%) and a negative ER/PR status in 151 (20%) of the patients. Lymphadenectomy had been performed in 493 (65%) patients, and 53 patients (11%) had lymph node metastases. An indication for adjuvant therapy was established in 347 (46%): of these, 112 received VBT, 104 EBRT and 93 VBT + EBRT. Chemotherapy and radiochemotherapy were given to 38 and 26 patients, respectively. 102 (13.4%) of the 105 (13.8%) patients who developed a recurrence died of the disease. 12% of the patients examined here showed abnormal p53 on immunohistochemistry; this is considered a valid surrogate parameter for p53 mutation testing. p53abn proved to be the most important prognostic factor. The study confirms that p53 occupies a key position and is the most important discriminator between a good/intermediate prognosis (p53-negative tumors) and a poor outcome (p53mut). This is impressively illustrated in Fig. . Additional factors such as L1CAM and the good old hormone receptors modulate the prognosis only marginally: L1CAM negativity and hormone receptor positivity shifted clinical outcome for the better, whereas L1CAM positivity and hormone receptor negativity had an unfavorable influence on outcome.
The use of preoperative IHC biomarkers has important prognostic significance alongside the ESMO-ESGO-ESTRO risk classification and lymph node status. For daily clinical practice, p53/L1CAM/ER/PR expression could serve as an indicator for surgical staging and for refining selective adjuvant therapy through assignment to the ESMO-ESGO-ESTRO risk classification .
Genomic classification has led to four subgroups that indicate different prognoses and differ markedly with respect to response, for example to immunotherapy. Factors such as the well-known hormone receptors or the adhesion molecule L1CAM have an additional influence, but the most important distinction is provided by p53. Even today, this distinction can be made quickly and inexpensively in every surgical specimen. The other investigations are more laborious and more expensive; in the future, however, we will probably not be able to do without them, although unfortunately the German guideline, owing to time-consuming formal processes, is one of the few guidelines that have not yet published an update on this point. Reference is made here to the ESGO/ESMO guideline and to the NCCN guideline (version 3/21; ). Genomic classification will therefore in future provide a basis for decision-making for ruling out a high-risk situation (p53wt) or establishing the presence of a high-risk situation (p53mut). In all likelihood, this will also be an important decision aid for the indication for lymphadenectomy, and it will clarify the use of postoperative external beam radiotherapy instead of brachytherapy and the indication for chemotherapy.
Patients with POLE mutations are probably patients who do not benefit from adjuvant therapy. MSI patients have an intermediate prognosis but respond very well to immunotherapies, and the subgroup of NSMP patients will certainly require a more refined subgroup analysis in the future. Clinical trials over the next few years will show whether the path of genomic classification is the right one with regard to prognosis and treatment decisions. Simone Marnitz, Köln
|
Kartogenin Improves Doxorubicin-Induced Cardiotoxicity by Alleviating Oxidative Stress and Protecting Mitochondria | bbef6f69-2745-4eeb-a409-8a8a0c7b6bc2 | 11942266 | Pathologic Processes[mh] | Because of its great effectiveness, the common anthracycline doxorubicin (DOX) has emerged as a key component of clinical cancer treatment . While demonstrating therapeutic efficacy across multiple cancer types, the clinical application of DOX faces significant limitations owing to its propensity to induce cardiac complications . Patients’ lives are at risk since doxorubicin-induced cardiotoxicity (DIC) is typically irreversible and can result in clinical congestive heart failure . Therefore, the prevention and treatment of DIC depend heavily on the investigation of the precise mechanisms and the creation of novel therapeutic targets. Numerous biological processes are involved in the intricate mechanics of DIC . Substantial evidence implicates oxidative stress as the principal contributor to the pathogenesis of DIC . When the antioxidant and oxidative systems are out of balance, it can disrupt several signaling pathways and impact biological processes. This condition is known as oxidative stress . DOX has been mechanistically linked to the accumulation of ROS and subsequent oxidative stress-mediated cellular damage . The primary source of ROS generation in vivo, mitochondria, are also the subcellular organelles that DIC gradually damages the most severely . DOX has been shown to induce both mitochondrial structural alterations and mitochondrial malfunction in cardiomyocytes . Oxidative stress injury is made worse by mitochondrial malfunction, which also increases the generation of reactive ROS . In conclusion, DIC may be effectively treated and prevented by maintaining normal mitochondrial levels and lowering oxidative stress. Kartogenin (KGN), a small molecule compound, is a potent inducer that effectively promotes the differentiation of pluripotent mesenchymal stem cells in chondrocytes . Recent studies reveal its antioxidant properties and mitochondrial protective effect. Experimental data from Wang’s group revealed that the KGN-mediated activation of Nrf2/TXNIP signaling augments redox homeostasis in mammalian cells . Not coincidentally, Tian et al. showed that KGN treatment effectively up-regulated the expression of GPX1 and HO-1, which attenuated oxidative stress . Wang et al. also showed that KGN has a protective effect on mitochondria, as evidenced by its capacity to increase the membrane potential of mitochondria, decrease mitochondrial swelling, and prevent the outer mitochondrial membrane from rupturing . While previous investigations have extensively explored KGN’s capacity to induce stem cell differentiation, its potential protective mechanisms against DIC-associated oxidative stress and mitochondrial impairment remain poorly characterized. This study therefore seeks to determine whether KGN mitigates DIC progression through the modulation of redox balance and preservation of mitochondrial homeostasis. Our experimental design incorporated both cell-based and animal models to simulate DIC progression. Using echocardiography and histopathological staining, murine cardiac function and myocardial damage were evaluated. The detection of lipid peroxidation products and antioxidant enzymes from cardiac tissues and cells also assisted in determining the degree of oxidative stress. 
In addition to assessing ROS levels within cells, we also examined mitochondrial morphology and membrane potential, both of which are indicators of mitochondrial state. Through network pharmacology and sequencing data, we suggest potential pathways by which KGN attenuates DIC. These novel insights hold translational potential for innovating therapeutic approaches aimed at mitigating the cardiotoxicity associated with anticancer regimens.
2.1. KGN Alleviated DOX-Induced Cardiotoxicity in Mice
To assess the cardioprotective effects of KGN in DIC, mice were pre-treated with KGN for 7 days prior to receiving intraperitoneal injections of DOX (4 mg/kg) weekly for 4 consecutive weeks. Following the DOX administration, KGN treatment continued for an additional 7 days. This approach allowed for the evaluation of both the prophylactic and therapeutic effects of KGN within the same group of mice ( A,B). Before the drug intervention, there was no difference in the size and weight of mice between the groups ( A,B). Interestingly, KGN pre-treatment and continued administration significantly prevented the DOX-induced reductions in body weight, heart weight, heart size and the HW/TL, as well as significantly improving cardiac function ( C–F and C,D). DOX treatment led to notable cardiac dysfunction, characterized by increased LVIDd and LVIDs, as well as decreased EF and FS. Remarkably, KGN treatment significantly mitigated these functional impairments. The pre-treatment and continued administration of KGN resulted in reduced left ventricular internal diameter during diastole (LVIDd) and systole (LVIDs), and restored ejection fractions (EF) and fractional shortenings (FS), indicating enhanced ventricular performance and contractility ( G–I). These findings demonstrate that KGN, through both its pre-treatment and continued post-DOX administration, provides substantial protection against DOX-induced myocardial dysfunction. Further research is warranted to explore the underlying mechanisms and the potential clinical applications of KGN in managing DOX-induced cardiotoxicity.
2.2. Histological and Biomarker Assessment of KGN’s Protective Effects on Doxorubicin-Induced Cardiotoxicity; 2.3. KGN Treatment Inhibited Myocardial Oxidative Stress in DOX-Treated Mice; 2.4. GO and KEGG Enrichment Analysis by Network Pharmacology and RNA-Seq; 2.5. KGN Treatment Inhibited Myocardial Oxidative Stress in H9C2 Cells; 2.6. Mitochondrial Protective Effects of KGN in DOX-Induced Cardiotoxicity
To assess the histological effects of KGN on DIC, we performed HE and sirius red staining to evaluate myocardial structure and fibrosis. The HE staining showed that the myocardial architecture in the control mice was dense and orderly. In contrast, DOX treatment caused a marked disarray of myocardial cells, with irregular and sparse arrangement, indicative of significant myocardial injury. KGN treatment significantly improved this pathological alteration, restoring myocardial cell alignment and density, and thus preserving structural integrity ( A). Sirius red staining, which detects fibrosis, revealed extensive collagen deposition in DOX-treated mice, signifying pronounced myocardial fibrosis. KGN treatment significantly reduced collagen accumulation, indicating a substantial attenuation of fibrosis ( A,B). We also measured cTnT levels, a marker of myocardial injury. Consistent with the histological results, DOX-treated mice exhibited elevated cTnT levels, which were significantly reduced by KGN treatment, further supporting its cardioprotective effects ( C). Together, these findings demonstrate that KGN effectively alleviates both the structural and functional damage caused by DOX, suggesting its potential as a therapeutic agent for mitigating DOX-induced cardiotoxicity.
Numerous studies have demonstrated the key role of oxidative stress in DOX-induced cardiotoxicity. To comprehensively evaluate the oxidative stress status in mouse models, we assessed several key biomarkers, including MDA, GSH, GSSG, and the GSH/GSSG ratio ( D–G). These indicators provide a thorough assessment of the oxidative damage and redox balance in myocardial tissue. Our results demonstrated that DOX treatment led to a significant increase in MDA levels, a marker of lipid peroxidation, and a marked reduction in the GSH/GSSG ratio, reflecting oxidative damage and a disruption of redox homeostasis. Additionally, the GSSG levels were elevated, further supporting the presence of oxidative stress. Notably, KGN treatment significantly mitigated these oxidative alterations. It reduced the elevated MDA and GSSG levels induced by DOX, suggesting a reduction in lipid peroxidation and oxidative damage. Furthermore, KGN treatment restored the GSH/GSSG ratio, indicating that KGN helps to restore the balance between the reduced and oxidized forms of glutathione, a critical antioxidant system.
After eliminating duplicate data, a total of 367 KGN active targets were identified from the PharmMapper and SwissTargetPrediction databases. In addition, a total of 222 DIC-related targets were identified from the GeneCards, and OMIM databases. A total of 31 common targets of KGN and DIC were identified ( A). The 31 common targets were submitted to the DAVID database for enrichment analysis with GO bioprocesses, and the results were sorted by p -value. The GO enrichment analysis produced a term related to 196 bioprocesses, 29 terms related to cellular components, and 39 terms related to molecular functions. Intersecting targets were significantly associated with cellular resistance to oxidative stress and mitochondria ( B). Meanwhile, three groups of mouse heart samples were subjected to RNA-seq and the results were taken from the intersections for GO and KEGG enrichment analyses. KEGG pathway analyses revealed that the oxidative phosphorylation pathway was significantly enriched in DOX-treated mice ( C). The GO enrichment results suggested that cellular constituents and bioprocesses were highly correlated with mitochondria ( D). Our results draw the common conclusion from two different analytical approaches that the mechanism of KGN treatment of DIC is highly correlated with oxidative stress and mitochondria.
To investigate the protective effects of KGN on DOX-induced inhibition in cardiomyocytes, we utilized H9C2 cells to act as an in vitro model. Consistent with previous studies, DOX treatment significantly reduced cell viability in a dose-dependent manner when compared to the PBS control group ( A). This highlights the cytotoxic nature of DOX at varying concentrations. Subsequently, we treated H9C2 cells with different concentrations of KGN. Our results demonstrated that KGN alone did not affect cell proliferation at concentrations up to 4 μM, indicating that KGN is non-cytotoxic within this range ( B). Remarkably, when co-administered with DOX (1 μM), KGN significantly reversed the DOX-induced suppression of cell proliferation in a dose-dependent manner ( C). This suggests that KGN has a dose-sensitive protective effect against the antiproliferative impact of DOX on cardiomyocytes. To further validate the antioxidative effects of KGN at the cellular level, we quantified intracellular ROS production, MDA levels, and the GSH/GSSG ratio in DOX-treated H9C2 cells. Consistent with our in vivo findings, KGN significantly mitigated the DOX-induced increase in oxidative stress markers. Specifically, KGN reduced ROS and MDA levels while enhancing GSH levels and restoring the GSH/GSSG ratio ( D–K). These results indicate that KGN effectively alleviates the redox imbalance and oxidative stress caused by DOX.
The network pharmacology analysis and RNA-seq results highlighted the critical role of mitochondrial function in the therapeutic effects mediated by KGN. To further explore this, we examined mitochondrial changes in H9C2 cardiomyocytes subjected to DOX-induced stress. DOX treatment led to a significant reduction in mitochondrial density and disrupted inter-mitochondrial communication. Under normal conditions, mitochondria exhibit a reticular, network-like structure essential for efficient cellular function. However, in the DOX-treated group, mitochondria appeared fragmented, shortened, and aggregated, losing their characteristic network architecture. This morphological alteration indicates severe mitochondrial dysfunction and compromised cellular energy dynamics. Remarkably, KGN treatment significantly reversed these changes ( A–C). Mitochondria in KGN-treated cells demonstrated restored density and reestablished their reticular network, indicating improved mitochondrial communication and integrity. Further functional assessment using the JC-1 assay revealed that DOX significantly decreased mitochondrial membrane potential, a hallmark of mitochondrial dysfunction. KGN treatment effectively restored this membrane potential, underscoring its role in maintaining mitochondrial health ( D,E). Additionally, DIC showed a notable reduction in ATP production, reflecting impaired mitochondrial bioenergetics ( F). KGN not only reversed this decline but also restored ATP levels, indicating the recovery of mitochondrial function and energy production. These findings suggest that KGN plays a critical role in preserving mitochondrial structure and function, counteracting the detrimental effects of DOX. These results indicate KGN could restore DIC-induced mitochondrial network integrity and enhance bioenergetics.
Myocardial structural alterations and cardiac dysfunction are brought on by DOX treatment. This presents clinically as early asymptomatic left ventricular systolic impairment, which ultimately results in the development of refractory heart failure . In this work, we showed that KGN reduced myocardial damage and cardiac insufficiency brought on by DOX, highlighting KGN’s dual protective efficacy against DOX-mediated oxidative insults and organelle dysfunction in particular. We verified that KGN might be a viable option for treating DOX-induced cardiotoxicity based on these data. In this work, transthoracic echocardiography was used to evaluate heart function. The successful creation of our disease model was corroborated by the findings of Kuno et al., which showed that DOX lowered EF and FS and increased LVIDd and LVIDs in mice . On the other hand, DOX-induced cardiac insufficiency was lessened by KGN. We discovered that KGN decreased the rise of cTnT in serum, which is a helpful marker for the early diagnosis of myocardial damage . Furthermore, it was confirmed that KGN reduced DOX-induced myocardial damage by using HE and sirius red staining to look at tissue alterations. DIC has a complicated process, but Jeong et al. showed that enhancing antioxidant enzymes can effectively reduce DIC . It has been demonstrated that KGN improves oxidative stress to protect disk degeneration. We therefore want to know whether KGN has the same antioxidant effect in DIC. To evaluate oxidative damage, we quantified MDA and GSH/GSSG ratio, a key indicator of antioxidant capacity, in both murine cardiac tissues and H9C2 cardiomyocytes. In line with our hypothesis, KGN treatment significantly attenuated DOX-induced elevations in MDA levels while restoring the GSH/GSSG balance compromised by DOX exposure. These data point to a mechanism by which KGN shields the heart from oxidative damage. To delineate the cardioprotective mechanism of KGN against DIC, we implemented an integrated multi-omics strategy combining transcriptomic profiling with network pharmacology analysis. The results further confirmed that the mechanism by which KGN attenuates DIC is related to oxidative stress and mitochondria. Oxidative stress is characterized by a disruption of redox equilibrium within biological systems, marked by the excessive generation of ROS. These elevated ROS levels trigger oxidative modifications in essential cellular components, including protein structural alterations, the induction of lipid peroxidation cascades, and oxidative lesions in genetic material such as DNA [ , , ]. KGN decreased the DOX-induced rise in ROS, according to our analysis of the ROS content in H9C2 cells using flow cytometry and DCFH-DA staining. We looked into how DOX affected mitochondria because of the strong connection between oxidative stress and mitochondria . Yang et al. observed that enhancing mitochondrial function considerably reduced DOX-induced cardiac dysfunction . In line with the results, our investigation revealed that DOX changed the morphology of the mitochondria in H9C2 cells. This could be because cardiolipin is only found in the mitochondrial membranes and is most prevalent in cardiomyocytes. However, by generating a nearly irreversible complex with cardiolipin that remains in the inner mitochondrial membranes, the cationic drug DOX, which has a high affinity for it, can interfere with the normal morphology of mitochondria . 
In contrast, the addition of KGN treatment reduces the mitochondrial morphology disruption caused by DOX. The disruption of mitochondrial morphology affects its normal function. The production of ATP is one of the most important functions of mitochondria . Consistent with our hypothesis, KGN treatment effectively prevented the decrease in intracellular ATP concentrations induced by DOX in H9C2 cardiomyocytes. Additionally, our investigation revealed that KGN administration attenuated the loss of mitochondrial membrane potential (ΔΨm) caused by DOX exposure. The maintenance of ΔΨm, which depends on the proton gradient across the inner mitochondrial membrane, is critical for supporting mitochondrial ATP synthesis through oxidative phosphorylation . The disruption of this electrochemical gradient through mitochondrial impairment can compromise cellular viability by reducing both bioenergetic efficiency and ATP production capacity . The mitochondria-targeted antioxidant mitoTEMPO has been reported to attenuate DOX-induced cardiac injury, and it was effective in maintaining the structural integrity of mitochondrial membranes and enhancing ATP production capacity . Our findings demonstrated that the efficacy of KGN was almost the same as that of mitoTEMPO. Thus, we confirmed that KGN protects mitochondria and reduces DOX-induced mitochondrial damage. Despite significant advances in understanding DOX-induced cardiomyopathy, including metformin’s autophagy-mediated cardioprotection and S-propylcysteine’s antioxidative effects, dexrazoxane remains the only US Food and Drug Administration (FDA)-approved therapy, albeit with concerning limitations such as myelosuppression exacerbation, antitumor efficacy interference, and secondary malignancy risks [ , , ]. Our study proposes KGN—a compound renowned for its regenerative efficacy in cartilage repair, tendon healing, and wound regeneration, yet unexplored in cardioprotection—as a novel cardioprotective agent . Crucially, KGN demonstrates a favorable safety profile with no reported severe adverse effects, addressing dexrazoxane’s critical drawbacks. While DOX cardiotoxicity involves multimodal pathomechanisms such as ferroptosis activation and IL-27p28-driven inflammation, our findings provide the first direct evidence that KGN alleviates cardiac damage through oxidative stress amelioration and mitochondrial preservation [ , , ]. Although KGN’s inherent anti-inflammatory properties suggest potential multi-pathway efficacy, transcriptomic analysis indicates that its acute cardioprotection primarily stems from antioxidative and mitochondrial stabilization rather than canonical inflammatory pathway modulation . Whether KGN acts in DIC through multiple pathways still requires further investigation. Looking forward, leveraging established KGN delivery systems, including cardiac-targeted nanoparticles and injectable hydrogels, could enable precise, sustained drug release to enhance therapeutic efficacy while minimizing systemic exposure . Future studies should systematically investigate KGN’s chronic effects and combinatorial regimens with reduced-dose dexrazoxane to optimize clinical translation, building upon its unique mechanistic advantages and proven regenerative versatility. Despite the novel findings of this study, several limitations should be acknowledged. First, the reliance on a single cellular model (H9C2 rat cardiomyocyte cell line) may restrict the generalizability of our conclusions. 
Although H9C2 cells are widely used for in vitro cardiac research, their immortalized nature may result in discrepancies in gene expression profiles and functional responses compared to primary cardiomyocytes under pathophysiological conditions. Future studies incorporating primary cells or human induced pluripotent stem cells (hiPSC)-derived cardiomyocytes could improve the physiological relevance of the findings. Second, the precise molecular targets underlying the observed intervention effects remain unidentified. While our data suggest potential pathway activation, the lack of direct mechanistic evidence limits the translational potential of this strategy. Further investigations utilizing proteomic profiling or genome-wide CRISPR screening are warranted to delineate the key mediators of the intervention.
4.1. Experimental Animal Models; 4.2. Echocardiographic Assessment of Myocardial Function; 4.3. Histological Examination and Staining; 4.4. Enzyme-Linked Immunosorbent Assay (ELISA); 4.5. Cell Culture and Treatments; 4.6. Cell Viability; 4.7. Measurement of GSH, GSSG, and MDA Levels; 4.8. Flow Cytometry Analysis; 4.9. Immunofluorescence Staining; 4.10. Detection of ATP Levels in Cardiomyocytes; 4.11. Predict the Intersection of KGN Active Targets and DIC Disease Targets; 4.12. Gene Ontology (GO) Enrichment Analysis; 4.13. RNA Extraction, Library Preparation, and Sequencing
4.14. Statistical Analysis
Statistical analyses were conducted using GraphPad Prism 9.0 (GraphPad Software, San Diego, CA, USA) with one-way ANOVA and Tukey’s post hoc tests. The results are presented as mean ± SD, with p < 0.05 defining statistical significance.
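An equivalent open-source version of this comparison is sketched below (Python); the group values are placeholders, and the original analysis was performed in GraphPad Prism rather than in code.
# --- Illustrative sketch (Python): one-way ANOVA followed by Tukey's HSD for three groups ---
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

nc = np.array([62.1, 60.5, 64.3, 61.8, 63.0])       # e.g., EF% per mouse (placeholder values)
dox = np.array([41.2, 44.8, 39.5, 42.6, 40.1])
dox_kgn = np.array([55.4, 53.9, 57.2, 54.8, 56.1])

f_stat, p_anova = stats.f_oneway(nc, dox, dox_kgn)

values = np.concatenate([nc, dox, dox_kgn])
groups = ["NC"] * len(nc) + ["DOX"] * len(dox) + ["DOX+KGN"] * len(dox_kgn)
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)

print(f"one-way ANOVA p = {p_anova:.4f}")
print(tukey.summary())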
Experimental protocols involving animal subjects were performed in compliance with the institutional guidelines established by the Laboratory Animal Ethics Committee of Chongqing Medical University Children’s Hospital (Ethical Approval Code: CHCMU-IACUC20241101010). Eight-week-old male C57BL/6J mice were sourced from HFK Biotechnology Co., Ltd. (Beijing, China) and maintained in a controlled environment with standardized photoperiod conditions (12 h light/dark cycle). The animals received unrestricted access to standard feed and autoclaved drinking water throughout the experimental period. A one-week acclimation period was provided prior to the initiation of experiments to ensure proper adaptation. The study was designed with three experimental groups: negative control (NC), doxorubicin (DOX), and doxorubicin with KGN treatment (DOX + KGN), each consisting of ten mice. The experimental protocol involved the daily intraperitoneal administration of KGN (10 mg/kg/day) over seven consecutive days. Subsequently, mice were subjected to weekly intraperitoneal injections of DOX (4 mg/kg) or an equivalent volume of saline vehicle for four weeks. Following the DOX regimen, animals continued to receive either KGN (10 mg/kg/day) or saline for 7 additional days. At the conclusion of the treatment period, heart tissues were harvested for subsequent analyses.
The echocardiographic evaluation of myocardial function was conducted on day 7 following DOX administration. A two-dimensional guided M-mode echocardiography system (VisualSonics, Toronto, ON, Canada) was employed for the assessments. Mice were anesthetized using a continuous inhalation of 2–3% isoflurane to ensure minimal movement and stable heart rates during imaging. Monitoring comprised serial evaluations of ventricular morphology (LVIDs, LVIDd) and functional capacity (EF% and FS%) through standardized echocardiographic protocols. These parameters were automatically calculated using the integrated Vevo 3100 algorithms to ensure accuracy and consistency in the derived measurements.
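For orientation, the conventional M-mode formulas behind these read-outs are sketched below (Python); the Vevo software applies its own validated algorithms, and the diameters used here are placeholders, so this is only the textbook calculation rather than the instrument's implementation.
# --- Illustrative sketch (Python): fractional shortening and Teichholz-based ejection fraction ---
def teichholz_volume(d_mm):
    d_cm = d_mm / 10.0
    return (7.0 / (2.4 + d_cm)) * d_cm ** 3             # left ventricular volume in mL

def fs_percent(lvidd_mm, lvids_mm):
    return (lvidd_mm - lvids_mm) / lvidd_mm * 100.0     # fractional shortening (%)

def ef_percent(lvidd_mm, lvids_mm):
    edv = teichholz_volume(lvidd_mm)                    # end-diastolic volume
    esv = teichholz_volume(lvids_mm)                    # end-systolic volume
    return (edv - esv) / edv * 100.0                    # ejection fraction (%)

print(fs_percent(3.8, 2.2), ef_percent(3.8, 2.2))       # placeholder diameters in mm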
Cardiac specimens underwent immersion-fixation in 4% paraformaldehyde (PFA) followed by paraffin-embedding and serial sectioning at 5 μm thickness. Histomorphological evaluation via H&E staining was combined with a quantitative histochemical approach using sirius red to assess interstitial collagen deposition. For fibrosis analysis, the percentage of sirius red-positive areas was calculated relative to the total field area to determine the collagen volume fraction. Imaging was performed with a Leica microscope (Leica Microsystems, Wetzlar, Germany), and ImageJ bundled with 64-bit Java 8 (National Institutes of Health, Bethesda, MD, USA) was utilized for the quantitative analysis of sirius red-stained sections.
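The area-fraction calculation amounts to the simple ratio sketched below (Python); the mask files are hypothetical stand-ins for the thresholded ImageJ output.
# --- Illustrative sketch (Python): collagen volume fraction from binary staining masks ---
import numpy as np

positive_mask = np.load("sirius_red_positive_mask.npy")   # 1 = sirius red-positive pixel (hypothetical file)
tissue_mask = np.load("tissue_field_mask.npy")            # 1 = pixel within the analyzed field

collagen_fraction = positive_mask.sum() / tissue_mask.sum() * 100.0
print(f"collagen volume fraction = {collagen_fraction:.2f}%")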
The serum levels of cardiac troponin T (cTnT) were quantified using ELISA kits (Jiangsu Meibiao Biological Technology Co., Ltd., Yancheng, China) according to the manufacturer’s protocol. Measurements were interpolated from a standard curve to ensure accurate quantification.
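Standard-curve interpolation for a sandwich ELISA is often performed with a four-parameter logistic (4PL) fit, as sketched below (Python); the concentrations and optical densities are placeholders, and the kit manufacturer's own curve-fitting instructions take precedence over this sketch.
# --- Illustrative sketch (Python): 4PL standard curve and back-calculation of sample concentration ---
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a = response at zero concentration, d = response at saturation, c = inflection point, b = slope
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.01, 0.1, 0.5, 1.0, 5.0, 10.0])      # standard concentrations (placeholder units)
std_od = np.array([0.05, 0.12, 0.45, 0.80, 1.90, 2.40])    # measured OD450 values (placeholders)

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 1.0, 2.5], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    # invert the fitted 4PL curve to map a sample OD back onto concentration
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

print(od_to_conc(1.2, *params))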
The H9C2 cell line (Procell Life Science and Technology Co., Ltd., Wuhan, China) was cultured in DMEM growth medium (Gibco, Grand Island, NY, USA) containing 10% fetal bovine serum and penicillin–streptomycin solution, and was maintained under controlled atmospheric conditions (37 °C, 5% CO₂).
Cellular proliferative capacity was quantified via CCK-8 colorimetric assay (Beyotime, Shanghai, China). H9C2 cells were plated in 96-well microplates (3 × 10³ cells/well) for pharmacological interventions (DOX/KGN). Following 24 h pharmacological exposure, 10 µL CCK-8 reagent was introduced per well with subsequent 60 min chromogenic development. Metabolic activity was assayed by measuring optical density at 450 nm using a BioTek Cytation 5 multimode reader (Agilent Technologies, Santa Clara, CA, USA), with background subtraction using blank controls.
Oxidative stress parameters were quantitatively analyzed through the spectrophotometric determination of reduced GSH, GSSG, and MDA concentrations in both myocardial tissues and H9C2 cells. These assays were performed using commercial kits (Solarbio, Beijing, China), following the manufacturer’s instructions. Absorbance readings were obtained using a BioTek Cytation 5.
Single-cell suspensions were generated via enzymatic dissociation utilizing a ROS assay kit. H9C2 cells were incubated with the ROS reagent (Beyotime, Shanghai, China) for 20 min at 37 °C. Flow cytometric analysis was performed on a BD FACSCanto™ II system (10-color configuration; BD Biosciences, Franklin Lakes, NJ, USA) with subsequent data processing using FlowJo.
H9C2 cardiomyocytes (3.0 × 10⁵/well) were cultured in confocal dishes and exposed to KGN for 2 h prior to a 24 h incubation with 1 μM DOX. Cells were then stained with DCFH-DA (Beyotime, Shanghai, China), Hoechst 33342 (Beyotime, Shanghai, China), the mitochondrial probe MitoTracker™ Red CMXRos (Beyotime, Shanghai, China), and JC-1 (Beyotime, Shanghai, China), each applied under optimized conditions following standard protocols. Post-staining, cellular samples underwent HBSS buffer washing and were subsequently imaged through Nikon A1R confocal microscopy (Nikon, Tokyo, Japan), with quantitative analysis conducted via NIS-Elements Viewer 5.21 64-bit (Nikon, Tokyo, Japan).
Cellular ATP content was determined via a commercial enzymatic assay (Solarbio, Beijing, China) following standardized protocols. Processed cells (1 × 10⁴/sample) underwent ice-cold lysis using a probe sonicator (3 × 10 s pulses), followed by centrifugation (10,000× g, 10 min, 4 °C) to obtain clarified lysates. ATP quantification employed bioluminescent detection normalized against total protein concentration measured by BCA assay, ensuring data standardization across biological replicates.
Potential molecular targets of KGN were identified through integrated computational prediction using the PharmMapper and SwissTargetPrediction platforms. Disease-associated targets for DIC were systematically retrieved from GeneCards and OMIM databases through query optimization with standardized terminology. All gene identifiers were normalized to UniProt accession numbers for cross-database compatibility. Therapeutically relevant targets were subsequently determined through Venn diagram analysis of compound-target and disease-target networks.
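Once identifiers are normalized, the target-overlap step reduces to a set intersection, as in the sketch below (Python); the gene symbols shown are placeholders rather than the actual 367 compound targets or 222 disease targets.
# --- Illustrative sketch (Python): intersecting compound targets with disease targets ---
kgn_targets = {"EGFR", "MAPK1", "AKT1", "HSP90AA1", "PPARG"}   # placeholder gene symbols (compound side)
dic_targets = {"TP53", "AKT1", "TNF", "MAPK1", "SOD2"}         # placeholder gene symbols (disease side)

common = sorted(kgn_targets & dic_targets)                     # candidate therapeutic targets
print(f"{len(common)} common targets: {common}")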
Functional annotation of KGN’s core targets in anti-DIC mechanisms was performed using the Database for Annotation, Visualization and Integrated Discovery (DAVID) bioinformatics database. Human genes were mapped through official gene symbols as primary identifiers. Statistically significant results ( p < 0.05) were filtered and visualized using an interactive bioinformatics platform (Bioinformatics.com.cn), with pathway enrichment and molecular function analyses conducted through DAVID’s computational framework.
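DAVID reports a modified Fisher exact (EASE) score for each term; the underlying logic resembles the simple hypergeometric enrichment test sketched below (Python), shown here with placeholder counts for illustration only.
# --- Illustrative sketch (Python): hypergeometric enrichment test for one GO term ---
from scipy.stats import hypergeom

background_genes = 20000   # size of the annotated background (placeholder)
term_genes = 300           # background genes annotated to the GO term (placeholder)
query_genes = 31           # genes in the submitted list (e.g., the common targets)
overlap = 6                # query genes annotated to the GO term (placeholder)

# P(X >= overlap): chance of drawing at least this many annotated genes at random
p_value = hypergeom.sf(overlap - 1, background_genes, term_genes, query_genes)
print(f"enrichment p = {p_value:.3g}")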
Total RNA was extracted from the experimental specimens with TRIzol Reagent (Thermo Fisher, Waltham, MA, USA) following standard operating procedures. To ensure genomic DNA removal, DNase I treatment (NEB, M0303L) was systematically implemented. RNA quality control included the dual evaluation of purity (A260/A280 ratios measured by Nanodrop OneC, Thermo Fisher, Waltham, MA, USA) and structural integrity (analyzed through LabChip GX Touch, PerkinElmer, Waltham, MA, USA). Quantitative RNA measurements were conducted using Qubit 3.0 fluorometry with the Broad Range RNA detection kit (Thermo Fisher, Waltham, MA, USA). For sequencing library construction, the KCTM Digital mRNA Library Prep Kit (Seqhealth, Wuhan, China) was employed, featuring 12-nucleotide randomized unique molecular identifiers (UMIs) to mitigate PCR amplification artifacts and sequence duplication errors. Protocol optimization included size selection (200–500 bp fragments) through magnetic bead-based purification. Final library quantification preceded high-throughput sequencing on the DNBSEQ-T7 platform (MGI) using 150 bp paired-end configurations.
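The role of the 12-nucleotide UMIs can be illustrated with the toy deduplication routine below (Python); production pipelines (for example UMI-tools) additionally correct UMI sequencing errors, which this exact-match sketch ignores, and the read structure shown is hypothetical.
# --- Illustrative sketch (Python): collapsing PCR duplicates by alignment position + UMI ---
def deduplicate(reads):
    """reads: iterable of (chrom, position, strand, umi, read_id) tuples (hypothetical structure)."""
    seen = set()
    kept = []
    for chrom, pos, strand, umi, read_id in reads:
        key = (chrom, pos, strand, umi)
        if key not in seen:            # keep only the first read observed for each molecule
            seen.add(key)
            kept.append(read_id)
    return kept

example = [("chr1", 100, "+", "ACGTACGTACGT", "r1"),
           ("chr1", 100, "+", "ACGTACGTACGT", "r2"),   # PCR duplicate of r1
           ("chr1", 100, "+", "TTGCAGGTCCAA", "r3")]   # same locus, different molecule
print(deduplicate(example))                            # ['r1', 'r3']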
These findings demonstrate that KGN mitigates DIC through dual mechanisms: attenuating oxidative damage and preserving mitochondrial integrity. In order to determine the exact chemical mechanism behind the action of KGN, more extensive studies must be conducted in the future.
|
Developing a Toolbox of Antibodies Validated for Array Tomography-Based Imaging of Brain Synapses | f0788d7a-267c-4f69-a9a2-f95a03a7c374 | 10748464 | Anatomy[mh] | Array tomography (AT) is a powerful volume microscopy technique for high-dimensional analysis of complex protein populations in cells and organelles, including synapses. AT involves the use of ultrathin serial sections embedded in resin and subjected to multiple rounds of immunofluorescence antibody (Ab) labeling and imaging. AT relies on antibody-based detection of proteins, but because commercial antibodies are typically validated for other applications, they often fail for AT. To identify antibodies with a high probability of success in AT, we developed a novel screening strategy and used this to create a comprehensive database of AT-validated antibodies for neuroscience.
Array tomography (AT) is a powerful technique for the analysis of large populations of synapses with deep proteomic dimensionality . AT involves preparing ultrathin serial sections from brain tissue that has been embedded in acrylic resin, and subjecting this array of sections to multiplex immunofluorescence antibody (Ab) labeling and imaging, followed by multiple rounds of iterative Ab removal, reprobing, and imaging . After many rounds of imaging, sections can be exposed to heavy metal stains, and further imaged with scanning electron microscopy. Ultimately, images are reconstructed into three-dimensional volumes of brain ultrastructure with fluorescent labeling overlays . This technique can simultaneously interrogate the proteomic composition of thousands of synapses with deep dimensionality . Unfortunately, many commercial Abs do not exhibit efficacy and/or specificity when applied to brain samples prepared for AT , hindering efforts to broadly implement this powerful imaging technique. While further refinement of tissue preparation for AT could potentially lead to improved labeling with existing antibodies, such efforts are severely limited by two considerations. First, because multiplexing is a major advantage of the method, one needs to find conditions that will be beneficial for all antibodies. Often, changing one parameter (e.g., less fixation) may improve the performance of an antibody, while decreasing the performance of other antibodies, or resulting in loss of smaller cytosolic antigens and thus hindering their detection. Second, the ability to preserve ultrastructure and use both immunofluorescence and electron microscopy is a key feature of AT. Antigenicity can be improved by resin removal , but this damages the ultrastructure, making it difficult to examine the tissue under the electron microscope . Therefore, we focused our efforts on generating and validating a set of Abs with high efficacy and specificity for brain tissue prepared using current AT protocols. We had previously developed a reliable pipeline for generating, screening and validating monoclonal antibodies (mAbs) for neuroscience research, initially focusing on voltage-gated potassium channels . This approach comprised analyses of numerous candidate mAbs in immunoblot and immunohistochemistry assays against mammalian brain samples . This system reliably yielded mAbs against other ion channels , synaptic scaffolds , adhesion molecules , neurotransmitter receptors , and a variety of other targets. This approach was used to provide highly validated mAbs to the research community in a National Institutes of Health (NIH)-funded effort at the University of California Davis/NIH NeuroMab Facility . A key aspect of Ab validation is to test them for efficacy and specificity directly in the particular application, sample type and under the exact sample preparation and labeling conditions in which they will be subsequently used . When a new immunolabeling technique like AT is introduced to the scientific community, it remains uncertain whether existing Ab collections will be effective and specific in the new application. Initial tests on a set of commercial Abs suggested that only a restricted subset of Abs screened on conventional assays would show efficacy and specificity for ultrathin sections embedded in plastic. Accordingly, identifying which Abs can be used on AT samples for systematic evaluation of brain synapses remains a requirement for broad and effective use of this powerful technique.
Here, we describe efforts aimed at developing a reliable platform for validating Abs for AT. We present the results of using this platform in analyses of existing mAbs developed and/or validated for other purposes, and in new projects specifically aimed at developing novel mAbs for use in AT.
The procedures comprised: hybridoma generation and conventional mAb screening; preparation of cell pellet arrays for the AT cell-based proxy screen; immunolabeling and analyses of the CBS proxy assay; preparation of arrays for AT from mouse neocortex; human neocortical tissue preparation for AT; Lowicryl HM20 embedding for conjugate immunofluorescence-SEM AT applications; immunofluorescent labeling and analysis of brain AT sections; synaptic antibody characterization tool (SACT) analysis; immunogold labeling of osmium-treated samples; and immunogold labeling of Lowicryl HM20-embedded tissue. Each is described in turn below. Hybridoma generation and conventional mAb screen: Mouse immunizations, splenocyte isolation, hybridoma generation and conventional screening were performed following previously described protocols, except that electrofusion was used to generate hybridomas. Two ELISA assays, one against the purified protein immunogen and one against transfected heterologous cells overexpressing the full-length target protein, were used in parallel as the primary screen. A selected set of ELISA-positive candidates was taken through subsequent screens, including immunocytochemistry on transfected cells, immunoblots on brain subcellular fractions, and immunohistochemistry on conventionally prepared brain sections.
The cell-based proxy screen (CBS) was developed from a previously reported protocol for preparing cultured cells for transmission electron microscopy . Briefly, COS-1 cells were cultured overnight in 10-cm tissue culture plates until a confluency of ∼70% and then transfected with mammalian expression plasmids using Lipofectamine 2000 (ThermoFisher, catalog #11668030) per manufacturer’s instructions. Cells were either co-transfected with plasmids encoding enhanced green fluorescent protein (EGFP) and the target protein of interest, or with plasmid encoding the target protein fused to a reporter tag (EGFP, FLAG). Transfected cells were incubated at 37°C/5% CO 2 for 72 h, then harvested in Versene with manual pipetting to release adherent cells. Cells from multiple culture plates were pooled into a single 15-ml tube and centrifuged at 1000 × g for 5 min at room temperature (RT). The subsequent pellet was transferred to a glass vial and fixed for 2 h at RT in AT fixative [4% formaldehyde (FA) in 10 m m PBS (138 m m NaCl, 2.7 m m KCl) with 2.5% sucrose, made fresh from 8% aqueous FA; Electron Microscopy Sciences (EMS), catalog #157-8)]. The pellet was rinsed three times for 10 min each in PBS containing 50 m m glycine, followed by dehydration using 5-min incubations in solutions of 50% ethanol (1×) and 70% ethanol (3×). The pellet was then washed twice for 5 min each in a solution of 3 parts LR White acrylic resin (hard grade, SPI supplies catalog # 2645) and 1 part 70% ethanol, and then four times for 5 min each in 100% LR White at 4°C. The pellet was left in LR White overnight at 4°C, then transferred to a gelatin capsule filled with LR White resin, capped, and incubated for 24 h at 55°C. To generate semi-thin (400 nm) sections, the plastic “bullet” containing embedded cells was manually trimmed and then sectioned on an ultramicrotome (Leica, Ultracut UCT). Sections were collected using a thin metal loop, placed in single wells of a collagen-coated glass bottom 96-well plate (Corning 4582), air dried and stored in the dark at RT until screening.
Semi-thin (400 nm) sections in 96-well plates were rinsed in 50 mM glycine in Tris-buffered saline (TBS; 50 mM Tris, 150 mM NaCl, pH 7.6) for 5 min at RT. Glycine was removed and sections were incubated in blocking buffer [0.05% Tween 20 (EMS, catalog #25564) and 0.1% BSA (EMS, catalog #25557) in TBS] for 5 min at RT and then incubated in primary Ab in blocking buffer for 2 h at RT. Following three washes in TBS for 5 min each, sections were incubated in goat anti-mouse IgG secondary Ab conjugated to Alexa Fluor-594 for 30–45 min at RT. Following secondary labeling, sections were washed in TBS for 5 min each at RT. Sections were imaged using a 40×/1.2 NA objective on a Zeiss AxioObserver Z1 microscope with an AxioCam HRm digital camera controlled with Axiovision software (Zeiss). Target labeling by each Ab was assessed by comparing the fluorescent signal in the red (Alexa Fluor-594) and green (EGFP) channels for degree of colocalization (specificity of target label) and for labeling intensity. Labeling was rated on a scale of 0 (no label) to 4 (intense and complete colocalization).
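Although scoring was performed by eye, the underlying criterion—candidate Ab signal confined to transfected, EGFP-positive cells—can be expressed quantitatively. The following minimal Python sketch is offered only as an illustration of that criterion; it was not part of the screening workflow described here, and the image arrays and percentile thresholds are assumptions.

```python
import numpy as np

def cbs_colocalization_index(red, green, red_pct=99.0, green_pct=95.0):
    """Fraction of candidate-Ab signal (Alexa Fluor-594, `red`) that falls within
    transfected cells marked by EGFP (`green`). Inputs are aligned 2-D arrays from
    the same field of view; the percentile thresholds are arbitrary illustrative choices."""
    red_mask = red > np.percentile(red, red_pct)
    green_mask = green > np.percentile(green, green_pct)
    if red_mask.sum() == 0:
        return 0.0  # no detectable candidate labeling (would score 0 visually)
    # Fraction of above-threshold candidate signal restricted to marker-positive cells.
    return float((red_mask & green_mask).sum()) / float(red_mask.sum())
```

An index near 1 corresponds to candidate labeling restricted to transfected cells, the pattern that received high visual scores; values near 0 indicate labeling of nontransfected cells or background.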
Arrays were prepared following the protocol described previously. All animal procedures were performed in accordance with the Administrative Panel on Laboratory Animal Care at Stanford University. Briefly, pentobarbital-anesthetized mice were subjected to intracardial perfusion with 4% FA in PB (0.1 M phosphate buffer, pH 7.4) made fresh from powdered paraformaldehyde. Following removal of the perfusion-fixed brain, small (1 mm³) blocks of cerebral cortex were dissected and immediately transferred to AT fixative for 1 h at RT followed by overnight incubation at 4°C. Tissue blocks were then washed, dehydrated, and embedded in LR White resin according to the steps described above for CBS pellets. After embedding, blocks were manually trimmed, and serial sections (70 nm) were cut with an ultramicrotome (Leica, Ultracut UCT) and collected onto gelatin-coated glass coverslips. Sections were air dried and stored in the dark at RT until ready to be labeled.
Fresh human tissue and autopsy tissue prepared for AT from previous studies were used. Briefly, human cortical samples were rinsed in saline and placed in RT fixative (4% FA in PB) for 1 h. The tissue was further fixed for 23 h at 4°C, for a total time of 24 h in fixative. The tissue was then transferred to PBS with 0.01% sodium azide and stored at 4°C before further processing. The tissue was dehydrated and embedded following the same protocol as for the CBS pellet, except that incubation times in the different ethanol solutions and in resin were increased to 10 min.
All animal procedures were performed in accordance with the University of North Carolina animal care committee’s regulations. After deep anesthesia with pentobarbital, adult mice (three to four months old) were perfusion-fixed with a mixture of 2% glutaraldehyde/2% FA dissolved in 0.1 M phosphate buffer (pH 6.8). Brains were removed and postfixed overnight at 4°C in the same fixative. Following extensive washes in buffer, 200-μm-thick Vibratome sections were collected, incubated on ice on a shaker with 0.1% CaCl₂ in 0.1 M sodium acetate for 1 h, then cryoprotected through 10% and 20% glycerol, and overnight in 30% glycerol in sodium acetate solution. The next day, small tissue chunks from neocortex were dissected out and quick-frozen in a dry ice/ethanol bath. Freeze-substitution was performed using a Leica AFS instrument with several rinses in cold methanol followed by substitution in a 2–4% solution of uranyl acetate in methanol, all at −90°C. After a 30-h incubation, the temperature was slowly raised to −45°C and the tissue was infiltrated with Lowicryl HM20 over 2 d. Capsules containing tissue chunks were then exposed to UV and gradually warmed to 0°C. Polymerized capsules were removed from the AFS apparatus and further exposed to UV at RT for an additional day, to complete curing of the plastic.
Ultrathin (70 nm) sections on coverslips were incubated in 50 m m glycine in TBS for 5 min at RT, followed by blocking solution (0.05% Tween 20 and 0.1% BSA in TBS) for 5 min at RT, and then incubated in primary Abs diluted in blocking solution overnight at 4°C. The reference antibodies are listed in . Following three washes in TBS for 5 min each, sections were incubated in cross-adsorbed Alexa Fluor dye-conjugated goat secondary Abs (ThermoFisher Scientific), diluted 1:150 in blocking solution for 30 min at RT. The mAbs were detected using Alexa Fluor-594-conjugated goat anti-mouse IgG (H+L; Invitrogen catalog #A-11032) and the reference Ab with an Alexa Fluor-488-conjugated goat Ab against the appropriate host (Invitrogen, catalog #A-11034 anti-rabbit, catalog #A-11073 anti-guinea pig or catalog #A-11039 anti-chicken). Subsequently, labeled sections were washed three times in TBS for 5 min each, followed by three rinses in water for 30 s each. Coverslips with sections were mounted onto glass slides using SlowFade Gold Antifade mountant with DAPI (Invitrogen #S36964) and imaged the same day using a 63×/1.4 Plan-Apochromat 1.4 NA oil objective on a Zeiss AxioImager Z1 microscope with an AxioCam HR digital camera controlled with Axiovision software (Zeiss). Image ZVI files were converted to TIFF and uploaded into Fiji Imaging software for analysis. Images of multiplex labeling from at least three serial sections were aligned using the DAPI signal with the MultiStackReg plugin in FIJI and immunolabeling was assessed for proper localization against a reference marker. Quality of labeling was assessed by experienced observers and rated on a scale of 0 (no label or off target only) to 4 (target only label).
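Serial-section alignment was performed on the DAPI channel with the MultiStackReg plugin in Fiji. Purely for illustration, a translation-only equivalent could be sketched in Python with scikit-image and SciPy; this is an assumed substitute for convenience, not the plugin actually used, and it ignores the rotational/affine terms that MultiStackReg can handle.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_serial_sections(dapi_stack, other_channels):
    """Translation-only alignment of serial sections using the DAPI channel.

    dapi_stack     : sequence of 2-D DAPI images, one per serial section
    other_channels : dict mapping channel name -> sequence of 2-D images (same order)
    Returns dict of aligned image lists, keyed by channel name plus 'DAPI'.
    """
    ref = dapi_stack[0]
    aligned = {name: [imgs[0]] for name, imgs in other_channels.items()}
    aligned_dapi = [ref]
    for i in range(1, len(dapi_stack)):
        # Estimate the (row, col) offset of section i relative to the first section.
        offset, _, _ = phase_cross_correlation(ref, dapi_stack[i])
        aligned_dapi.append(nd_shift(dapi_stack[i], offset))
        # Apply the same offset to every immunofluorescence channel of that section.
        for name, imgs in other_channels.items():
            aligned[name].append(nd_shift(imgs[i], offset))
    aligned["DAPI"] = aligned_dapi
    return aligned
```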
To identify top Ab candidates for synaptic target localization we used the SACT program , which applies an unsupervised probabilistic detection algorithm to identify fluorescent puncta and determine whether they are located at synapses. For each candidate Ab, the size, volume, and density of immunolabeled puncta was measured and compared with similar measures made using an AT synaptic marker reference Ab in the same sections (Synapsin-1 or PSD-95). To evaluate the sensitivity and specificity of a candidate Ab we plotted the target synaptic density of each candidate (defined as the number of synapses detected with the candidate Ab per unit volume) and the target specificity ratio (TSR), defined as the number of synapses detected by the candidate Ab relative to the total number of Ab puncta .
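For orientation, the two SACT-derived metrics used here reduce to simple ratios over detected puncta. The sketch below (Python, with assumed centroid arrays and an arbitrary matching radius) is a schematic restatement of the definitions given above, not the SACT implementation itself.

```python
import numpy as np

def sact_summary(candidate_puncta, reference_synapses, image_volume_um3, match_radius_um=0.3):
    """Schematic computation of target specificity ratio (TSR) and target synapse density.

    candidate_puncta   : (N, 3) array of punctum centroids for the candidate Ab (um)
    reference_synapses : (M, 3) array of synapse locations defined by the reference
                         marker (e.g., Synapsin-1 or PSD-95) in the same sections (um)
    match_radius_um    : distance within which a punctum counts as synaptic (assumed value)
    """
    # A candidate punctum is counted as synaptic if a reference synapse lies within the radius.
    d = np.linalg.norm(candidate_puncta[:, None, :] - reference_synapses[None, :, :], axis=2)
    at_synapse = d.min(axis=1) <= match_radius_um

    detected_synapses = int(at_synapse.sum())
    tsr = detected_synapses / len(candidate_puncta)        # synaptic puncta / all candidate puncta
    target_density = detected_synapses / image_volume_um3  # synapses detected per unit volume
    return tsr, target_density
```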
For immunogold EM of osmium-treated tissue embedded in LR White, the samples were prepared similarly to Immunofluorescence AT, except that the fixative contained 0.1% glutaraldehyde in addition to the 4% FA, and a postfixation step was added with osmium tetroxide (0.1%) and potassium ferricyanide (1.5%) with rapid microwave irradiation (PELCO 3451 laboratory microwave system with ColdSpot; Ted Pella), three cycles of 1 min on–1 min off–1 min on at 100 W, followed by 30 min at RT. The immunolabeling protocol was similar to the immunofluorescence labeling, with two additional steps in the beginning: treatment for 1 min with saturated sodium metaperiodate solution in dH 2 O (to remove osmium) and 5 min with 1% sodium borohydride in Tris buffer to reduce free aldehydes resulting from the presence of glutaraldehyde in the fixative. A 10-nm gold-labeled goat anti-mouse IgG secondary Ab (SPI Supplies) was used at 1:25 for 1 h. After washing off the secondary Ab, the sections were treated with 1% glutaraldehyde for 1 min to fix the Abs in place. The sections were poststained with 5% uranyl acetate for 30 min and lead citrate for 1 min.
Thin sections (∼80 nm) of adult mouse cortex embedded in Lowicryl HM20 were cut and collected on nickel mesh grids. Grids were blocked in 1% bovine serum albumin in Tris-buffered saline pH 7.6 with 0.005% Tergitol NP-10, and incubated overnight at 21–24°C with the primary Ab. Grids were then rinsed, blocked in 1% normal goat serum in Tris-buffered saline pH 8.2, and incubated in goat anti-mouse secondary Abs conjugated to 10- or 20-nm-diameter gold particles (Ted Pella). Grids were counterstained with 1% uranyl acetate, followed by Sato’s lead, and examined in a Philips Tecnai transmission electron microscope at 80 KV, and images collected with a 1024 × 1024 cooled CCD (Gatan).
The Results are organized as follows: efficacy and specificity of commercial Abs against synaptic proteins for array tomography; finding application-specific anti-PSD-95 mAbs via a retrospective screen of a prior monoclonal project; application-specific generation and validation of mAbs for AT; Homer1L mAb generation as an exemplar mAb project; generating and validating mAbs for array tomography (AT-focused screening of an anti-Homer1L mAb project); the AT CBS assay effectively predicts mAbs that label target proteins in AT brain sections; using the AT CBS assay to screen Abs for immunoelectron microscopy; and application of the AT CBS assay to the development of novel mAbs. Application of the AT CBS assay to development of novel mAbs: We have completed 15 separate mAb projects, each targeting a distinct protein, in which we used the CBS proxy assay to identify candidate mAbs that are effective for AT-based imaging. More than 1900 samples were screened with the CBS assay, and 259 CBS-positive parents (CBS score ≥2) were identified. Of these CBS-positive parents, 207 were subsequently tested for AT on brain sections, and 124 (60%) were also positive for AT on brain sections. Compared with the other assays that we performed, the CBS assay had a higher predictive value for identifying candidate mAbs positive for AT on brain sections. Thus, for the three projects (L113 Homer1, L109 Calbindin, and L106 Gephyrin) in which all top ELISA-positive candidates were selected for screening on every assay, the CBS screen had a higher positive predictive value for mAbs suitable for brain AT than any other assay, as well as a lower false omission rate. Positive predictive value was calculated as the number of candidates that were positive both in the CBS assay and on brain AT (true positives), as a percentage of all CBS-positive candidates. False omission rate was calculated as the number of candidates negative in the CBS assay but positive for brain AT, as a percentage of all CBS-negative candidates, and therefore reflects the likelihood of missing positive brain AT candidates. Overall, of the 15 projects, 12 yielded AT-validated mAbs (Extended Data). Two projects did not yield any positive candidates on the AT CBS screen, although conventional IHC positives were obtained from both projects. Another project (L125, targeting Synapsin-3) failed for reasons unrelated to AT, as all of the obtained candidate mAbs were found to exhibit cross-reactivity to Synapsin-1. To verify that we were not inadvertently excluding candidate mAbs with potential AT utility by applying the AT CBS assay as a filter, for the two projects that did not yield any AT CBS-positive candidates we also tested a set of candidate mAbs that had high scores from the IHC screen, but none of these yielded labeling in the AT brain assay. For many projects the CBS assay enabled us to develop more than one AT-validated mAb against the same target protein, yielding either mAbs of different mouse IgG subclasses (L106, L109, L122) or mAbs of different target isoform specificity (L113, L127). These results support the reliability of the AT CBS assay for identifying a subpopulation of Abs that have a high likelihood of labeling their target in AT brain sections. The overall list of Abs tested and the results can be found in Extended Data.
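These two screening metrics reduce to proportions over the 2 × 2 table of CBS and brain AT outcomes. The minimal sketch below (Python) restates the calculation; the example counts are taken from the exemplar Homer1L project described later in the text (21 of 35 CBS-positive and 5 of 109 CBS-negative candidates scored >2.5 on brain AT) and are used here only for illustration.

```python
def screen_metrics(tp, fp, fn, tn):
    """Positive predictive value and false omission rate for a proxy screen.

    tp: CBS-positive candidates that also label brain AT sections
    fp: CBS-positive candidates that fail on brain AT
    fn: CBS-negative candidates that nevertheless label brain AT
    tn: CBS-negative candidates that also fail on brain AT
    """
    ppv = tp / (tp + fp)             # fraction of CBS positives confirmed on brain AT
    false_omission = fn / (fn + tn)  # fraction of CBS negatives that were missed positives
    return ppv, false_omission

# Illustrative counts from the Homer1L (L113) project reported in the text.
ppv, fomr = screen_metrics(tp=21, fp=35 - 21, fn=5, tn=109 - 5)
print(f"PPV = {ppv:.0%}, false omission rate = {fomr:.1%}")  # PPV = 60%, FOR = 4.6%
```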
Initial efforts to identify AT-appropriate Abs that successfully labeled target proteins in sections from FA-fixed and LR White-embedded mouse neocortex relied on ad hoc sampling of the vast array of preexisting Abs from commercial sources. Over 300 commercial Abs, selected based on available literature and personal communication, were evaluated for efficacy and specificity for AT (Extended Data). Criteria for success included immunolabeling that matched the known cellular expression and subcellular localization of the target protein. When synaptic proteins were targeted, the subcellular distribution of the immunofluorescence of the tested Ab was evaluated by assessing colocalization with well-known reference synaptic Abs and other AT-validated antibodies (Extended Data). All of the reference antibodies were characterized extensively and used in several previously published studies. Potential background or nonspecific labeling was evaluated using “exclusion” markers defined for each target protein; for example, inhibitory synapse markers when testing Abs against proteins thought to be restricted to excitatory synapses. The labeling pattern of each Ab was compared with that of the nuclear marker DAPI to control for background nuclear immunolabeling. Scoring was performed by visual inspection of images by a trained observer. We found that even among widely used commercial antibodies generally considered to yield optimal results, up to 50% failed completely (139/306 tested; Extended Data). Even more alarming was the observation that for 32% of the targets (63 out of 196) we failed to identify an AT-suitable commercial antibody. Therefore, we set out to design a more focused and application-specific screening process. Extended Data Figure 1-1 (doi: 10.1523/ENEURO.0290-23.2023.f1-1): Table of all antibodies tested for AT. RRID, Research Resource Identifier. The “Tested In” column shows the species in which the antibody was tested: M, mouse; H, human; R, rat; Z, zebrafish. The results in the next column, “Target Label,” apply to all species in which the antibody was tested, unless explicitly stated. Testing was performed on formaldehyde-fixed tissue embedded in LR White, except for the antibodies indicated with *, which require glutaraldehyde in the fixative. The last column, “Performance on Lowicryl,” indicates results of testing on tissue fixed with a combination of formaldehyde and glutaraldehyde and embedded in Lowicryl HM20. Extended Data Figure 1-2 (doi: 10.1523/ENEURO.0290-23.2023.f1-2): Commonly used antibodies for AT, grouped by target. These antibodies have been validated in both mouse and human biopsy brain tissue, except where indicated. The great majority of antibodies perform well both with formaldehyde fixation and with a combination of formaldehyde and glutaraldehyde, except several that require glutaraldehyde in the fixative. RRID, Research Resource Identifier.
We previously performed a project to develop mAbs recognizing the mammalian synaptic marker PSD-95, employing a region of human PSD-95 (amino acids 77–299 of Uniprot accession number P78352-2) as the immunogen. This resulted in a set of 96 independent samples that displayed immunoreactivity against a recombinant PSD-95 protein fragment by ELISA, and to varying degrees on immunoblots and by immunohistochemistry against brain samples. From these 96 samples we selected one mAb, K28/43, that exhibited efficacy and specificity for reliable labeling of mammalian PSD-95 in brain tissue sections and cultured neurons. However, all 96 ELISA-positive samples had been archived as frozen hybridomas for potential future use. In a subsequent analysis of mAbs from this project aimed at identifying mAbs recognizing zebrafish PSD-95, we observed that binding to mammalian PSD-95 was not predictive of labeling the zebrafish ortholog. Whereas clone K28/43 robustly recognized mammalian PSD-95, it did not recognize zebrafish PSD-95, although other mAbs from this same project did. Human and zebrafish PSD-95 (Uniprot accession number A0A8M3ASX4) share 89% amino acid identity within the region used as the immunogen, with distinct regions of high and low sequence identity. This provides a likely basis for mAbs with distinct epitopes within the collection, originally selected for their binding to human PSD-95, displaying differences in binding to zebrafish PSD-95. Our success in rescreening existing mAbs within this collection for a new purpose suggested that this would be a viable approach to identify mAbs not only for new targets, but also for new applications, without the need to launch new mAb projects from scratch. Accordingly, we evaluated mAbs for labeling of PSD-95 in samples processed for AT. Following the strategy outlined above, controls to assess specificity included co-labeling with different reference Abs against the same target (a rabbit monoclonal anti-PSD-95 Ab, Cell Signaling #3450) and Abs against the adjacent presynaptic compartment of excitatory synapses (a rabbit monoclonal anti-synapsin Ab, Cell Signaling #5297). Because the postsynaptic densities of synapses usually span at least two adjacent ultrathin sections (70-nm thickness each), the consistency of labeling was further assessed by examining serial sections for the presence of immunolabel. Using these criteria, we observed robust and specific labeling of excitatory synapses with K28/43 in AT sections prepared from mouse cortex and from freshly obtained resected human neocortex. However, no specific labeling with K28/43 could be detected in neocortical samples from human autopsy brain, although the tissue was fixed and embedded using the same protocol. Therefore, we expanded our search to include other mAbs from the K28 anti-PSD-95 project that, like K28/43, had been identified as labeling brain tissue prepared using conventional immunohistochemistry protocols. We found that unlike K28/43, mAbs K28/37, K28/74, and K28/77 labeled synapses in both fresh and autopsy human brain samples, as well as in mouse neocortex samples. Variations in the preparation of the AT samples also affected the performance of the mAbs. Thus, while K28/43 labeled conventional mouse neocortex AT sections, it did not label AT sections after tissue treatment with osmium, a preparation condition commonly used to preserve ultrastructure and provide contrast for EM.
However, mAbs K28/38, K28/74, K28/86, and K28/91 labeled osmium-treated tissue in AT sections, and also subsequently yielded specific immunogold labeling. Overall, immunogold labeling of osmium-treated tissue was not very efficient and usually resulted in low labeling density, prompting us to use a different method of tissue preparation for EM purposes, as detailed below. Results from these post facto analyses of an existing collection of PSD-95 mAbs illustrate that application-specific re-evaluation of mAbs can identify clones with strong and specific labeling that may not be identified in other assays. In addition, they highlight the potential of retrospective analyses of other archived mAb projects to identify mAbs with characteristics suitable for use in AT.
Our generation of mAbs for neuroscience employs a stepwise screening workflow that incorporates the tissue culture aspects of classical hybridoma generation, expansion and archiving (immunization, hybridoma fusion, cell culture, cryopreservation) and parallel screening (ELISA, immunocytochemistry on transfected heterologous cells), while also including assays (immunoblots and immunohistochemistry) performed on mammalian brain samples . The above experience in rescreening the PSD-95 mAb clones suggested that including samples prepared for AT would help identify mAbs useful for that application. To define AT-compatible mAbs, we first added an additional screen comprising immunolabeling and analysis of AT brain sections into our mAb pipeline . However, we found that screening with AT on brain sections was too slow and labor intensive, given the large number of samples that needed to be prepared, immunolabeled, and evaluated. Moreover, developing an alternative cell-based proxy AT assay represented an opportunity to reduce the need for animal tissues. We therefore developed a rapid and straightforward cell-based proxy assay for mAbs able to recognize their target in samples prepared as for AT, employing transiently transfected cells as used in the immunocytochemistry screening step ( , bottom).
An exemplar mAb project (the L113 project) targeted Homer1L, an important component of the postsynaptic density of excitatory synapses . We immunized a set of mice with a recombinant protein comprising the C-terminal two-thirds of the mouse Homer1L protein (amino acids 121–363 of accession number Q9Z2Y3-1), a primary sequence that is 97.8% identical to human Homer1L and 97.1% identical to rat Homer1L. This fragment contains an N-terminal region present in all Homer1 splice variants, and a C-terminal region unique to the longest splice variant Homer1L . Next, we performed two sets of ELISAs on hybridoma conditioned culture medium (tissue culture supernatants; “TC supes”) harvested from individual wells of 32 × 96-well hybridoma culture plates. One set of ELISAs was against the purified fragment of Homer1L that was used to immunize the mice, and the other was against heterologous cells that had been transiently transfected to express the full-length mouse Homer1L protein and then fixed with 4% FA and permeabilized with 0.1% Triton X-100 (standard conditions for immunocytochemistry). We used the combined results from these two ELISAs to inform the selection of 144 hybridoma cultures for further screening, from the 2944 samples evaluated. A scatter plot comparing results from these two distinct Homer1L ELISAs is shown in ; data points for the 144 candidates selected for expansion in tissue culture and further analysis in the screening workflow are shown in light purple. Samples of TC supes harvested from the expanded cultures of these 144 selected hybridoma samples were assayed in parallel for efficacy and specificity using fluorescent immunocytochemistry against transiently transfected heterologous cells expressing full-length Homer1L ; immunoblots on brain samples ; and DAB-HRP immunohistochemistry on fixed, free floating brain sections . The positive candidates from these assays represent distinct, partially overlapping subsets of the original 144 ELISA positive clones , with different subsets of mAbs exhibiting efficacy in each assay. In parallel, we also subjected these 144 TC supes to a novel AT-specific cell-based proxy assay as described in the following section.
Our standard mAb generation pipeline did not contain a single assay that reliably predicted which candidate mAbs would successfully label AT-prepared brain tissue (see above). Moreover, screening all ELISA-positive clones (typically 96 or 144) on conventional AT sections of plastic-embedded brain samples, which included manual scoring by trained observers, was excessively slow and labor-intensive. This was especially problematic because we strive to identify the best candidates for subcloning of the hybridoma cell cultures to monoclonality before their cryopreservation, which needs to occur within one week after the initial ELISA screen . Therefore, we developed a novel cell-based proxy screen with a high predictive value for mAbs that would ultimately prove to be effective on labeling brain tissue in plastic-embedded AT sections. We hypothesized that the major factor distinguishing antigenicity in AT brain sections from conventional IHC is the process (dehydration, resin infiltration, heat curing) involved in the embedding of AT samples in array plastic. Our standard mAb screening workflow employs immunofluorescence labeling of the target protein expressed in heterologous cells as an important screen ( and ; ; ). By employing transiently transfected cells, the samples assayed are a mosaic population of cells with high levels of target protein expression adjacent to nonexpressing cells. Since the identity of the transfected cell subpopulation is apparent by the use of an independent marker, it is easy to determine which candidates selectively label target-expressing cells. We predicted that plastic embedding of transiently transfected heterologous cells expressing target protein would provide a similarly quick, inexpensive, and effective screen for candidate TC supes that exhibit target protein labeling under AT conditions. For this cell-based screening assay, we transiently co-transfected heterologous COS-1 cells such that ≈50% of the cells were transfected to express both the target protein and an independent transfection “marker” to monitor transfection efficiency and to identify the transfected cells. Transfection markers were either encoded on separate plasmids (e.g., EGFP) co-transfected with target expression plasmids, or were encoded as tags fused to the target protein (e.g., an epitope tag or a fusion protein). For the CBS AT assay, transiently transfected cells were harvested 3 d after transfection, pelleted by centrifugation, fixed in suspension, re-pelleted and the fixed cell pellet embedded in LR White plastic. A portion of the same transfection cocktail was used to transfect cover slips containing cultured COS-1 cells that were subjected to conventional immunocytochemistry, to verify successful coexpression of both the marker and the target protein. Embedding of the cell pellet in AT plastic was performed using the same protocol as used for embedding brain tissue in plastic for AT (4% FA fixation, dehydration in an ethanol series, embedding in LR White resin and curing at 55°C for 24 h). Semithin (400 nm) sections that contained a mosaic of cells overexpressing the target protein and marker adjacent to cells devoid of target protein expression were cut and deposited into the wells of a collagen-coated clear bottom 96-well plate, which was used to screen up to 94 candidate TC supes (using the remaining two wells for positive and negative controls). 
The nucleus of each cell was labeled with Hoechst 33258, and bound primary Ab was detected using Alexa Fluor-594-conjugated anti-mouse IgG secondary Abs. Subsequent visual analysis was based on determining the presence of Alexa Fluor-594 signal and whether it was specific to the subset of cells with marker expression in green .
A proxy screen should be rapid, simple, and inexpensive, but most importantly it must be able to identify Abs that are effective when employed for their intended end use. We interrogated positive and negative samples from the AT CBS assay for labeling efficacy and specificity against brain samples in AT plastic. We found a high degree of concordance: positive samples from the AT CBS assay were much more likely to be scored as positives against AT brain sections than the population as a whole, and negatives from the CBS assays were rarely positive against AT brain sections . For example, in the exemplar Homer1L mAb screen, of the 144 candidate mAbs screened by CBS, 60% of the CBS-positive candidates (CBS score >2) gave good brain AT labeling (brain AT score >2.5), compared with only four out of 96 CBS-negative candidates (CBS score 0–1). To further assess the value of the AT CBS assay we used a previously described synaptic antibody characterization tool developed to quantitatively assess synapse Ab specificity in AT . Using this tool we measured the target specificity ratio (TSR) which quantifies the ratio of target Ab label (e.g., Homer1L candidate mAbs) colocalizing with a reference synaptic marker, for example Abs against Synapsin-1 . We observed a positive correlation between TSR scores for brain AT labeling and the respective CBS scores, supporting that the CBS assay was highly predictive in identifying brain AT compatible candidate mAbs . The target synapse density, i.e., the number of synapses detected with the antibody per unit volume reflects the sensitivity of the antibody, and was another metric used to characterize candidate antibodies. Antibodies with high brain AT score tend to have higher synapse density and higher TSR as measured with the synaptic antibody characterization tool. Together these observations illustrate that the CBS assay can identify Ab samples with a high likelihood of successful brain AT labeling.
During our initial screening of available polyclonal and monoclonal Abs against synaptic proteins for AT, we had also observed that a high proportion of Abs effective for immunofluorescence AT on LR White-embedded sections also perform well on Lowicryl HM20-embedded sections (82%, 73 out of 89 Abs). Because Lowicryl embedding provides EM ultrastructure superior to that of LR White, we wondered whether the AT CBS assay could also screen for Abs effective for postembedding immunogold EM of brain samples in Lowicryl. We first tested whether the mAbs identified in the AT CBS screen, performed on cells embedded in LR White plastic, would also exhibit effective and specific immunofluorescence labeling of samples prepared using a protocol similar to that used to prepare samples for analysis by electron microscopy (brain samples fixed in FA and glutaraldehyde and embedded in Lowicryl plastic). A total of 17 positives from the CBS evaluation of samples prepared in LR White plastic were tested on AT sections of mouse neocortex fixed in FA and glutaraldehyde and embedded in Lowicryl plastic; of these, 12 (71%) were positive (Extended Data). Five of these mAbs were further evaluated at the EM level using immunogold labeling. The performance of an antibody was considered good if it labeled the expected target with a high signal-to-noise ratio. Both of the Homer mAbs identified using the AT CBS assay (L113/13 and L113/130) performed well when used for immunogold EM on Lowicryl HM20-embedded sections from mouse brain (Extended Data Figs. 8-1 and 8-2). Three different mAbs against gephyrin (a postsynaptic protein at inhibitory synapses) that were similarly identified using the AT CBS assay were also confirmed as suitable for immunogold EM. Thus, the AT CBS assay of LR White-embedded samples also has a high predictive value for identifying Abs effective for postembedding immunogold EM on brain tissue embedded in Lowicryl HM20. Extended Data Figure 8-1 (doi: 10.1523/ENEURO.0290-23.2023.f8-1): L113/13 immunogold EM on mouse tissue; examples of L113/13 immunogold-labeled synapses from mouse neocortex embedded in Lowicryl HM20. Extended Data Figure 8-2 (doi: 10.1523/ENEURO.0290-23.2023.f8-2): L113/130 immunogold EM on mouse tissue; examples of L113/130 immunogold-labeled synapses from mouse neocortex embedded in Lowicryl HM20.
The lack of highly validated Abs for research is a widely-recognized problem that has forced laboratories to employ extensive in-house Ab testing before their use . Here, we describe a systematic, rapid, and effective approach to validate Abs for brain AT, leading to a robust set of mAbs available to the research community (Extended Data ). We introduce a simple and low-cost proxy assay with a high predictive value for Abs effective and specific for immunolabeling of AT brain sections. Unlike direct screening on AT brain samples, the cell-based proxy screen does not use samples from animals, reducing animal use. The visual analysis of the CBS assay is also more straightforward than AT screening. It utilizes a heterologous expression system, allowing specific candidate Ab labeling of transfected cells to be easily distinguished from nonspecific labeling of neighboring nontransfected cells. Visual comparison of transfected to nontransfected cells is much simpler than evaluation of AT brain samples, which must be performed at the level of individual synapses and may be confounded by synapse variability, low levels of expression or unknown distribution. Finally, because target proteins are overexpressed in transfected heterologous cells, the CBS assay is more sensitive at detecting Abs that may be of low concentration in hybridoma supernatants early in the development pipeline. This is particularly advantageous as the mAb candidate screening employs supernatants from hybridoma cultures, often with low levels of Ab, at a time in the development workflow when emphasis is on maintaining the health of the hybridoma cells before their cryopreservation, not on maximizing mAb accumulation in the medium. Moreover, at this stage hybridomas are not typically monoclonal; accordingly, these supernatants can contain multiple representations of target-specific mAb at very different concentrations, which may be lower than typically present in final mAb preparations after subcloning to monoclonality and growing under culture conditions designed to yield maximal mAb accumulation in the medium. Therefore, an elevated level of target protein expression facilitates successful labeling during early-stage screening. In the experiments reported here, we included AT CBS screens in the workflow for 15 different projects, each aimed at developing mAbs against a distinct target. Each project employed screens aimed at developing mAbs for use in multiple downstream applications including transfected cell immunocytochemistry, brain immunoblots, immunohistochemistry on FA fixed conventional brain sections, and brain AT ( and ). Comparing the outcomes with Euler diagrams ( ; https://www.eulerdiagrams.org/eulerAPE/ ), shows that performance in one of these applications does not predict success in other applications , highlighting the need for application-specific screening. This confirmed our experience with many commercially available Abs, which were validated in applications other than AT, and often failed when used for AT. Our results suggest that the false omission rate of the AT CBS assay relative to brain AT is quite low (≈5–15%). However, the AT CBS assay does yield “false positives” (candidate mAbs that work in the AT CBS assay but not for brain AT), likely because the AT CBS assays involve overexpression of the target protein in the transfected cells. 
Moreover, the use of heterologous cells means that the target protein may not undergo the same posttranslational modifications as it does in neurons, and proteins that interact with the target protein in neurons may not be expressed; accordingly, epitopes targeting posttranslational modifications or protein-protein interactions may be accessible in heterologous cells but not in brain samples. This is especially problematic for synapse-associated proteins, which are extensively regulated by posttranslational modifications and participate in complex and densely packed networks of interacting proteins . This dense protein network can also result in more general Ab access problems, since access of the Ab to the synaptic compartment may be limited. Ineffective immunolabeling of synaptic targets because of these considerations led to the development of numerous protocols for enhancing Ab access in conventionally prepared brain sections . While ultrathin samples such as those prepared for AT are expected to have fewer issues with macro level Ab access to the synaptic compartment, there may still remain access problems at the molecular level, and these samples would presumably retain fixative-stabilized protein-protein interactions not present in heterologous cells, resulting in ineffective labeling of occluded epitopes at synapses but not in heterologous cells. Despite the occasional false-positives, the proxy CBS assay was effective at filtering out the numerous candidates that score as negatives in both cell-based and brain-based AT. In the exemplar Homer1 project there was a marked increase in the success rate in brain AT evaluation for those judged positive in the CBS assay (21/35 ≈ 60% of candidates with brain AT scores >2.5) compared with the success rate for the overall pool of 144 candidates (22/144 ≈ 15% of candidates with brain AT scores >2.5) that would have required evaluation had the CBS assay filter not been employed. Moreover, there was an extremely low false omission rate in the CBS assay (five candidates with a brain AT score >2.5 out of 109 candidates with CBS score ≤1.9). In contrast, other standard assays (conventional immunocytochemistry against transfected heterologous cells, immunoblots, conventional immunohistochemistry against brain sections) lacked predictive value for AT-effective Abs . Our results suggest that the same principles for Ab screening can be applied to other postembedding brain immunolabeling applications, including immunogold EM. In recent years, large-scale volume EM has provided important insights into the microscale organization of brain and principles of neuron connectivity . Complementing such expansive ultrastructural data with molecular information is rare , due in large part to the lack of Abs suitable for postembedding immunoelectron microscopy. Our experiments suggest that the CBS proxy assay can also be used to identify Abs with high probability of success for immunogold EM, thus providing an efficient preliminary screen for suitable reagents for this highly demanding and resource-intensive application. Expanding the repertoire of synaptic Abs for electron microscopy applications will further increase the ability to collect correlated molecular and ultrastructural data in future connectomics studies. We suggest that this assay strategy could be employed whenever substantial collections of Abs against a given target need to be evaluated for AT or immunoelectron microscopy.
Fatty acid biomarkers reveal the interaction between two highly migratory species in the Southern Humboldt System: the swordfish and its prey, the jumbo squid | 51da9c58-8ab1-4805-912d-c6068abde061 | 11927563 | Digestive System[mh] | Trophic marine ecology describes predator vs . prey interactions of heterotrophic organisms occurring at different levels of the marine food webs, and at the same trophic level (primary, secondary, top), and especially in the nutrients transfer between and within levels ( ; ). In this context, several studies related to understanding trophic ecology are based on: (i) direct observation in the field ( ; ), (ii) analyses of stomach contents ( ), (iii) collection of feces from predators ( ), (iv) determination of body remains of prey such as bones, otoliths, exoskeletons, and (v) analyses of fatty acid (FAs) profiles and isotopes as biomarkers ( ; ; ). FAs have recently been used as biomarkers of the trophic interactions and feeding habits of marine organisms from temperate and cold waters ( ; ; ; ; ; ). This has enabled us to explore the trophodynamics of these species in their environments, extending the temporal and spatial scales of these research topics ( ; ; ). In turn, from the stoichiometric and ecophysiological perspective, FAs are important at the structural (as carbon and hydrogen molecules) and functional level (as energy reserves) of organisms ( ). In particular, their greatest importance in marine animals is linked to long-chain (LC) polyunsaturated essential fatty acids (PUFAs) (LC-PUFAs: EPA (C20:5n3), DHA (C22:6n3)), which cannot be biosynthesized de novo by organisms, and therefore, must be obtained from the diet and/or consumption of prey that have these LC-PUFAs available in their bodies. Subsequently, these essential biomolecules are conservatively stored in the predators’ tissues and are involved in fundamental physiological processes (growth and reproduction) that require energy ( ; ). In this bioenergetic context, it has recently been reported that the relationship of FAs between the predator and its potential prey increases with taxonomic specificity ( ), that is, predators may present a degree of trophic specialization characterized by the consumption of prey with high fat contents, as occurs in the trophic interaction between the jumbo squid ( D. gigas ) and its prey, the red squat lobster ( G. monodon ) ( ). Together, these analyses addressing multidisciplinary aspects (biochemical, physiological, and ecological) allow us to more clearly understand the interaction between the predator and the prey, considering a spatial and temporal scale with a holistic view of the total environment ( ; ). In the marine food webs, active predation can be categorized according to the degree of choice and search for prey as follows: (i) selective, large species that eat specific prey with high energy contents (energy maximizer), but in low quantities and (ii) non-selective generalists that consume large amounts of various prey with lower energy contents (time maximizer) ( ). In an energetic context, the optimal foraging theory has been used to explain or predict the diet of species related to election of prey with high energetic contend ( ). In particular, active predation strategies describe how marine organisms maximize the energy they obtain from prey and minimize the time spent during the entire feeding process, from the consumption of prey, its ingestion and finally the assimilation of nutrients. 
In this context it is also important to consider that organisms can also carry out other energy-demanding physiological and behavioral activities while feeding, such as tolerating the exposure to variations of abiotic factors such as temperature, oxygen, and salinity ( ) and behaviors such as escaping from predators, searching for a mate, among others ( ). The optimal foraging theory also considers that the nutrients obtained through the consumption of food and/or prey are stored as energy reserves ( i.e ., FAs) in certain tissues including the liver, muscle, or gonads, and that these are subsequently used as bioenergetic fuel for highly energy-demanding fundamental physiological processes, such as growth and reproduction ( ; ). Species of higher trophic level, mostly described as strict carnivores, sustain their lifestyle and high energy demands by consuming only certain prey with high carbon and nitrogen contents ( i.e ., lipids and proteins, respectively), and also feed only specific parts of the prey ( i.e ., energy storage organs: livers, hepatopancreas, and digestive glands) that they capture through the use of specialized mouth structures ( ; ). In the context of this energy maximization strategy, the same trend has been described in terrestrial ( e.g ., lions capture and consume only the liver of their prey) ( ), and also in marine environments ( ; ). Particularly, here the swordfish slashes the jumbo squid with its sword and mainly consumes its digestive gland, probably because of its high fat content, which has been recorded in the field by scientific observers ( ; ; ). Considering the fact that only the presence of cuts on the body of their prey has been published, presumably due to the use of the sword ( ; ), it is necessary to carry out detailed comparative studies that use FAs as trophic biomarkers, consider the same space-time window ( i.e ., a simultaneous comparison) and include the interaction of these two fishery resources considered as “higher level species” and “highly migratory” that interact with one another in the southern Humboldt System, such as the swordfish and its prey, the jumbo squid. In the Humboldt Current System (HCS), considered one of the most important coastal upwelling ecosystems on the planet ( ), the model species studied (jumbo squid: D. gigas and swordfish X. gladius ) are considered highly migratory resources ( ; ; ; ) that interact trophically with one another ( i.e ., D. gigas as the main prey of swordfish; ; ; ). Due to the highly migratory activity of swordfish, this species requires large amounts of energy provided by food, which is stored in its organs (liver, muscle, gonad) in the form of lipids and FAs to support its fundamental physiological processes of maintenance, growth, and reproduction ( ). Numerous studies analyzing the stomach content of swordfish in HCS have indicated that their diet is mainly based on cephalopods ( ; ; ; ). Despite the capture restrictions of these species (event sampling in a narrow time window), these studies indicate how cephalopods (in this case D. gigas ) as food and/or prey items can be an important source of energy for the swordfish X. gladius ( ; ). However, details regarding how lipids and FAs vary in the organs of the prey and predator are still unknown, as is their potential use as trophic biomarkers, which could reveal more about their dynamics, and the potential physiological role that lipids and fatty acids play. 
In this context, identifying the FA profiles of different organs of the jumbo squid could provide key information on the assimilation of these essential nutrients and on how they are subsequently transferred to and conservatively stored in the different organs of its predator X. gladius, to then be used as bioenergetic fuel in key physiological processes of the energy balance model (homeostasis, reproduction, growth). As previously mentioned, studies to date have focused on intra-individual variations of FAs, but not on the simultaneous interaction between predator and prey (i.e., comparison of FAs between tissues and species). Therefore, this is the first study to present comparative results for the FAs of the predator and its prey captured on the same space-time scale. Given the selective feeding habit of the swordfish on the organs (e.g., the digestive gland) of the jumbo squid, and its low capacity to biosynthesize FAs, these biomolecules are expected to be conservatively incorporated and stored in swordfish tissues; consequently, some degree of similarity between the FA profiles found in the tissues of the predator and its prey is expected. The objective of this study was therefore to evaluate, using FA analyses as trophic tracers, the trophic interaction and ecophysiological aspects of two highly migratory resources that coincide in one of the most productive marine ecosystems on the planet, the Humboldt Current System.
The Materials and Methods comprise: ethical declaration; capture, sample processing and transport to the laboratory; fatty acid profiles; and statistical analysis. Ethical declaration: This research was conducted in accordance with the Act on Welfare and Management of Marine Animals and complies with current Chilean legislation on the care and handling of fishery resources. To avoid suffering of the animals during capture and processing, specimens were euthanized by thermal shock through rapid freezing (−20 °C) (Law 20.380; Ministry of Health and Ethics Committee, Chile).
As part of the monitoring program of “Fisheries Project of Highly Migratory Resources: Biological-Fishing Aspects, 2022” of Instituto de Fomento Pesquero (IFOP), adult specimens of D. gigas ( N = 32) and X. gladius ( N = 31) were captured in the same fishing area where these species coincide temporally and spatially in the southern Humboldt System (Autumn-Winter: 34°–36°S & 73°–76°W) ( ; ). It is important to mention that, due to the low availability of specimens in the field, restrictive capture for this study area (catch by jigs) and fishing ban periods ( ), it was only possible to capture specimen samples in their habitat (far from the coast) for a narrow window of time. This permitted only opportunity sampling once a year. On board, the animals were measured (jumbo squid: 66 ± 5.07 cm of mantle length; swordfish: 216 ± 16.31 cm of lower jaw fork length), sexed and dissected following the methodology described by for jumbo squid, and for swordfish by . Five to 10 g of fresh tissue samples were extracted from the organs (jumbo squid: digestive gland (32), gonad (32), mantle muscle (32); swordfish: liver (three), gonad (three), muscle (31)) and preserved in 250 mL thermo-hermetic plastic flasks using dry ice. Subsequently, the samples were transported in hermetic boxes with dry ice to the Laboratory of Hydrobiological Resources of the Universidad Catolica de la Santisima Concepcion de Chile, where they were cold homogenized, sonicated and then dried and/or exposed to lyophilization (lyophilizer FDU-7012 Operon, for 48 h at –80 °C). Afterwards, a 20–30 mg dry weight (DW) tissue sample of each organ was extracted for fatty acid profile analyses.
First, the total lipid content was obtained following a previously described methodology recently applied to highly migratory marine species. For this, a 20 mg dry weight tissue sample was weighed with a Sartorius digital analytical balance (d = 0.1 mg), maintained at a constant cold temperature (ca. 5 °C) and immersed in 5 mL of dichloromethane:methanol (2:1) solution for 12 h. It was then sonicated (AC-120H equipment, MRC) at the same cold temperature for 15 min, after which 4 mL of potassium chloride solution (0.88% KCl in ultra-pure water) was added and the mixture was centrifuged at 1,500 rpm for 5 min (FASCIO TG1650-S). Afterwards, the lower phase (i.e., containing the total lipids) was extracted and transferred to amber vials that had been previously weighed (i.e., dried empty weight). Finally, the total lipids were quantified by evaporating the solvent under a stream of nitrogen gas in a sample concentrator (109A YH-1; Glas-Col) and subtracting the empty dry weight of each vial from the new dry weight of the vial containing the extracted lipids. The fatty acid profiles of the samples were determined following a previously used method. Thus, the fatty acid methyl esters (FAMEs) of each sample were obtained from the previously quantified total lipid extracts. In brief, 1 mL of the lipid extract was esterified by incubating it in methanolic sulfuric acid at 70 °C for 1 h in a Thermo-Shaker (MRC model DBS-001). After that, the fatty acids were rinsed three times with 6 mL, 3 mL, and 3 mL of n-hexane, and concentrated to 1 mL in amber vials using a sample concentrator (MD 200) and nitrogen. FAMEs were measured with a gas chromatograph (GC, Agilent, model 7890 A) equipped with a DB-225 column (J and W Scientific; 30 m long, 0.25 mm internal diameter, 0.25 μm film). The temperature program recommended for this GC column was used for sample injections: briefly, the oven temperature was held at 100 °C for 4 min and then increased at 3 °C/min to 240 °C, where it was held for 15 min. Individual FAMEs were identified by comparison with known standards of fatty acids of marine origin (certified material, Supelco 37 FAME mix 47885-U). Using chromatography software (ChemStation; Agilent), FAMEs were quantified by means of the response factor relative to the internal standard (a C23:0 fatty acid added prior to transmethylation).
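Quantification against the C23:0 internal standard reduces to a simple proportion between peak areas. The following Python sketch illustrates the arithmetic only; the response factor, peak areas and masses used in the example are placeholders, not measured values from this study.

```python
def fame_concentration(peak_area, is_area, is_mass_mg, sample_dw_g, response_factor=1.0):
    """Concentration of one fatty acid (mg FA per g dry weight) from GC peak areas,
    using the C23:0 internal standard added before transmethylation.

    peak_area       : integrated area of the FAME peak of interest
    is_area         : area of the C23:0 internal-standard peak
    is_mass_mg      : mass of internal standard added to the sample (mg)
    sample_dw_g     : dry weight of tissue used for the extraction (g)
    response_factor : detector response of the analyte relative to C23:0 (assumed 1.0 here)
    """
    fa_mass_mg = (peak_area / is_area) * is_mass_mg * response_factor
    return fa_mass_mg / sample_dw_g

# Placeholder example: a DHA peak twice the area of the standard, 0.02 mg C23:0 added,
# 25 mg (0.025 g) of dry tissue -> 1.6 mg FA g DW-1.
print(fame_concentration(peak_area=2.0e6, is_area=1.0e6, is_mass_mg=0.02, sample_dw_g=0.025))
```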
To evaluate differences in total fatty acids (saturated, SFA; monounsaturated, MUFA; polyunsaturated, PUFA) among organs for each species, Kruskal-Wallis (H) tests were performed. To compare the fatty acid profiles of X. gladius and its prey D. gigas, the fatty acids that contributed at least 5% of the total weight were selected, and their concentrations were expressed as percentages and transformed as log(x + 1). A Bray-Curtis resemblance matrix was then computed and used for a multivariate principal coordinate analysis (PCoA), with which the fatty acid profiles detected in the different organs of the prey D. gigas and its predator X. gladius were compared. Differences among groups were tested with PERMANOVA, which, unlike parametric tests such as ANOVA, does not require their distributional assumptions; a significance level of 0.05 was used. In addition, an analysis of similarity (ANOSIM) was performed to verify statistically the dissimilarities among tissues indicated by PERMANOVA; the ANOSIM R statistic approaches zero (R = 0) for similar groups and one (R = 1) for highly dissimilar groups. Subsequently, a similarity percentage analysis (SIMPER) was carried out to identify the fatty acids contributing most to the differences observed among the studied organs. All multivariate analyses were performed with Primer V6 software.
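The multivariate analyses were run in Primer V6. For readers without access to that software, an approximately equivalent open-source workflow could be sketched as follows (Python with pandas and scikit-bio); this is an assumed illustration of the same steps, not the analysis actually performed, and SIMPER, which scikit-bio does not provide, is omitted.

```python
import numpy as np
import pandas as pd
from skbio.diversity import beta_diversity
from skbio.stats.ordination import pcoa
from skbio.stats.distance import permanova, anosim

def fatty_acid_ordination(fa_table: pd.DataFrame, groups: pd.Series, min_pct=5.0):
    """fa_table: rows = samples (organ x individual), columns = fatty acids (mg FA / g DW).
    groups: organ/species label for each sample, in the same row order."""
    pct = fa_table.div(fa_table.sum(axis=1), axis=0) * 100     # convert to % of total FA
    pct = pct.loc[:, pct.mean(axis=0) >= min_pct]              # keep FAs contributing >= 5%
    transformed = np.log1p(pct)                                # log(x + 1) transform

    dm = beta_diversity("braycurtis", transformed.values,
                        ids=list(fa_table.index.astype(str)))  # Bray-Curtis resemblance matrix
    ordination = pcoa(dm)                                      # principal coordinate analysis
    perm = permanova(dm, grouping=groups.values, permutations=999)  # pseudo-F and p-value
    ano = anosim(dm, grouping=groups.values, permutations=999)      # global R statistic
    return ordination, perm, ano
```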
The Results comprise the fatty acid profile of D. gigas, the fatty acid profile of X. gladius, and a comparison of the fatty acids found in the predator X. gladius and its prey D. gigas. A comparison of the fatty acids found in the predator X. gladius and its prey D. gigas: When comparing the FA profiles of the organs of the predator with those of its prey, differences in the grouping of FA profiles within and between the analyzed species were evident. In particular, the spatial arrangement of FAs revealed the following groups: (i) the gonad and muscle of the jumbo squid (with the contribution of the FAs C20:1, C22:6n3 and C20:5n3), (ii) the three swordfish tissues together with the jumbo squid digestive gland (with C18:1n9), and (iii) a large dispersion in the FA profile of the swordfish gonad (with C16:0). By means of PERMANOVA, statistically significant differences were observed among the different tissues of the predator (X. gladius) and its prey (D. gigas) (PERMANOVA: pseudo-F(5,132) = 44.433, p = 0.001). In turn, in the ANOSIM, the FAs detected in the liver of X. gladius were different from those in the gonad (R-ANOSIM = 0.988) and muscle (R-ANOSIM = 0.979) of its prey D. gigas. When comparing the gonad of X. gladius with the tissues of its prey, the gonad (R-ANOSIM = 0.721) and the mantle muscle (R-ANOSIM = 0.745) of D. gigas were found to be very different, while the digestive gland of D. gigas was quite similar to the X. gladius gonad. Finally, when comparing the muscle profile of the predator X. gladius with the tissues of its prey D. gigas, similarity was observed with the digestive gland (R-ANOSIM = 0.572), while the gonad (R-ANOSIM = 0.996) and mantle muscle (R-ANOSIM = 0.977) were very different. In particular, R values close to 0.5 were recorded when comparing the FAs of predator tissues with those of the prey's digestive gland, indicating similarity. When evaluating the percentage contribution to the fatty acid profiles by means of SIMPER, the fatty acids DHA (C22:6n3), palmitic (C16:0), EPA (C20:5n3) and oleic (C18:1n9) contributed the most to the similarity in the digestive gland of D. gigas, with a cumulative contribution of 70.39%. In the gonad, the fatty acids that contributed the most were DHA (C22:6n3), EPA (C20:5n3), palmitic (C16:0), gadoleic (C20:1) and stearic (C18:0), with a cumulative percentage of 95.72%. In the muscle, the fatty acids with the greatest contribution were DHA (C22:6n3), palmitic (C16:0), EPA (C20:5n3) and stearic (C18:0), with a cumulative percentage of 92.22%. In these three organs, the polyunsaturated fatty acid C22:6n3 (DHA) presented the greatest contribution in the digestive gland (25.63%), gonad (27.51%) and muscle (44.88%). In all of the evaluated organs (liver, gonad, muscle) of X. gladius, the fatty acid with the greatest contribution to the percentage of similarity was oleic (C18:1n9). The fatty acids with the greatest contribution in the liver were oleic (C18:1n9), palmitic (C16:0) and stearic (C18:0), with a cumulative percentage of 81.41%; the polyunsaturated fatty acid DHA (C22:6n3) presented a low percentage contribution in this organ (2.83%). In the gonad, the fatty acids oleic (C18:1n9), palmitic (C16:0), DHA (C22:6n3) and stearic (C18:0) had a high contribution, with a cumulative percentage of 93.28%. The fatty acids with the greatest contribution in the muscle were oleic (C18:1n9), palmitic (C16:0) and DHA (C22:6n3), with a cumulative percentage of 71.64%. In turn, in this last organ the monounsaturated fatty acids (palmitoleic, C16:1; gadoleic, C20:1) showed a low contribution.
D. gigas The highest concentration of total fatty acids (SFA, saturated fatty acid; MUFA, monounsaturated fatty acid; PUFA, polyunsaturated fatty acid) was found in the digestive gland, followed by the gonad and muscle ( ). Significant differences were observed between tissues in each of the total fatty acid classes (K-W: SFA: H(2,96) = 76.098, p < 0.05, MUFA: H(2,96) = 81.4899, p < 0.05; ANOVA: PUFA: F(2,93) = 177.131, p < 0.05) ( ). N6 polyunsaturated fatty acids (n6 PUFA) were only present in the digestive gland, which presented the greatest diversity of fatty acids (N = 26 total; saturated fatty acids, SFA: N = 10; monounsaturated fatty acids, MUFA: N = 7; polyunsaturated fatty acids, PUFAn6: N = 5, PUFAn3: N = 4). The digestive gland presented a total content of PUFA of 144.95 ± 30.97 mg FA g DW −1 , with a higher percentage, 27.54% of docosahexaenoic (DHA, C22:6n3) and 14.69% of eicosapentaenoic (EPA, C20:5n3) ( ). The total content of MUFA found was 71.15 ± 12.06 mg FA g DW −1 , the fatty acids with the highest percentage were oleic (C18:1n9) with 10.84%, gadoleic (C20:1) with 4.84% and palmitoleic (C16:1) with 4.34% ( ). The total content of SFA was 90.36 ± 17.12 mg FA g DW −1 , the highest percentages of fatty acids were palmitic (C16:0) with 17.57%, myristic (C14:0) with 4.86% and stearic (C18:0) with 4.06% ( ). In turn, the gonad presented a PUFA content of 12.57 ± 3.16 mg FA g DW −1 . The fatty acids with the highest percentage in the gonad were DHA (C22:6n3) with 24.7% and EPA (C20:5n3) with 21.95% ( ). The total MUFA content was 4.03 ± 1.35 mg FA g DW −1 , and gadoleic (C20:1) showed the highest percentage with 12.36% ( ). The total SFA content was 8.38 ± 1.83 mg FA g DW −1 , and the SFA with the highest percentage was palmitic (C16:0) with 17.38% ( ). In the muscle, the total PUFA content was 8.43 ± 2.59 mg FA g DW −1 , only two n3 fatty acid chains were observed, including DHA (C22:6n3) with 31.96% and EPA (C20:5n3) with 12.13% ( ). The total MUFA content was 5.46 ± 0.56 mg FA g DW −1 , the MUFAs with the highest percentage were palmitoleic (C16:1) with 20.46% and gadoleic (C20:1) with 5.68% ( ). The total SFA content was 5.23 ± 1.41 mg FA g DW −1 , and the SFA with the highest percentage was palmitic (C16:0) with 18.46% ( ).
X. gladius The evaluation of the fatty acids found in the different organs of the swordfish X. gladius resulted in the liver being the organ with the highest total content of fatty acids, followed by muscle, and the gonad showed the lowest fatty acid content ( ). Statistically significant differences were observed among tissues in each class of total fatty acid (ANOVA: SFA: F(2,34) = 27.765, p < 0.05, MUFA: F(2,34) = 28.975, p < 0.05, PUFA: F(2,34) = 9.8180, p < 0.05) ( ). The liver presented the highest total MUFA content with 140.09 ± 54.04 mg FA g DW −1 , the MUFAs with the highest percentages were oleic (C18:1n9) with 48.22% and gondoic (C20:1n9) with 7.34% ( ). The SFAs presented a total content of 70.08 ± 12.78 mg FA g DW −1 ; the SFAs with the highest percentages were palmitic (C16:0) with 10.24% and Stearic (C18:0) with 9.48% ( ). The total PUFA content was 23.97 ± 4.97 mg FA g DW −1 . The n3 PUFA content was higher than the n6 PUFA content, in this organ the arachidonic acid (ARA, C20:4n-6) was not registered in comparison to the other organs (gonad and muscle). Among the n3 PUFAs, the highest percentage was DHA (C22:6n3) with 7.15% ( ). The total fatty acid content in the gonad was 20.21 ± 3.91 mg FA g DW −1 . This organ had a higher content of PUFA with a total of 7.72 ± 4.91 mg FA g DW −1 , of which the n3 PUFA content was higher than the n6 PUFA content, which was low. The n3 PUFAs that presented the highest percentages were the DHA (C22:6n3) fatty acids with 24.68% and EPA (C20:5n3) with 5.10% ( ). The total MUFA content in the gonad was 6.74 ± 4.60 mg FA g DW −1 ; the most abundant MUFAs were oleic (C18:1n9) with 21.24%, followed by gondoic (C20:1n9) with 4.84% ( ). The SFAs presented a total content of 5.75 ± 2.10 mg FA g DW −1 , the most predominant SFAs were palmitic (C16:0) and Stearic (C18:0), each with 10.96% and 5.42%; respectively ( ). In the muscle of X. gladius , MUFAs were the most abundant fatty acids with a total content of 170.55 ± 16.28 mg FA g DW −1 , the MUFAs with the highest percentages were oleic (C18:1n9), gondoic (C20:1n9) and palmitoleic (C16:1), with 38.69%, 5.40% and 3.80%, respectively ( ). The SFAs presented a total content of 49.12 ± 10.65 mg FA g DW −1 , with a high predominance of palmitic (C16:0) with 15.29% and stearic (C18:0) with 7.27% ( ). The PUFAs showed the lowest total content of fatty acids with 34.51 ± 10.53 mg FA g DW −1 . The n3 PUFAs demonstrated a higher total content in relation to the n6 PUFAs. Among the n3 PUFAs, DHA (C22:6n3) was the most abundant fatty acid with 14.65%, followed by EPA (C20:5n3) with 2.35%. In relation to the n6 PUFAs, Linolelaidic (C18:2n6t) presented a low abundance with 0.74% ( ).
This study explored the trophic interaction (predator vs . prey) between two highly migratory resources of the Humboldt System (jumbo squid and swordfish prey) by comparing the fatty acid profiles of their tissues and/or organs. Our findings revealed that analyzing the fatty acids used as trophic biomarkers among these species could be a useful tool that may reflect the predator dietary intake. In our study, and as reported for several species of marine animals classified as large predators (seals: Lobodon carcinophaga, Leptonychotes weddellii, Hydrurga leptonyx ( ); white shark Carcharodon carcharias ( ); fishes: Kajikia audax, Makaira nigricans, Coryphaena hippurus ( )), the lipids and consequently the key fatty acids (precursors: oleic acid (C18:1n9), linoleic (C18:2n6), gamma-linolenic (C18:3n6) alpha-linolenic acid, ALA (C18:3n3), ARA (C20:4n6); and essentials: EPA (C20:5n3), DHA (C22:6n3)) are acquired through the consumption of whole prey and/or parts of its body rich in these lipid biomolecules. Considering a time period that may vary among species ( ; ), these lipid components are incorporated conservatively in the tissues of predators, to be subsequently used in various fundamental physiological processes that strongly influence the survival of individuals and consequently the stability of their populations ( ; ; ). At an intra-individual and/or intra-specific level, in our study the comparison of the fatty acid profiles of the analyzed tissues of each species (in jumbo squid: muscle and gonad vs . digestive gland; in swordfish: liver vs . muscle and gonad) presented some significant differences, while the “intra-individual” comparisons showed no significant differences (PUFA of the gonad vs . mantle muscle in D. gigas ; while in X. gladius the muscle and liver in SFA, MUFA and PUFA). These can be explained by the integrative functionality of each organ depending on the role it plays in fundamental physiological processes such as maintenance, reproduction, and growth ( ; ). Similar integrative physiology links between organs, functionality and type of stored fatty acid have been described for both highly migratory species X. gladius ( ), D. gigas ( ; ), as well as for other bony fish ( Thunnus tonggol , ) and chondrichthyans ( ). In turn, in this interspecific comparison (between the organs of these species), a moderate level of proximity and/or similarity of the fatty acid profile was observed between some jumbo squid and swordfish organs. In particular, the digestive gland of the jumbo squid (prey), an organ rich in precursor and essential PUFAs, presented a similarity with the tissues and/or organs of its predator, mainly with the liver and muscle of the swordfish. According to the ANOSIM results, an adequate similarity was between the digestive gland of the jumbo squid and the muscle and gonad (rather than the liver) of the swordfish. This may indicate that swordfish when consuming large jumbo squids would prefer mainly the digestive gland, which is the nutrient storage organ and contains high amounts of essential fatty acids. This finding may be corroborated by the specialist hunting habit of the swordfish on the jumbo squid ( ), which first stab and cut their prey with the sword of the mouth structure of X. gladius ( ; ), and is then devoured right in this area of the squid body, where large amounts of fats and oils characterized predominantly by PUFAs are found ( ; ). 
Due to (i) the high energetic cost of biosynthesizing PUFA fatty acids in large predatory fish ( ; ; ), (ii) the availability of these PUFAs in their prey ( ; ), and (iii) the specialization of their mouth structures to capture only prey with certain energetic attributes (e.g., the swordfish X. gladius slashes the jumbo squid D. gigas with its sword and mainly consumes its digestive gland, a large prey organ rich in fats) ( ; ; ), we propose a trophic context of acquisition of essential fatty acids from the jumbo squid (prey) followed by their conservative storage and mobilization in the organs of the swordfish (predator). Our results, according to the analysis of similarity of the fatty acid profiles of the digestive gland of the jumbo squid and the organs of swordfish, could reveal, as a testable hypothesis for future studies, the pathway of use and the storage sequence of these FAs (i.e., first processed in the liver, then stored in muscle, and finally transferred to the gonad of swordfish). Identifying this storage sequence of FAs may be key to the functioning of swordfish since they are used in different physiological functions. Here, the predominant fatty acids in the muscle (SFAs, MUFAs) can be used as an energy source during periods of food absence or scarcity ( ), while essential long chain PUFAs (EPA (C20:5n3), DHA (C22:6n3)) are key to the overall success of the reproductive process from the maturation of the gonad to the formation of the embryo and larva ( ). A similar response has been described for species of teleost fish from the Humboldt Current system ( ). In turn, according to our results on the dynamics of the FAs in jumbo squid organs, we propose that these are first stored in the digestive gland and then transferred to other organs. This dynamic of fatty acids in jumbo squid has also been reported in winter ( ; ) and spring ( , ; ) off the coasts of Chile and Peru. On the other hand, FAs (16 carbon PUFA, 18:4n3, 22:4n3, 22:5n3), as found in previous studies on highly migratory fish ( ; ), were detected only in traces (i.e., in very small amounts). A plausible explanation for the low presence of these FAs in the analyzed samples may be the highly dynamic role that these FAs play in the biosynthesis of long-chain highly unsaturated fatty acids, described through the complex metabolic pathway of Sprecher (for details of the model see: ; ). In the context of energy reserve dynamics, seasonal differences in FAs found in the organs of the studied species have also been reported in previous studies ( , ; ). The findings revealed by the present study showed the same trend for this time of year (winter), but with some slight variations between consecutive years (2021, 2022). Given that the study area (South Eastern Pacific Ocean: SEPO) presents pronounced seasonal variations in oceanographic conditions (temperature, upwelling) that may modulate the availability of food and/or prey in the environment ( ; ), and consequently influence the capacity of these species to accumulate lipid reserves, future comparative studies should expand the time window of the analyses to include more seasons of the year (spring-summer). Also, due to the logistical limitations of the study, including the sampling event and restrictions on capturing specimens, future research should complement our findings and observations (i.e., swordfish eating jumbo squid), considering techniques (isotopes, barcoding, etc.)
that can reveal long-standing feeding habits (seasonal, over years) ( ; ; ). Provided that predator-prey interactions between species occur at the spatial/temporal level of both of the evaluated species in the SEPO, the trophic migration described for swordfish (from 50°N to 50°S, ) may coincide spatially and temporally with the reproductive migration of the jumbo squid in this area of the SEPO ( ; ; ). Considering this background, a trophic cascade effect could be generated, in which the prey consumed by jumbo squid is finally reflected in the fatty acid profile of swordfish. Within the spatiotemporal context of trophodynamics in the marine food web, future studies should prioritize understanding current trophic dynamics, particularly the transfer and storage of fatty acids in predator-prey interactions. Subsequent research should then examine how the feeding habits ( i.e ., prey selection) of species in the Humboldt System have shifted under different climate and oceanographic conditions, such as El Niño Southern Oscillation (ENSO) and its phases ( ; ). Additionally, it is essential to assess how the phenology of these species may have been altered, as discussed in match-mismatch theory ( ; ), within the context of climate change. In this context, the phenomena of climate change in marine environments is characterized by an increase in water temperature and hypoxia events, which can modulate variations in the distribution range of species, as well as their density; and consequently, generate changes in the predator vs . prey interactions that occur in the Humboldt Current System ( ; ; ).
Finally, our findings on the use of fatty acids as biomarkers of the interaction between two highly migratory resources of the Humboldt System may reveal a moderate degree of preference of swordfish in preying on jumbo squid, where precursor fatty acids predominate ( i.e ., ALA (C18:3n3) and ARA (C20:4n-6)) along with essential long chain PUFAs (EPA (C20:5n3), DHA (C22:6n3)) for their growth and reproduction. We conclude that this feeding strategy (preference on jumbo squid) and integrative physiological strategy of incorporating specific fats for their subsequent use in different organs for fundamental physiological processes (growth and reproduction) could elucidate a possible convergent physio-energetic strategy in the lipid storage and use of essential biomolecules present in species of higher vertebrates considered top predators in their habitat (marine or terrestrial, as appropriate). Thus, further research is needed to fully understand the extent of these convergent strategies. As mentioned above, this strategy has been widely described for top predators in the food webs of terrestrial environments, but scarcely for top predators in the marine food web; therefore, our study is a pioneer in revealing this type of trophic interaction between two top marine species in the Humboldt Current System.
Effect of neurally adjusted ventilator assist versus pressure support ventilation on asynchronies and cardiac function in pediatric liver transplantation | d7824e05-19de-4655-be5a-955792f45f7a | 11871333 | Surgical Procedures, Operative[mh] | Liver transplantation (LT) is the treatment of choice for acute/chronic end-stage liver disease . In the postoperative period several risk factors such as intraoperative fluid administration, total transfusion volume, large graft size, ascites, and pleural effusion negatively affect cardiac and pulmonary function , prolonging the time of dependence from mechanical ventilation. (MV). In addition, intra-operative factors such as bilateral transection of the abdominal muscles, pain stimulation and the presence of a mesh affect the recovery of spontaneous respiratory function . In pediatric liver recipients diaphragmatic dysfunction is associated with prolonged ventilation and pediatric intensive care unit (PICU) stay , . During weaning from MV, an ideal support mode should improve patient-ventilator interaction while minimizing asynchronies and detrimental hemodynamic effects induced by rising intrathoracic pressure . Pressure support ventilation (PSV) is one of the most adopted modes of assisted spontaneous breathing to efficiently unload respiratory muscle , . However, PSV delivers a fixed level of assistance that is associated to higher occurrence of patient-ventilator asynchronies , . Neurally Adjusted Ventilatory Assist (NAVA) is a mode of assisted spontaneous ventilation that delivers positive pressure in proportion to electrical activity of the diaphragm (EAdi) thus optimizing ventilator cycling and reducing incidence of patient-ventilator asynchronies as compared to PSV , . Moreover, synchronization between positive pressure during inspiration and electrical diaphragm activity (Eadi) during NAVA leads to a reduction of pleural pressure and reduces the negative effects on cardiovascular function . Advanced hemodynamic monitoring is essential to evaluate hemodynamic status, especially during the perioperative period of high-risk surgery , . Echocardiography is the most common noninvasive method to assess cardiac function . The combination of cardiovascular and respiratory effects during NAVA is not well established, especially after major abdominal surgery as liver transplantation where several factors negatively affect respiratory system mechanics and diaphragm function. To the best of our knowledge the physiologic effects of proportional assisted mode of MV compared to PSV have not been investigated in pediatric liver recipients. The aim of this study is to evaluate patient ventilator interaction and cardiac function at the first transition to spontaneous assisted breathing of pediatric patients who underwent liver transplantation. We hypothesized that, compared to PSV, NAVA minimizes patient-ventilator asynchronies and improves cardiac performance.
Study ethics. The Use of Neurally Adjusted Ventilator Assist versus Pressure Support Ventilation During Weaning from Mechanical Ventilation in Pediatric Patients After Liver Transplantation (NAVIGATE) protocol was approved by the institutional research board at Bambino Gesù Children’s Hospital (document 1695_OPBG_2018, March 19, 2019). All research was performed in accordance with relevant guidelines/regulations, and written informed consent was obtained from parents/legal guardians. The study was performed in accordance with the Declaration of Helsinki. No organs/tissues were procured from prisoners. This is a single center, randomized, no-profit, physiologic cross-over controlled trial comparing NAVA with PSV in children who underwent liver transplantation. This study follows CONSORT recommendations. The study was registered on ClinicalTrials.gov (Clinical Trial Number: NCT04792788, Registration date: 2021-03-11).
All the enrolled patients were admitted to PICU in Bambino Gesù Children’s Hospital, Rome (Italy). Inclusion criteria were: (1) age between one month to 10 years of age; (2) liver recipients; (3) invasive mechanical ventilation. Exclusion criteria were: (1) neurological impairment; (2) hypotonia (by neuromuscular, mitochondrial, metabolic, or chromosomal diseases); (3) lesions of medulla; (4) hemodynamic instability requiring inotropes/vasopressors (dopamine > 6 mcg/kg/min, norepinephrine > 0.1 mcg/kg/min, epinephrine > 0.1 mcg/kg/min, dobutamine > 6 mcg/kg/min, milrinone > 0.35 mcg/kg/min) or almost one volume bolus (crystalloids/colloids > 20 ml/kg) during the past 6 h; (5) congenital cardiovascular disease; (6) patient extubated; (7) respiratory instability (paO 2 /FiO 2 < 200; SpO 2 < 90% with FiO 2 0.4); (8) need of controlled mechanical ventilation; (9) intravenous infusion of benzodiazepines or propofol; (10) pneumonia, pneumothorax, massive pleural effusion; 11) patient placed on extracorporeal circuit (continuous renal replacement therapy, extracorporeal membrane oxygenation, apheresis); 12) contraindications to insert nasogastric tube; 13) not expected to survive beyond 24 h; 14) parental/legal guardian refusal. After enrollment, the standard nasogastric tube of each patient was replaced with a specific nasogastric tube (Edi catheter) with an array of eight bipolar electrodes mounted at its distal end (Getinge Critical Care, Solna, Sweden). The description of verification of nasogastric tube placement is explained in Additional Material S1. Hemodynamic monitoring was performed by mathematical analysis of the arterial waveform analyzed through an arterial catheter placed in the radial, brachial, or femoral artery and recorded according to the PRAM (Pressure Recording Analytical Method) pressure analysis method (MostCare Vygon, Vytech, Italy). The PRAM is an algorithm that processes arterial pressure waveforms with high temporal resolution (1000 Hz) and does not require external calibration. This method derives cardiac index and other advanced hemodynamic parameters by evaluating the interplay between vascular resistance and arterial compliance in real time. It enables continuous and accurate monitoring of cardiovascular dynamics . Transthoracic echocardiogram performed by two expert pediatric cardiologists completed the hemodynamic monitoring during the period study. The images of the transthoracic echocardiograms of all enrolled patients were recorded and stored on a dedicated computer. The randomization process consisted of a computer-generated random listing of treatment allocation using block sizes of 4 ( Three Randomization Plan Generators . Available from URL: https://gdallal.pages.tufts.edu/random_block_size.htm ). The randomization started when the enrolled patients started spontaneously triggering the mechanical ventilator. Patients were ventilated in pressure regulated volume control before starting assisted spontaneous breathing. To initiate weaning from MV, the patient had to mantain a PaO 2 /FiO 2 ratio > 200 and SpO 2 > 90% with FiO 2 < 0.4 and PEEP 4–5 cm H 2 O. Each patient was randomized to a ventilation mode sequence (PSV/NAVA/PSV or NAVA/PSV/NAVA). Each patient was studied for a duration of 2 h, divided in three trials of 40-minutes (Fig. ). The first 30-minutes of each trial was considered for washout of the previous ventilatory mode. The results were recorded only in the following ten minutes of each trial. 
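As an illustration of the allocation procedure described above (a computer-generated random listing with block sizes of 4), the following sketch produces a blocked 1:1 list for the two cross-over sequences; the seed, sample size and output format are assumptions, not the trial's actual randomization software.

```python
# Illustrative permuted-block randomization list (block size 4, 1:1 allocation)
# between the two cross-over sequences named in the text.
import random

SEQUENCES = ("PSV/NAVA/PSV", "NAVA/PSV/NAVA")


def blocked_randomization(n_patients: int, block_size: int = 4, seed: int = 2021):
    assert block_size % len(SEQUENCES) == 0
    rng = random.Random(seed)          # seed is an assumption for reproducibility
    allocation = []
    while len(allocation) < n_patients:
        block = list(SEQUENCES) * (block_size // len(SEQUENCES))
        rng.shuffle(block)             # permute within each block of 4
        allocation.extend(block)
    return allocation[:n_patients]


print(blocked_randomization(24))
```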
Considering the PSV trial: the initial PEEP was set at 4–5 cm H 2 O and the PS level was set to reach a tidal volume of 6–9 ml/kg, a reduction of respiratory rate to physiological values for age , and absence of clinical signs of increased work of breathing (chest retractions, diaphragm paradox). The flow trigger was set to the maximum sensitivity level not causing autotriggering. The clinician set the expiratory cycling off to achieve the best synchronization according to the flow/pressure tracings, and the fastest pressure ramp time. During application of NAVA, the Eadi trigger was at the default value of 0.5 µV. To determine the corresponding NAVA level, able to achieve a similar inspiratory mean airway pressure to that obtained in PSV, a dedicated function called NAVA Preview was used. Periods of coughing/suctioning were excluded from the analysis. All measurements were performed if stable traces were correctly displayed. The measurements were recorded and analyzed after the patients were discharged from the PICU. Two blinded researchers (G.S. and R.C.) independently analyzed the recorded data. In case of disagreement between the two, a third blinded investigator (G.C.) reviewed the recordings and resolved the discrepancies. Two blinded pediatric cardiologists with more than ten years of experience in critically ill children performed an echocardiogram during the last ten minutes of each trial. The arterial line was connected to the MostCare Up monitor, and values were recorded in the last ten minutes of each trial. Analgesia was provided according to the PICU protocol for patients admitted after liver transplantation (morphine 10 mcg/kg/h after an intravenous bolus of 50–100 mcg/kg). Perioperative variables were collected as follows: gender, age, weight, body mass index, Status classification, Pediatric End-stage Liver Disease (PELD), primary disease, cirrhosis, PICU admission before transplantation, Pediatric Index of Mortality 3 (PIM3), living liver donor, liver ischemia time, total volume of transfusion per weight, cumulative fluid balance (according to the formula described by Goldstein et al. ), mesh, surgery duration, graft-to-recipient weight ratio (GRWR, graft weight in grams/recipient weight in kg × 10), need for noninvasive ventilation post transplantation (NIV), ventilator-free days, length of stay (LOS) in PICU and hospital, and mortality at 28 days. Patient-ventilator interaction parameters. To estimate the asynchrony rate, we calculated the asynchrony index (AI), which is the ratio between the number of asynchronous events and the total respiratory rate, expressed as a percentage. An AI > 10% was considered a high rate of asynchrony . The asynchronies observed and analysed were wasted efforts (defined as a patient inspiratory effort not assisted by the ventilator), auto-triggering (defined as a mechanical insufflation in the absence of a patient inspiratory effort), late cycling (defined as a cycle with the mechanical inspiratory time greater than twice the patient’s neural time) and double triggering (defined as two mechanical breaths separated by a short expiratory time during the same inflection in the Edi signal, < half of the neural expiratory timing).
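A minimal sketch of how the asynchrony index defined above could be computed from the event counts scored on the recordings; the counts in the example call are hypothetical.

```python
# AI (%) = asynchronous events / total breaths in the recording x 100,
# with AI > 10% flagged as a high rate of asynchrony (threshold from the text).
def asynchrony_index(wasted_efforts: int, auto_triggering: int,
                     late_cycling: int, double_triggering: int,
                     total_breaths: int) -> float:
    events = wasted_efforts + auto_triggering + late_cycling + double_triggering
    return 100.0 * events / total_breaths


ai = asynchrony_index(wasted_efforts=3, auto_triggering=1,
                      late_cycling=2, double_triggering=0,
                      total_breaths=250)     # hypothetical 10-minute counts
high_asynchrony = ai > 10.0
```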
Other variables collected were: tidal volume, peak airway pressure, mean airway pressure, oxygenation index, PaO 2 /FiO 2 ratio, inspiratory trigger delay, expiratory trigger delay, and time of synchrony, defined as the time during which the patient’s inspiratory effort and ventilator assistance are in phase. The amount of inspiratory effort was calculated as the Pressure Time Product of Edi per breath and per minute (PTPEdi/breath and PTPEdi/min), defined as the area under the Edi trace from the onset of neural inspiration to the end of the neural expiration. The method of sampling patient-ventilator interaction parameters is described in Additional Material S2. Echocardiographic evaluation. Variables were collected using a Philips CX 50 echocardiography device with S5/S8 probes. Measurements: TAPSE (tricuspid annular plane systolic excursion), RVFAC (right ventricular fractional area change), IT (tricuspid valve insufficiency), TDI (Tissue Doppler Echocardiography to measure systolic velocities in the right ventricle), LV EF (left ventricle ejection fraction), RV-PSV (right ventricle peak systolic velocity). Hemodynamic parameters. Collected hemodynamic measurements were heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, cardiac index, systemic vascular resistance index, and central venous pressure.
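The PTPEdi described above is essentially a numerical integration of the Edi signal over the neural breath window; a hedged sketch is shown below, in which the sampling frequency and window indices are assumptions for illustration.

```python
# Pressure-time product of the Edi trace for one neural breath window,
# per breath and per minute. Sampling rate and indices are assumed values.
import numpy as np


def ptp_edi(edi_uv: np.ndarray, start_idx: int, end_idx: int,
            fs_hz: float, respiratory_rate: float):
    t = np.arange(start_idx, end_idx) / fs_hz            # time base in seconds
    ptp_breath = np.trapz(edi_uv[start_idx:end_idx], t)  # microvolt-seconds per breath
    ptp_minute = ptp_breath * respiratory_rate           # per-minute value
    return ptp_breath, ptp_minute
```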
Primary outcomes were: (1) To compare the magnitude of patient ventilator asynchronies between PSV and NAVA in patients undergoing liver transplantation. Evaluate the variation in asynchronies (AI) between patient and ventilator during weaning from MV with the use of NAVA (compared with PSV); (2) To investigate the physiological effects of each ventilation mode on cardiac function.
Secondary outcomes were to evaluate the differences between the two ventilation modes in terms of pulmonary function (PaO 2 /FiO 2 , oxygenation index, mean airway pressure, tidal volume), patient-ventilator interaction (auto-triggering, late cycling, double triggering, wasted efforts, inspiratory and expiratory trigger delay, pressurization time, time of synchrony, PTPEdi/breath and PTPEdi/min), and the cardiac and hemodynamic parameters mentioned above.
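For reference, the two derived pulmonary outcomes listed above can be computed with the standard formulas sketched below (PaO 2 /FiO 2 ratio and oxygenation index); the input values in the example are hypothetical.

```python
# Standard formulas behind two of the pulmonary outcome variables.
def pf_ratio(pao2_mmHg: float, fio2_fraction: float) -> float:
    return pao2_mmHg / fio2_fraction


def oxygenation_index(fio2_fraction: float, mean_airway_pressure_cmH2O: float,
                      pao2_mmHg: float) -> float:
    return (fio2_fraction * mean_airway_pressure_cmH2O * 100.0) / pao2_mmHg


print(pf_ratio(90.0, 0.35))                # ~257 with hypothetical values
print(oxygenation_index(0.35, 8.0, 90.0))  # ~3.1 with hypothetical values
```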
Collected data were presented as count and proportions (categorical data) or median and interquartile range (continuous data). A bivariate quantile regression analysis was applied to estimate the effect of NAVA compared to PSV on each metabolic, ventilation and hemodynamic variable. Estimates of the outcome variables for change in a covariate were reported as medians and standard error. An adjusted quantile regression model was applied to account for potential confounders to evaluate the effect of ventilation mode NAVA (PSV mode was the reference value) on asynchrony index, TAPSE and left ventricles strain. The determinants of AI and RVFAC for which the p-value was < 0.20 in the bivariate analysis were included in the initial multivariable logistic regression model. Subsequently, Patient baseline characteristics such as weight, graft/recipient weight ratio, cumulative fluid balance after surgery and cirrhosis with ascites (which are clinically and pathophysiologically relevant to respiratory mechanics and related to diaphragm and abdominal muscle function during spontaneous ventilation) were considered as potential confounders. According to previous studies , we estimated an AI of 50% in PSV and we expected NAVA to reduce it to 2%. Considering α-error equal to 0.05 and power equal to 80%, the study would have needed 16 patients to detect a 48% reduction in AI. Statistical software Stata 15.0 (StataCorp) was used for statistical analysis.
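A sketch of how the adjusted quantile (median) regression described above could be fitted with statsmodels; the data-frame layout and the variable names in the formula are assumptions based on the covariates listed in the text, not the authors' actual Stata code.

```python
# Median (q = 0.5) quantile regression of the asynchrony index on ventilation
# mode (PSV as reference), adjusted for the confounders named in the text.
import statsmodels.formula.api as smf


def adjusted_median_regression(df):
    model = smf.quantreg(
        "asynchrony_index ~ C(mode, Treatment('PSV')) + weight + grwr"
        " + cumulative_fluid_balance + cirrhosis_ascites",
        data=df,
    )
    return model.fit(q=0.5)

# res = adjusted_median_regression(df)
# print(res.summary())
```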
From March 2021 to September 2022 thirty-seven pediatric patients were screened for inclusion. Twenty-four liver recipients were enrolled in the study, two showed hemodynamic instability after randomization, and one returned to the operating room (Fig. ). All the remaining twenty-one patients completed measurements of ventilatory and invasive hemodynamic parameters during the three post-randomization phases. Only 12 patients were studied by echocardiography (6 for each ventilation sequence) due to the unavailability of the cardiologists overnight. Patient characteristics are shown in Table . No differences were detected by gender. The most frequent primary disease is biliary tract malformation (15/21, 71%). One third of the patients had cirrhosis with ascites (7/21, 33%). All patients received massive transfusions during transplantation (> 40 ml/kg in 6 h) . Six patients had GRWR > 4% (6/21, 29%). In a regression model with one covariate (Table and Table ), no significant changes in respiratory gas exchange or metabolism were observed during the application of NAVA and PSV. In terms of patient-ventilator interaction, NAVA was associated with significantly lower asynchrony index (1.5 versus 6.8%, p 0.016), inspiratory delay time (80 versus 150 milliseconds, p < 0.001), peak airway pressure (7.2 vs. 10.3 cmH 2 O, p < 0.001). The main asynchronies (late ciclying, double triggering, and wasted efforts) were not included in Table due to collinearity among variables. In particular, the strong correlation between some asynchrony indices led to their automatic omission in the regression model. PSV is associated with lower pressurization time (415 versus 615 milliseconds, p < 0.001) and synchronization time (420 versus 615 milliseconds, p 0.046). In the multivariable logistic model analysis (Table ) NAVA is significantly associated with a reduction in patient-ventilator asynchronies (AI) compared to PSV (Coeff − 6.66, 95% CI −11.5 to −1.78, p 0.08). In contrast, NAVA did not have an effect on right/left ventricular function. These findings are confirmed in the logistic model, which accounted for covariates with a p-value < 0.2 in the bivariate analysis (Table s).
In this trial involving pediatric liver recipients, the application of NAVA reduced asynchronies between patient and ventilator. Improved synchrony during NAVA, as well as better respiratory exchanges and reduced wasted efforts, have been shown in other studies in patients with PARDS, bronchiolitis and post-cardiac surgery – . In the previous studies in which NAVA was compared to PSV, the median AI varied between 1.7 and 11% during the application of NAVA; in our population, NAVA resulted in an AI value of 1.5%. It should be noted that even during the application of PSV, the AI was lower in comparison with the above-mentioned previous studies (8.6 vs. 8.8–25%). The presence of Edi monitoring allowed optimization of the expiratory trigger. In our study, we did not detect significant changes in respiratory exchanges. No studies have compared echocardiographic function during NAVA and PSV in pediatric patients undergoing major abdominal surgery. A study in an adult population undergoing abdominal surgery showed greater efficacy of NAVA (vs. PSV) in respiratory exchange and Eadi but did not evaluate asynchronies and cardiac function . The advantage of NAVA during weaning from MV becomes more relevant when analyzing some specific aspects (abdominal and non-abdominal) that characterize patients undergoing major abdominal surgery. During anesthesia and surgery, the insult to the chest wall from retractors and the reduction in muscle tone lead to a cephalic elevation of the diaphragm, which is prone to atelectasis development; in the post-operative period, pain and diaphragmatic dysfunction lead to a reduction in functional residual capacity and may consequently induce the persistence or development of new atelectasis . It should be added that, unlike ascites, which is drained by tubes, pleural effusion can worsen because of pulmonary and diaphragmatic dysfunction. Abdominal factors that may alter weaning and spontaneous/assisted ventilatory activity include abdominal surgery itself, the need for postoperative intravenous analgesia, graft-to-recipient weight ratio (GRWR), and the presence of abdominal mesh. Spontaneous ventilatory activity also relies on abdominal muscle action coordinated with the diaphragm; therefore, the bilateral transection of the abdominal muscles (with the loss of physiological continuity of the abdominal wall), the presence of abdominal mesh and the painful stimulus already determine respiratory adjustment during the postoperative period. Grimaldi et al. pointed out that a GRWR of > 4% increases the risk of abdominal distension and vascular complications of the graft . It is understandable that an abdominal mass may alter diaphragmatic excursion and influence venous return to the right atrium. In our population, mesh was used in three (3/21, 14%) of the 6 patients (6/21, 29%) with GRWR > 4%; in these cases (simultaneous presence of abdominal mesh and GRWR > 4%) the abdominal features may affect the effectiveness of diaphragmatic activity during weaning from MV. Despite these issues, NAVA was superior in terms of patient-ventilator interaction. Positive cumulative fluid balance and massive transfusion (> 40 ml/kg in 6 h) are associated with prolonged invasive MV, difficulty weaning from MV and postoperative pulmonary complications. In this study, although all patients received massive transfusions and had a postoperative fluid balance between 10 and 20%, the use of NAVA resulted in lower peak and mean airway pressures, suggesting less stress on the lungs than PSV.
In a cross-over study with an A-B-A design, the return to phase A allows for the assessment of washout effects, ensuring that any residual influence from the previous treatment in phase B does not persist. This controls for carry-over effects, which could confound results by allowing treatment effects to persist into subsequent phases. Additionally, the design evaluates the reversibility of treatment effects, enhances statistical robustness with repeated baseline measurements, and minimizes inter-individual variability by using participants as their own controls. There are several limitations to this study. Firstly, the study was conducted in a single center. Second, the number of patients, although higher than in previous studies, is still small. Thirdly, the potential role of echocardiography in assessing the effect of the two ventilation modes on asynchronies and cardiac function could not be fully explored in all patients studied. The absence of blinding of the investigators at the patient’s bedside (with the exception of the cardiologists and researchers who analysed the ventilatory curve recordings) may introduce bias by influencing themselves.
Diaphragmatic dysfunction, positive cumulative balance and massive transfusions may adversely affect weaning from invasive MV in children undergoing major abdominal surgery. Confirming data from previous studies , , NAVA significantly improves patient-ventilator interaction, as evidenced by a reduction in AI, compared to PSV mode. This study highlights a potential application in patients undergoing spine, thoracic and cardiac surgery, where the chest would combine the surgical factors and ventilatory management; additionally, the role of the diaphragm would be involved during the postoperative period after abdominal oncological surgery, where its function may also be altered. Incorporating diaphragmatic ultrasound may provide added value as a confirmatory tool for detecting diaphragmatic dysfunction, particularly in relation to duration of mechanical ventilation and type of surgery performed. Further multicenter studies are needed to evaluate the clinical effects of NAVA on weaning from MV in the post-operative period and to assess the effects of NAVA on ventricular function.
Bilateral massive leiomyomas in a bicornuate uterus, with torsion of the right horn | fc1b202a-68c0-4122-ae30-50150f7c8ab8 | 11937886 | Surgery[mh] | Uterine leiomyomas are the most common benign tumours of the female reproductive system, typically affecting women of reproductive age. While these tumours are often asymptomatic, their size, location and growth can result in various clinical presentations, which include pelvic pain, abnormal uterine bleeding and reproductive challenges. Rarely reported cases of torsion of the stalk of a subserosal leiomyoma may occur, which can lead to acute abdominal pain and may require prompt surgical intervention. Leiomyomas may also uncommonly be noted in patients with congenital uterine anomalies, such as a bicornuate uterus. There are even fewer reported cases of uterine torsion in a bicornuate uterus, those mainly associated in an obstetric setting. This case report highlights the rare combination of unilateral uterine horn torsion on a bicornuate uterus due to massive subserosal leiomyomas.
An African American woman in her 30s presented to the emergency room with a chief complaint of acute right lower quadrant abdominal pain. The patient reported a progressive worsening of this pain over the past week, despite using over-the-counter medications. She described the pain as sharp, non-migrating and persistent, with no periods of relief. She also noted episodes of constipation, which she had previously attributed as the cause of her pain in the past. She revealed a 2-year history of intermittent abdominal pain, bloating and abdominal distension, which she assumed was related to weight gain. Additionally, she reported a long-standing history of constipation that was typically relieved with over-the-counter stool softeners. Over the past year, however, she had noticed a marked increase in abdominal distension accompanied by more frequent and severe episodes of pain. Although she had successfully managed her pain with over-the-counter medications such as acetaminophen in the past, this current episode had persisted and worsened despite its use. She denied a history of unintentional weight loss, loss of appetite or difficulty with urination. Her gynaecological history included a history of irregular menstrual cycles with occasional heavy bleeding, and she was previously diagnosed with polycystic ovarian syndrome in 2021. She endorsed a known history of leiomyomas since 2020 but was not aware of the size or any other uterine abnormalities. She denied the use of contraceptives and had never been pregnant or attempted pregnancy before. She denied any medical issues or prior abdominal surgery. She had a desire for future fertility. On examination, the patient was alert and oriented, with some discomfort. Her vital signs were within the normal range. Her body mass index was 56.2 kg/m². Her abdominal examination was significant for large, non-mobile, solid, palpable masses on the left and right upper abdomen abutting the rib cage with tenderness to palpation and occupying most of the abdomen. The pregnancy test and the urinalysis were negative. She had a complete blood count showing a haemoglobin level of 10.4 g/dL and an unremarkable basic metabolic panel. Due to the size of the leiomyomas, the following tumour markers were obtained: cancer antigen (CA) 19–9, carcinoembryonic antigen, beta-human chorionic gonadotropin, inhibin B, alpha-fetoprotein and lactate dehydrogenase. Only the CA-125 level was elevated at 102.1 U/mL. The CT and MRI reports described two large dominant masses that appeared to arise from the uterus, measuring approximately 27 cm maximum dimension on the left and 24 cm maximum dimension on the right and favoured to be large myomas ( ). The patient was then hospitalised for pain optimisation with intravenous ketorolac and transitioned to oral acetaminophen and ibuprofen. After pain optimisation, the patient was counselled on her options, risks and benefits of treatment. She desired an abdominal myomectomy and was scheduled for surgery the following day. The surgery was performed through a midline vertical incision extending to the umbilicus to access the abdomen. On abdominal cavity entry, the right-sided leiomyoma base, classified as International Federation of Gynaecology and Obstetrics (FIGO) stage 6, was noted to be torsed 360° and was starting to avulse. The leiomyoma was exteriorised, and its base, which was later identified as the right uterine horn, was successfully detorsed. Next, the second leiomyoma was delivered along with the uterus. 
On further inspection, both large-appearing myomas originated from the fundus of each horn of a bicornuate uterus and were both FIGO stage 6 in nature. Each myoma measured approximately 25 cm in diameter ( ). To reduce operative blood loss, a Penrose tourniquet was used to compress the uterine vessels. To perform this, a window was made in an avascular portion of the broad ligament using a Bovie electrocautery tip at the level of the internal os. The ends of the Penrose drain were threaded through the windows, pulled tight and secured with a Kelly clamp to compress the uterine vessels. Vasopressin was also injected at the base of the leiomyoma. Myomectomy was performed with Bovie electrocautery and blunt dissection. The incisions were closed in multiple layers using V-loc suture, and complete haemostasis was achieved. The Penrose tourniquet was removed, and during reperfusion, the incisions remained haemostatic. The bicornuate uterus was clearly identified after closure ( ). The patient had an uneventful recovery period. Postoperative haemoglobin level was 9.8 g/dL. Pathology confirmed the masses were benign leiomyomas, showing a typical whorled appearance with a moderate amount of cystic changes and tan-yellow calcification. The total weight of both leiomyomas on the pathology was 9835 grams ( ).
The patient returned for her postoperative appointment a month later with significant improvement in daily activities, less weight burden and the relief of knowing her diagnoses of both benign leiomyomas and a bicornuate uterus. Education on bicornuate uterus, including future fertility concerns, was provided. The patient was unable to attend further follow-up visits due to the inability to take off from work. However, our plan is to assess the uterine cavity with office hysteroscopy.
The case presented involves the rare occurrence of bilateral large subserosal leiomyomas located on each uterine horn of a bicornuate uterus, with an intraoperative finding of unilateral uterine horn torsion secondary to its leiomyoma. Leiomyomas are benign smooth muscle tumours arising from the myometrium. The aetiology of leiomyomas is multifactorial, with genetic predisposition, hormonal factors and growth factors contributing to their development. Torsion of pedunculated uterine leiomyomas is extremely rare, with a reported incidence of less than 0.25%. Torsion of a pedunculated leiomyoma may be considered a surgical emergency due to the risk of ischaemia and necrosis leading to reactive peritonitis and morbidity. However, pedunculated subserosal uterine leiomyomas may also be asymptomatic or can be associated with varying degrees of pain. Although the diagnosis of torsion of a pedunculated subserosal uterine leiomyoma is typically confirmed intraoperatively in most cases, imaging modalities can provide valuable preoperative information. Doppler ultrasonography can raise suspicion of leiomyoma torsion by detecting reduced blood supply when the vascular pedicle is visible. In cases of inconclusive ultrasound findings, ideally, an MRI is performed for its higher sensitivity and specificity. The vascular pedicle of a subserosal leiomyoma is better appreciated on MRI in the form of T2 flow voids at the interface between the uterus and the mass indicating the ‘bridging vessel sign’, suggesting a uterine origin of the mass. In our case, an MRI was performed, but neither torsion nor the uterine anomaly were identified, perhaps due to distortion by the leiomyomas. This case also demonstrates the discovery of a bicornuate uterus intraoperatively with massive subserosal leiomyomas; our literature review on leiomyomas with congenital uterine anomalies showed very few results. A bicornuate uterus is a uterine malformation produced due to abnormal fusion of the Müllerian ducts. It is a rare anomaly and associated with reproductive outcomes such as recurrent pregnancy loss and preterm labour. Though the prior-reported cases of leiomyomas in bicornuate uteri have only been intramural and submucosal in location, our case presents the development of remarkably large subserosal leiomyomas, both of which were FIGO stage 6, on each uterine horn. Though leiomyoma torsion itself is rare, our case shows that uterine horn torsion should also be on the differential of subserosal leiomyoma torsion. Some elements of uterine torsion can be seen in , where the medial aspect of the right uterine horn was starting to avulse. Due to the rarity of the co-occurrence of large leiomyomas located on each uterine horn of a bicornuate uterus, one may mistake the stalk of a leiomyoma for the uterine horn without careful dissection and adequate blood control. This case highlights the importance of preoperative surgical planning, review of imaging and adding the differential diagnosis of uterine torsion in the setting of large leiomyomas and unknown uterine anatomy prior to removal of leiomyomas. In an acute setting of abdominal pain with a large uterine leiomyoma, leiomyoma and uterine horn torsion must be considered in the differential diagnoses. Though uterine anomalies are rare, patients with a history of abnormal uterine bleeding, recurrent pregnancy loss, preterm birth or infertility may have an underlying and undiagnosed diagnosis. 
Thus, comprehensive preoperative and intraoperative evaluation, including assessment of possible Müllerian anomalies and detailed leiomyoma mapping, is necessary to minimise inadvertent damage to a compromised uterus for optimal surgical outcomes and patient safety. Patient’s perspective I knew about my fibroids back in 2020, but due to the COVID-19 pandemic, I had issues with follow-up visits and getting my MRI to see why my stomach felt so large, hard and painful at times. Initially, I wasn’t even told about my fibroids when I got my imaging done, only that I had PCOS. Then, as time passed, I felt the pain gradually get worse, my clothes were starting to not fit, but my diet was always the same. Walking down the block or up the stairs felt like running a marathon while carrying extra baggage, and I would try to avoid stairs and take the elevators as much as possible. Eventually, the pain got so bad, I had to seek help. In the emergency room, I was glad to finally get some answers as to why I’ve been having pain this whole time and my stomach was getting so big. Before surgery, I was excited to get these fibroids out, but at the same time, I was scared of losing my uterus. After the surgery, I woke up noticing a flatter stomach and feeling lighter. At my postop visit, I saw a photo of my fibroids that were removed, and they were huge! I am so thankful everything went great. I also now know that I have a heart-shaped uterus. To my surprise, the recovery period was quick, and I even went to work earlier than anticipated. I can finally walk long distances with no hesitation, and I can return to my old lifestyle that I haven’t experienced in a long time. Learning points Leiomyomas in bicornuate uteri are uncommon. In obese patients, large leiomyomas can be clinically missed. Coexistence of congenital uterine anomalies should be considered in patients with severely leiomyomatous uteri. Torsion of a single horn of a bicornuate uterus can occur, which may not be detected clinically. When assessing abdominal pain with imaging showing likely leiomyomas, the differential should include torsion of either the stalk of a leiomyoma or of a uterine horn.
Does protruding headless cannulated screw reduce fixation stability in tension band wiring technique for patella fractures? a biomechanical study | 92c31fdc-d3c5-46a3-a623-ea52ad1688fc | 11804062 | Surgical Procedures, Operative[mh] | Most patellar fractures occur when the extensor mechanism is under too much tension, which predisposes a transverse fracture line . The modified tension band wiring (MTBW), described by Weber et al., is the most preferred surgical technique for the fixation of transverse patellar fractures. This method provides strong and stable fixation, which allows early joint mobilisation . Despite various technical modifications, the risk of early fixation failure is still between 10 and 20% after MTBW . Various alternative methods, including partially threaded cannulated screws, fully threaded headless cannulated screws, fixed-angle plates, mesh plates, suture materials, non-metallic implants, and orthobiologics, have been used to improve mechanical stability and decrease postoperative complication rates . Cannulated screws with a cerclage wire have been shown to provide sufficient and secure fixation. This technique has been proven to offer high endurance and a low clinical failure rate . Therefore, using cannulated screws combined with an anterior cerclage wire is a secure fixation technique for the treatment of transverse patella fractures . Despite it being proven to be effective, the MTBW technique has still high implant-related complications due to metallic wire irritation and skin damage . On the other hand, distal screw protrusion can also be a potential source of this complication when cannulated screws are preferred instead of k-wires. It is generally recommended to recess the screws into the bone in the surgical treatment of patella fractures with cannulated screws . This technique offers several benefits; it reduces the danger of cable breakage due to screw cutting and prevents soft tissue irritation . Furthermore, recessed screws can improve the construction stability by transferring all forces on the cerclage wire directly to the patella . Besides, protruding screws can prevent effective compression of the fracture fragments and deteriorate any advantage provided by tension band wiring . Another potential source of postoperative pain and irritation is prominent screw heads. Headless, fully threaded cannulated screws can be used as a viable option for reducing the incidence of pain and irritation caused by prominent screw heads . However, there have been very few biomechanical studies comparing that headless screw fixation of patellar fractures improves stability and strength. The present study aimed to compare the strength of headless screws based on the screw length at the tension band wiring. PreparationsSurgical techniqueBiomechanical testStatistical analysisThis is a biomechanical study that compared different fixation techniques for the transverse patella fracture. The present research was performed using the principles of prior studies . Forty-eight synthetic sawbone patellae and femurs were used for this study (Sawbones ® Europe AB, Malmö, Sweden). A plastic mould guide, produced by a three-dimensional (3D) printer, has been developed for the standardised application of screw pathways and transverse osteotomies (Fig. ). The medial-lateral position of k-wires: The patellar length was measured and partitioned into three segments. 
The samples were drilled in the coronal plane with k-wires from these lines through the guide created on the 3D printer (Fig. ). The anterior-posterior position of k-wires: The patella’s depth was measured and partitioned into two segments. The specimens were drilled in the sagittal plane with k-wires from this line through the guide (Fig. ). A total of three groups have been identified with various screw lengths: Group 1 (recessed headless cannulated screw fixation with cerclage wiring), Group 2 (full-length headless cannulated screw fixation with cerclage wiring), and Group 3 (protruding headless cannulated screw fixation with cerclage wiring) (Fig. ). The specimens underwent static and dynamic biomechanical tests using an electrohydraulic machines (static test: AG-1 5kN and dynamic test: AG-IC 50kN, Shimadzu, Japan). The polyester bands, used to mimic the quadriceps and patellar tendon, were initially tested. The polyester band was loaded to 2000 N, with a strain of less than 0.1%, which was ignored for all groups. The screw holes were created through k-wires and tapped prior to performing the osteotomy. A transverse osteotomy was performed using a 1 mm-thick saw blade through the guide. A transverse window measuring 25 × 3 mm 2 was created in the osteotomy line. Following a random selection process, the samples were separated into three groups. After reduction, the screw pathway length was determined using a conventional screw depth gauge. The models in Group 2 were fixed with full-length cannulated screws. The recessed models (Group 1) were fixed using cannulated screws 4 mm (-15%) smaller than the observed pathways. The screws were centred, creating a 2 mm recessed position. The protruded models (Group 3) were fixed using cannulated screws 4 mm (+ 15%) longer than the measured length . The screws were centred, creating a 2 mm protruded position. Two polyester straps were threaded through the transverse window to act as quadriceps and patellar tendons . The tension band wiring was created with 1 mm of stainless steel wire that was passed through the screws. All samples were prepared by a single surgeon, using 3.5 mm headless cannulated screws (FX Orthopaedics, The Aegean Free Zone, Izmır, Turkey) and cerclage wire. The patella specimens were placed in the setup to mimic the 60-degree extensor mechanism of the knee because in previous studies, maximum fracture separation after patella osteosynthesis was observed at 60 degrees to 30 degrees of flexion, and the highest force applied to the patellofemoral articular surface and the greatest tension on the quadriceps muscle occurred at 60 degrees of knee flexion . The patella samples were inserted over the artificial femur. The polyester loops were then attached to the electrohydraulic testing machine (Fig. ). Static testDynamic testFollowing the mounting process, twenty-four samples were preloaded with 100 N. A displacement-controlled axial traction was performed with a steady rate of 15 mm per minute. A video recording device was used to document the separation of the osteotomy line. Displacement values were recorded at 300 N, which was used in similar studies to simulate the forces acting on the patella during active extension of the knee against gravity . Force values were measured at a displacement of 2 mm, presenting as a threshold for intraarticular fracture fixation. Displacements over 2 mm prevents bone healing and may result in nonunion and arthritis . After, failure loads were documented (Fig. ). 
Implant failure was detected by force-displacement analysis. Failure criteria were described as an acute and definitive drop in a force-displacement graph, fixation breakdown (such as the breaks of the stainless steel wire), and the destruction of the sawbone. In addition, criteria for exclusion were described to include prior damage to the sample and technical mistakes throughout preparation. Eight specimens of each fixation group were used for the dynamic test. Samples were preloaded with 100 N after mounting. Each sample underwent 10,000 cycles at 15 mm/min and a linear load increase from 100 to 300 N to simulate tension around the patella in daily activity, according to a previous study . During the cyclic test, applied load (N) and the fracture gap (mm) were recorded (Fig. ). A video recorder was used to measure the gap at the osteotomy line in the dynamic test. Failure criteria were described as static test. Additionally, a gap of 2 mm or more was defined as failure of fixation at cyclic loading. An analysis of statistics was conducted by SPSS version 22.0 (IBM, NY, USA). The Kruskal-Wallis test, a nonparametric test, was used because the sample size was less than 30 and there were more than two different groups in our study. The Bonferroni-Dunn correction test was used to compare the pairwise relationships of the groups. The statistical significance was determined when the p -value was below 0.05 for the Kruskal-Wallis test and when the p -value was below 0.016 for the Bonferroni-Dunn correction test. A power analysis was performed to determine the capacity of the sample size. When the significance level was 0.05 and the number of groups was 3, a sample size of 8 specimens in each group was necessary to detect a difference with 82% power in the G*Power 3.1.9.7 program. Static testDynamic test There were significant differences between the groups in the cyclic loading test ( P = 0.006, Kruskal-Wallis test) (Table ). The mean ± standard deviation (SD) fracture gap was 0.26 ± 0.09 mm (%95 confidence interval [CI], 0.16–0.40 mm) in Group 1, 0.27 ± 0.08 mm (%95 [CI], 0.13–0.38 mm) in Group 2, and 0.45 ± 0.12 mm (%95 [CI], 0.26–0.64 mm) in Group 3. There was no statistically significant difference in the fracture gap between Group 1 and Group 2 regarding the binary relationship ( P = 0.948, Bonferroni-Dunn correction test). Although, the mean fracture gap after 10,000 cycles was significantly higher in Group 3 than in Group 1 and Group 2 ( P = 0.004 and P = 0.005, Bonferroni-Dunn correction test). All three types of fixation groups endured 10,000 cycles without breakage or displaying a fracture gap increase of 2 mm in the dynamic test. Significant differences were observed in loads at 2 mm displacement among groups ( P = 0.003, Kruskal-Wallis test) (Table ). The mean ± standard deviation (SD) load at 2 mm displacement was 731.62 ± 78.97 N (%95 confidence interval [CI], 645–852 N) in Group 1, 717 ± 72.45 N (%95 [CI], 578–823 N) in Group 2, and 515.62 ± 105.44 N (%95 [CI], 419–712 N) in Group 3. There was no statistically significant difference in load values at 2 mm fracture displacement in Group 1 and Group 2 regarding the binary relationship between the groups ( P = 0.944, Bonferroni-Dunn correction test). However, Group 3 had significantly lower load values at 2 mm fracture displacement compared to Group 1 and Group 2 ( P = 0.003 and P = 0.004, Bonferroni-Dunn correction test). 
There was no significant difference with respect to the displacement at 300 N axial traction ( P = 0.651, Kruskal-Wallis test) or the failure loads ( P = 0.349, Kruskal-Wallis test) (Table ).
Traditionally, the tips of the cannulated screws should be properly recessed for the fixation of patellar fractures in tension band wiring. This technique is thought to have benefits such as minimised under-skin damage and improved fixation stability. However, there are few data on the biomechanical strength of the technique, specifically in relation to screw prominence. This study revealed that, under axial and cyclic loading, protruding screws are biomechanically less stable than recessed or full-length screws in tension band wiring. This result is highly consistent with the study of Avery et al., which revealed that protrusion of headed, partially threaded, cannulated screws with cerclage wire leads to higher displacement at the fracture line.
The importance of initiating early passive movement for intraarticular fractures has been highlighted in postoperative treatment. Therefore, any internal fixation technique must be able to withstand the stresses that occur during active or passive movement after surgery. Carpenter et al. used cyclic loads ranging from 0 to 30 kg to mimic knee extension in the presence of gravitational force in a human cadaver model. In order to mimic postoperative mobilisation, Scilaris et al. performed thirty cycles ranging from 20 N to 300 N. Based on these previous studies, we adapted this approach to axial loading for static testing and also applied cyclic loading between 100 N and 300 N for dynamic testing. Although there was no difference at 300 N axial loading or at the ultimate load of failure among the constructs, the protruding screw fixation group was less stable in the cyclic loading test between 100 N and 300 N, the load range required for active extension of the knee against gravity during daily activity. Also, the prominent screws resulted in less strength with regard to 2 mm separation at the osteotomy line, which is clinically important in articular fractures.
There is a trend towards using different screw designs to improve fracture fixation stability by increasing screw grip strength. Headless screws have gained popularity over the years for the fixation of various fractures and osteotomies, with predictable biomechanical and clinical results. A recent study by Martin et al. reported that a headless screw with cerclage wire showed reliable biomechanical performance for the fixation of patellar fractures. They attributed this to the increased surface area between screw and bone and to greater grip strength. Moreover, Chen et al. showed that the addition of a tension band increases stability when using headless cannulated screws in a finite element analysis model. The present study also shows that headless cannulated screws with tension band wiring provide sufficient stability, in line with previous studies. Although the screw and cerclage combination provides strong stability, there are technical points that need to be considered in clinical practice. The most important of these is the selection of the appropriate screw length, because when protruding screws are used, the compression effect of the cerclage on the bone is lost. In our study, although the screws were recessed in Group 1, there was no difference in stability compared with the full-length screws in Group 2.
On the other hand, the protruding screw and tension band combination in Group 3 showed less strength than the other groups. This supports the hypothesis of our study. Avery et al. showed that protruding screws reached 2 mm displacement at lower load values than full-length, partially threaded, cannulated screws. They attributed this to a reduced ability to resist tensile forces owing to the decreased number of threads in the distal patellar fragment. Nonetheless, while their data are reliable, we believe that their conclusions are unclear. Although distal screw grip was lower in the recessed screw group than in the full-length group, no difference in stability was found between them in the present study. The main factor for the loss of stability found in the study of Avery et al. is the decreased stabilising effect of the cerclage on the protruding screws, which corresponds to Group 3 in our study.
Martin et al. utilised cadaveric bones in their research, which are hard to standardise because of their varying size and density. Additionally, the surrounding soft tissues might have hindered the accurate measurement of screw lengths. In addition, headed or headless screw fixation may reduce the total height of the patella because the screws are inserted in a way that creates compression between the fragments; this may result in the screw tip becoming more noticeable. Therefore, sawbone was preferred as a material in this study to increase standardisation. Also, we placed k-wires through a guide and determined screw lengths by direct measurement and visual inspection to obtain more reliable data. We believe it is necessary to see the screw tips directly, or to take an anteroposterior x-ray view, to ensure that the screw is properly buried during surgery. According to clinical studies, cannulated screws with cerclage wire have a lower risk of skin irritation and fixation failure. To achieve this, the length of screw left protruding into the soft tissue is the most important factor. This study showed that the recessed screw group and the full-length screw group were biomechanically equivalent. This indicates that protruding screws should be avoided in order to provide adequate stability and to prevent complications related to soft tissue irritation in the clinical setting.
It is crucial to acknowledge that our research has some limitations. The main limitation is the use of an artificial patella (sawbone), which might not necessarily mimic the behaviour of a human patella. When sawbone is used in research, the implant-to-bone interface cannot be directly compared with the in vivo situation, with its soft tissue attachments and the inherent variability in bone quality seen in patients. On the other hand, sawbones provide a uniform material for studying bone behaviour compared with cadaveric bones, because cadaveric human bone models often result in inhomogeneous groups with respect to bone density and size. The present study was therefore able to achieve greater homogeneity among the samples by using synthetic bone instead of cadaveric bone. Although axial traction at 60 degrees of knee flexion simulates the maximum displacing force on the patella, well-designed clinical and biomechanical studies that include different degrees of flexion are needed.
This study demonstrated that protruding screws decreased fixation stability when using headless screws with tension band wiring for the fixation of transverse patella fractures.
Recessed or full-length screws may improve construct stability and promote bony healing. Thus, complications such as implant irritation, nonunion, and malunion in patella fractures may be prevented. Further studies using different flexion angles and cadaveric models are needed to confirm clinical applicability. |
Pathology of Self-Expanding Transcatheter Aortic Bioprostheses and Hypoattenuated Leaflet Thickening | c8a77754-ef1e-4bd8-95b4-664fc0ffc058 | 11827688 | Forensic Medicine[mh] | Hypoattenuated leaflet thickening is thought to represent leaflet thrombosis; however, no histological examination of hypoattenuated leaflet thickening has been conducted to date. Histologically, leaflet thickening corresponds to the presence of a thrombus and reflects its progression. Hypoattenuated leaflet thickening detected by microCT, to varying degrees, was observed in 43.4% of the examined leaflets. Treatment for hypoattenuated leaflet thickening is likely most effective in its early stage, highlighting the importance of early detection. , Cause of Death/Reintervention, and Pathological Diagnosis of Valve FailureGross/Radiographic/MicroCT ExaminationTissue Processing of Valve Leaflets and FrameSemiquantitative Scoring for Histopathologic FindingsMicroCT and Histological Findings of Leaflet Thickening: HALT AnalysisStatistical Analysis From 2011 to 2021, CVPath Institute received 123 self-expanding transcatheter aortic valves (TAVs; CoreValve, Evolut R, and Evolut PRO; Medtronic, MN) obtained at autopsy or surgical explant from a population of >7500 participants across 11 clinical trials ( Table S1 ). In these trials, clinical thrombosis rates ranged from 0% to 1.3% across surgical risk groups. – The patients or their relatives consented to autopsy examination or surgical explant for each case submitted to CVPath. All cases were received with a complete clinical history and information on implant duration. The institutional review board at CVPath Institute approved this study. The Clinical Events Committee determined the cause of death or reintervention by reviewing individual case report forms and subject data, classifying them based on the Valve Academic Research Consortium 3 end point definitions for aortic valve clinical research. Fixed whole hearts with transcatheter valves or explanted valves were received in 10% formalin, radiographed, photographed, and examined by experienced cardiovascular pathologists. MicroCT imaging was also performed on all cases following the installation of a microCT at our institution in 2016. The technical method for microCT imaging has been previously published, with further details provided in the Supplemental Material . Details of valve leaflets and frame tissue processing are in the Supplemental Material . In brief, the leaflets were removed from the frame, sliced longitudinally at 2 to 3 mm intervals, with 4 to 5 sections per leaflet, and embedded in paraffin. Each tissue block was sectioned (4–6 µm) and stained with hematoxylin and eosin and Movat pentachrome. If necessary, von Kossa stain was performed for the evaluation of calcification. The valve frame along with the surrounding ascending aorta, native valve, and left ventricular outflow tracts was dehydrated and embedded in Spurr’s resin medium, and transverse sections of ≈4 mm thickness were cut from the block. Three sections were selected from these parallel cuts for further grinding and staining, specifically the inflow region at the level of the native aortic valve, the bioprosthetic valve commissure region, and the outflow region. Sections were ground using Exakt method grinding technology at a thickness of 50 to 100 µm, polished, and stained with hematoxylin and eosin. Histological evaluation of the valve frames and leaflets was performed using light microscopy, with modifications made to the previous method. 
Details are described in the Supplemental Material. Briefly, valve frames were evaluated based on specific parameters, including the shape of the inflow portion of the valve frame, the presence and degree of paravalvular gaps (PVGs), inflammation at the valve skirt, neointimal growth, and valve frame thrombus. Valve leaflets were assessed for thrombus, pannus, inflammation, structural changes, and calcification using semiquantitative scores (Tables S2 through S7).
To explore the histological findings of HALT, we evaluated leaflet thickening on microCT and histology. Histologically, HALT was defined as thrombus/pannus on the leaflet surface thicker than the normal pericardial leaflet. The composition of HALT was classified histologically as acute thrombus, organizing thrombus, or organized thrombus (ie, pannus). Acute thrombus consists of fibrin, platelets, and red blood cells. Organizing thrombus is a mixture of acute and organized thrombus tissues. Organized thrombus, or pannus, consists of smooth muscle cells and proteoglycan-collagen matrix with or without chronic inflammation but no fibrin. On microCT, HALT was defined as increased leaflet thickness (greater than normal) with a typical meniscal appearance on long-axis views. The extent of HALT on microCT and histology was semiquantified on long-axis views, carefully aligned with the leaflet center, in terms of involvement along the curvilinear leaflet, beginning at the base. A 4-tier grading scale for leaflet involvement (<25%, 25%–<50%, ≥50%–<75%, and ≥75%) was used, as shown in Figure . The investigation was performed by 2 independent reviewers, Y.S. and T.K. In cases where the diagnosis was inconsistent between the 2 readers, a consensus was reached through further discussion.
The normality of distribution was tested by the Shapiro-Wilk test. Normally distributed variables were expressed as mean±SD. Non-normally distributed variables were shown as the median and the first and third quartiles (Q1, Q3). Categorical data were analyzed by either χ² or Fisher exact tests. Comparisons of normally distributed variables were analyzed with ANOVA, while non-normally distributed variables were tested by Kruskal-Wallis. Multiple comparisons were performed using the nonparametric Steel-Dwass all-pairs test. A P value <0.05 was considered statistically significant. Statistical analyses were conducted with JMP 16.0 (SAS Institute, Cary, NC).
The Results below cover the clinical cause of death/reintervention and pathological diagnosis of TAV failure, semiquantitative score results for leaflet pathological changes, the HALT analysis (leaflet thickening in histology and microCT), leaflet calcification, structural changes and inflammation, and evaluation of the TAV frame.
We received 89 autopsies and 34 surgically explanted CoreValve or Evolut R/PRO TAVs. Table summarizes the Clinical Events Committee-adjudicated cause of death or reintervention based on Valve Academic Research Consortium 3. Of the 89 autopsy cases, there were 60 cardiovascular, 21 noncardiovascular, 6 valve-related, and 2 unknown deaths. Among the 34 surgical explant cases, there were 10 cases of structural valve deterioration, 15 of nonstructural valve dysfunction, 1 of clinically confirmed leaflet thrombosis, 5 of infective endocarditis, and 3 of other causes. Based on the Valve Academic Research Consortium 3 criteria, 34 cases were considered bioprosthetic valve failure, including 28 reinterventions (stage 2) and 6 valve-related deaths (stage 3).
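For illustration, the 4-tier HALT grading described above can be expressed as a small helper function. This is only a sketch; the function name, the grade-0 convention for leaflets without thickening, and the use of Python are assumptions made for this example.

def halt_grade(involved_fraction):
    # involved_fraction: fraction (0.0-1.0) of the curvilinear leaflet involved,
    # measured from the base on a long-axis view.
    if not 0.0 <= involved_fraction <= 1.0:
        raise ValueError("involvement must be a fraction between 0 and 1")
    if involved_fraction == 0.0:
        return 0            # no HALT (assumed convention)
    if involved_fraction < 0.25:
        return 1
    if involved_fraction < 0.50:
        return 2
    if involved_fraction < 0.75:
        return 3
    return 4

# Example: a leaflet with 30% involvement falls in the 25%-<50% tier (grade 2).
assert halt_grade(0.30) == 2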
For the semiquantitative analysis, cases with infective endocarditis (10/123, 8.1%) or those with a TAV in a failed surgical valve (TAV-in-surgical aortic valve; 3/123, 2.4%) were excluded because these 2 conditions may affect the natural progression of the leaflet and frame. Thus 110 cases composed the main histopathologic analysis. Figure shows representative images of the main pathological findings seen in TAVs. Among the 110 cases, severe thrombosis was noted in 13 cases (11.8%) and severe pannus in 5 cases (4.5%). Severe structural changes such as leaflet tears or severe intrinsic calcification were observed in 5 cases (4.5%) and 3 cases (2.7%), respectively. Baseline characteristics for the overall population and each group are shown in Table . Median implant duration was 66 days (range, 0–1958 days [5.4 years]). At the time of autopsy or explant, patients were 80.3±9.5 years old, 37% were female, and the Society of Thoracic Surgeons Predicted Risk of Mortality score was 8.7±5.8 (Table ). The 110 cases were divided into 3 groups by implant duration: <30 days (n=42), 30 to 365 days (n=35), and >365 days (n=33). The median duration of the implant was 7.5 (1.8–15.3) days in the <30 days group, 74 (50–135.3) days in the 30 to 365 days group, and 925.5 (753.8–1276.8) days in the >365 days group. Other characteristics were similar among the groups except for the Society of Thoracic Surgeons Predicted Risk of Mortality score, which was significantly higher in the <30 days group (10.8±7.3) compared with the other 2 groups (7.6±4.1 and 7.2±4.6, respectively; P =0.013). Figure shows the results of semiquantitative scoring of leaflet changes for each patient within the 3 time points. Thrombus and inflammation scores showed no significant differences between the 3 groups. In contrast, pannus, calcification, and structural change scores were significantly highest in the >365 days group, followed by the 30 to 365 days group, and the <30 days group. The analysis for leaflet thickening in histology was performed in 110 cases. Ten leaflets were excluded from the analysis: 9 leaflets had intrinsic calcification that hampered the leaflet thickening evaluation, and 1 leaflet had fallen off when the microCT image acquisition was performed. Of the 320 evaluable leaflets, any degree of leaflet thickening was observed in 46.5% (149/320) of leaflets, with prevalence increasing over time (19.2%, 52.4%, and 77.8% for <30, 30–365, and >365 days groups, respectively; P <0.0001; Figure A). Histologically, leaflet thickening was confirmed as an acute, organizing, and organized thrombus (ie, pannus). In the <30 days group, all leaflet thickening was due to acute thrombus, while organizing thrombus was most often observed from 30 to 365 and >365 days, and thrombus organization increased with time (Figure A). When comparing the duration of implant among the different types of thrombi, acute thrombus was seen as early as 7 days (median duration of implant [minimum–maximum], 16 [7–1764] days), whereas organizing and pannus were seen in the chronic phase only (median duration of implant [minimum–maximum], 935 [48–1764] and 1014 [491–1351] days, respectively; Figure B). Of the 320 leaflets obtained from 110 cases, microCT imaging was performed in 40 cases. Four cases were subsequently excluded due to 3 severe intrinsic calcifications and 1 inadequate image quality. This resulted in a final data set of 106 leaflets from 36 cases for the microCT analysis. 
The characteristics of cases included in the microCT analysis were comparable to those of the overall histology population (N=106), except for the type of valves, reason for explant, and the Society of Thoracic Surgeons Predicted Risk of Mortality score ( Table S8 ). Those differences arise from the inclusion of more recent studies, which feature newer-generation valves and younger, lower-risk patients. This is due to the initiation of microCT imaging at our institution in 2016, following the introduction of the microCT system. HALT on microCT was seen in 43.4% (46/106) of evaluable leaflets; 26.4% (28/106) had grade 1, 4.7% (5/106) had grade 2, 4.7% (5/106) had grade 3, and 7.5% (8/106) had grade 4 thickening (Figure ). Histologically, leaflet thickening was observed in 40.6% (43/106) of cases; 23.6% (25/106) had grade 1, 6.6% (7/106) had grade 2, 3.8% (4/106) had grade 3, and 6.6% (7/106) had grade 4 (Figure ). The types of thrombi could not be differentiated by microCT images. Any leaflet calcification was seen in 10.0% of cases (11/110). When comparing cases with and without calcification, the median duration of the implant was significantly longer in cases with calcification than without calcification (976 [581–1470] versus 45 [9–264.5] days; P <0.0001; Table ). In contrast, no significant difference was observed between cases with and without calcification in terms of baseline creatinine clearance, the presence of diabetes, age, valve type, or anticoagulation therapy (Table ). Two types of leaflet calcification were identified: intrinsic (n=4) and extrinsic (n=7). Intrinsic calcification was localized within the pericardial leaflet tissue, whereas extrinsic calcification was associated with leaflet thrombus (Figure A). Extrinsic calcification was observed as early as 180 days post-implantation and in cases with longer implant durations (median, 865 days; maximum, 1764 days). In contrast, intrinsic calcification was only observed in cases with prolonged implant duration, ranging from 976 to 1958 days (Figure B). Leaflet structural change scores were generally low but increased over time. Severe structural changes were observed in 8 cases, including 5 leaflet tears and 3 severe intrinsic calcification. Severe structural change of leaflets was associated with longer implant duration (58 [10–491] versus 1770 [100–1958] days; P <0.0001). Inflammation scores were low overall with no significant difference between the earlier and the later time points (Figure ). Leaflet inflammation scores showed no correlation with other histopathologic scores ( Figure S1 ). The results of the TAV frame evaluation are summarized in Tables through . Coronary obstruction was observed in 1 case, where the patient died from myocardial infarction 10 days post-implantation. Overall, the prevalence of PVG was similar across the groups (<30 days: 53.9%, 30–365 days: 82.6%, >365 days: 44%; P =0.28). However, the prevalence of mild PVG decreased significantly in implants with a duration >1 year (<30 days: 53.9%, 30–365 days: 82.6%, >365 days: 44%; P =0.0012). In contrast, the prevalence of moderate-to-severe PVG remained consistent, regardless of implant duration (<30 days: 30.8%, 30–365 days: 26.1%, >365 days: 38.9%; P =0.68; Table ). 
The main findings of this study are as follows: (1) the pathological findings observed were infective endocarditis, leaflet thrombosis, pannus formation, calcification, and leaflet tear; (2) clinical thrombosis rates were extremely low in clinical trials of self-expanding valves, ranging from 0% to 1.3% across 11 studies; however, any degree of HALT, as observed by microCT, was identified in 43.4% of the 106 explanted leaflets and histologically confirmed as acute, organizing, or organized thrombus (ie, pannus); (3) leaflet thrombi were observed to become organized over time histologically, although changes in thrombus structure could not be differentiated by microCT (Figure ); (4) semiquantitative scoring revealed that pannus formation, calcification, and structural changes increased with implant duration, whereas leaflet thrombosis and inflammation were not associated with the duration of the implant; and (5) although thrombus thickness and length of leaflet involvement did not differ with implant duration (based on semiquantitative scoring), histological analysis showed that the thrombus became increasingly organized as the implant duration increased. To the best of our knowledge, this is the largest systematic pathological study of TAVs evaluating long-term implant duration (68 cases exceeding 1 month and 33 cases over 1 year) and the first validation study of HALT by histology.
In 2016 and 2017, our group reported pathological studies of explanted balloon-expandable and self-expanding TAVR bioprostheses. However, the small number of cases (n=22 and 21, respectively) with short follow-up duration (median of 16.5 and 17 days, respectively) limited the evaluation of long-term durability and the statistical evaluation of TAVs. In 2019, Sellers et al also reported a pathological study of 22 explanted TAVs. However, 64% of these TAVs were explanted due to procedure-related events within 30 days post-implantation, and specifically 10 were explanted within the first week. Furthermore, that analysis included 2 cases (9%) of infective endocarditis, a condition known to accelerate tissue degeneration, potentially leading to different degenerative processes compared with normal valve degeneration.
The discussion below addresses the pathology of HALT and antithrombotic therapy; the durability of bioprosthetic heart valve leaflets and the association between leaflet calcification and thrombus; leaflet inflammation and structural valve deterioration; stent frame healing and paravalvular leakage reduction; limitations; and conclusions.
Clinical and subclinical leaflet thrombosis is an area of growing concern in TAVs. HALT is thought to be associated with leaflet thrombosis and has been increasingly studied in recent years. Despite progress in the understanding of the clinical characteristics of HALT, its pathology has never been studied. In this study, we revealed that HALT arose not only from acute thrombus but also from organizing or organized thrombus (ie, pannus), and that thrombus organization progressed over time. Once thrombi organize, they become resistant to antithrombotic therapy because their components have changed from fibrin and platelets (ie, the targets of antithrombotic therapy) to smooth muscle cells and extracellular matrix. This suggests that identification of HALT, and the response to treatment, may be most effective at an early time point.
Many studies have shown that oral anticoagulants have a benefit to improve HALT however, whether anticoagulants improve outcomes is still controversial. In the GALILEO (Global Study Comparing a Rivaroxaban-Based Antithrombotic Strategy to an Antiplatelet-Based Strategy After Transcatheter AortIc Valve Replacement to Optimize Clinical Outcomes) and ATLANTIS (Anti-Thrombotic Strategy After Trans-Aortic Valve Implantation for Aortic Stenosis) trials, the routine use of anticoagulants in patients without a clear indication for it failed to show the clinical benefit due to the increasing risk of bleeding events, , , suggesting that patient selection for antithrombotic therapy is key to achieving good clinical outcomes. Recent studies have uncovered risk factors of HALT, which may help address the issue of patient selection. , Also, the timing of anticoagulation therapy is in discussion because organizing and organized thrombi are resistant to antithrombotic treatments. , Our results may explain why oral anticoagulation is not always effective in such cases. The association between HALT and clinical thrombotic events is still controversial. , In this study, while 13 cases of severe thrombosis were observed, only 1 was clinically diagnosed as leaflet thrombosis, with the other 12 cases being subclinical (ie, detected at autopsy or surgery). These findings support the fact that the influence of HALT may not be detected as clinical outcomes at least in the short term. In this study, 2 types of calcifications were detected on the bioprosthetic valve leaflet, intrinsic and extrinsic calcification. Intrinsic calcification occurs within the leaflet itself and has classically been recognized as leaflet calcification, typically observed in the chronic phase (ie, >10 years) following bioprosthetic heart valve implantation. Extrinsic calcification, in contrast, is seen within leaflet thrombus or pannus. Unlike intrinsic calcification, extrinsic calcification can develop at earlier time points and is not exclusive to the later stages post-implantation. This suggests that extrinsic calcification may contribute to leaflet stiffness and could have clinical consequences for valve dysfunction. Our study shows that approximately half of the calcification is derived from a thrombus thus antithrombotic therapy has the potential to prevent extrinsic calcification and ultimately, the degeneration of the leaflet. The associations between the durability of TAVR valves and HALT are still controversial , ; however, some studies reported that antithrombotic therapy enhanced valve durability and others showed an association between thrombus/calcification and valve degeneration. The involvement of the immune system in structural valve deterioration has historically been viewed with skepticism due to the belief that glutaraldehyde fixation eliminates the immunogenicity of xenografts. However, recent evidence indicates that glutaraldehyde treatment alone is inadequate to fully negate the host immune response. Several studies have identified leukocyte infiltration in explanted dysfunctional xenografts and homografts, suggesting ongoing immune activation despite fixation. , Our study also shows inflammatory cell infiltration into the pericardial leaflet. However, semiquantitative scores of inflammation did not correlate with leaflet histological changes, including structural valve deterioration. 
Since our study only assessed findings at the time of patient death or surgery, further studies are needed to assess the relationship between the degree of inflammation and clinical course. A limitation of the previous generation of TAVs is aortic paravalvular regurgitation (PVR). Since even mild PVR is known to worsen clinical outcomes, 1 major goal of device development was to reduce PVR. As a result, newer generations of devices have shown less prevalence of PVR and better clinical outcomes in recent clinical trials. PVG is reported to be correlated with PVR volume. In the current study, PVGs decreased over time because of the stent frame healing; however, relatively large PVGs, which could be recognized in gross observation, did not heal over time. This suggests that if a large gap is present at the time of the implantation, it is less likely to heal in the chronic phase. This study has several limitations. First, it included only autopsy or surgically failed cases and was based on a pathological analysis of a small subset (<2%) of self-expanding valves from 11 clinical trials. This limited sample size may introduce selection bias and the findings cannot be generalized to balloon-expandable valves. Second, valves explanted at autopsy were obtained from patients who may have died from either cardiovascular or noncardiovascular causes, which may not be representative of a broader population with valve failure. Third, the pathological findings are not linked to clinical outcomes, making it impossible to compare these results to those in living patients. Lastly, the high mean Society of Thoracic Surgeons Predicted Risk of Mortality score in the entire population may have shortened the observation period due to high mortality rates, limiting our ability to fully assess the long-term durability of bioprosthetic heart valves. This is the largest pathological study that evaluated the mode of TAV failures in explanted valves from clinical trials and the first study to compare microCT findings with histology in cases of leaflet thickening. Histological examination identified 3 types of thrombi: acute, organizing, and organized (pannus), which could not be differentiated by microCT. Implants >30 days displayed signs of thrombus organization, while the majority of valves implanted for >1 year showed either organizing or organized thrombi. These findings may explain why oral anticoagulation therapy is not always effective, suggesting that identification of HALT, and response to treatment, may be most effective within the first year post-implantation. AcknowledgmentsSources of FundingDisclosuresSupplemental Material Supplemental Methods Tables S1–S8 Figure S1 Susan Chow, PhD, CMPP, an employee of Medtronic, provided editorial assistance for the preparation of this article. This study was funded by Medtronic. CVPath Institute, which is a nonprofit organization, partially supported this work. 
CVPath Institute has received institutional research support from R01 HL141425, RECOVER Initiative (OT2HL161847-01), NIH RECOVER (Researching COVID to Enhance Recovery) 480 (OT2HL161847-01, PATHO-PH1-SUB_04_22), Biomedical; 4C Medical, 4Tech, Abbott Vascular, Ablative Solutions, Absorption Systems, Advanced NanoTherapies, Aerwave Medical, Alivas, Amgen, Asahi Medical, Aurios Medical, Avantec Vascular, BD, Biosensors, Biotronik, Biotyx Medical, Bolt Medical, Boston Scientific, Canon, Cardiac Implants, Cardiawave, CardioMech, Cardionomic, Celonova, Cerus EndoVascular, Chansu Vascular Technologies, Children’s National, Concept Medical, Cook Medical, Cooper Health, Cormaze, CRL, Croivalve, CSI, Dexcom, Edwards Lifesciences, Elucid Bioimaging, eLum Technologies, Emboline, Endotronix, Envision, Filterlex, Imperative Care, Innovalve, Innovative, Cardiovascular Solutions, Intact Vascular, Interface Biolgics, Intershunt Technologies, Invatin, Lahav, Limflow, L&J Bio, Lutonix, Lyra Therapeutics, Mayo Clinic, Maywell, MDS, MedAlliance, Medanex, Medtronic, Mercator, Microport, Microvention, Neovasc, Nephronyx, Nova Vascular, Nyra Medical, Occultech, Olympus, Ohio Health, OrbusNeich, Ossiso, Phenox, Pi-Cardia, Polares Medical, Polyvascular, Profusa, ProKidney LLC, Protembis, Pulse Biosciences, Qool Therapeutics, Recombinetics, Recor Medical, Regencor, Renata Medical, Restore Medical, Ripple Therapeutics, Rush University, Sanofi, Shockwave, SMT, SoundPipe, Spartan Micro, Spectrawave, Surmodics, Terumo Corporation, The Jacobs Institute, Transmural Systems, Transverse Medical, TruLeaf, UCSF, UPMC, Vascudyne, Vesper, Vetex Medical, Whiteswell, WL Gore, and Xeltis. Dr Forrest has received grant support/research contracts and consultant fees/honoria/speakers bureau fees from Edwards Lifesciences and Medtronic. Dr Reardon has received research grants from Abbott, Boston Scientific, WL Gore Medical, and Medtronic. Dr Finn has received honoraria from Abbott Vascular, Biosensors, Boston Scientific, Celonova, Cook Medical, CSI, Lutonix Bard, Sinomed, and Terumo Corporation, and is a consultant to Amgen, Abbott Vascular, Boston Scientific, Celonova, Cook Medical, Lutonix Bard, and Sinomed. Dr Virmaniordis, CSI, Lutonix Bard, Medtronic, OrbusNeich Medical, CeloNova, SINO Medical Technology, Recor Medical, Terumo Corporation, W. L. Gore, and Spectranetics, and is a consultant for Celonova, Cook Medical, CSI, Edwards Lifesciences, Bard BD, Medtronic, OrbusNeich Medical, Recor Medical, SinoMedical Sciences Technology, Surmodics, Terumo Corporation, W. L. Gore, and Xeltis. The other authors report no conflicts. Supplemental Methods Tables S1–S8 Figure S1 |
Changes in human skin composition due to intrinsic aging: a histologic and morphometric study | 75ecd89b-df1d-43fb-9f20-35f5881088d5 | 11364716 | Anatomy[mh] | Skin is the largest human organ and constitutes the main protective barrier against the external environment (Kanitakis ), not only in mechanical terms but also thanks to its physicochemical characteristics and to microbiota (Sanford and Gallo ). This barrier protects against dehydration, mechanical damage, and biological and physical agents. It also exerts other vital functions, such as thermoregulation, secretion of waste products through sweat, immunological and endocrine functions, as well as its sensory function. Apart from the roles mentioned above, skin also factors in human relationships as one of the main determinants of beauty (Fink et al. ; Sakano et al. ), thus influencing social interactions. Products that claim to improve skin tone, glow, and texture, as well as achieving a homogeneous appearance (Sun et al. ) are in great demand from the pharmaceutical/cosmetics industry (Baumann ). For this reason, knowledge on different levels about skin aging is of special interest, not only at a medical level but also at a social one. The skin is histologically organized into three layers (Khavkin and Ellis ): epidermis, dermis, and hypodermis, within which we find structures known as skin appendages. The epidermis is a multistratified epithelium comprising mainly keratinocytes, but also melanocytes, Merkel cells, and Langerhans cells. The dermis is a connective tissue, made up of cells, fibers, and ground substance, and unlike the epidermis also contains vascular and nervous plexuses. The dermis is divided into papillary (superficial) dermis, which forms the dermal papillae, and reticular or deep dermis, where the fibers (both collagen and elastic) are thicker and horizontally organized, and the deepest part of the skin appendages is found. Skin aging is manifested by a series of well-known clinical characteristics such as atrophy, wrinkles, rough texture, laxity or loss of elasticity, depigmentation, vascular ectasias, and neoplasms (Zhang and Duan ; Pezzini et al. ). Two mechanisms of skin aging have been described: extrinsic aging is induced by external factors, including ultraviolet radiation (Berneburg et al. ), while intrinsic or chronological aging is regulated by cellular senescence (Csekes and Račková, ), oxidative stress (Fisher et al. ), and the role of some metalloproteinases, among others. Since skin aging is an important factor in not only the medical but also the social sphere, the objective of the present study is to morphometrically characterize microscopic changes in human skin composition due to chronological aging. Specifically, alterations in human skin elements due to intrinsic aging are demonstrated via comprehensive histological analysis of human samples isolated from the periumbilical area, a photoprotected body region. Sample size and study populationMicroscopic and immunohistochemical analysisMorphometric analysis of the imagesStatistical analysisThis study conforms to the principles for use of human subjects outlined in the Declaration of Helsinki, and the study protocol was approved by the local research ethics committee. Abdominal skin samples were obtained from the autopsy of 25 patients, who were classified into one of four groups by age: group 1 (0–12 years; n = 5), group 2 (13–25 years; n = 5), group 3 (26–54 years; n = 10), and group 4 (≥ 55 years; n = 5). 
Additional information on case numbers, age, and sex is specified in Table . Samples, approximately 2.5 cm long and 2.5 cm wide, were isolated from the periumbilical region adjacent to the midline, chosen for its photoprotected location, where aging can be attributed mainly to intrinsic causes. Samples were also examined using hematoxylin–eosin and orcein staining techniques to rule out the presence of solar elastosis (characterized by increased elastic material in the papillary dermis), considered one of the main histological characteristics of photoaging (Hunzelmann et al. ).
Human skin samples were fixed in 4% paraformaldehyde for 24 h and subsequently embedded in paraffin. Sections of 5 μm were obtained from the resulting blocks using a microtome and afterwards fixed to double gelatin-coated glass slides by heating at 60 °C in an oven for a minimum of 30 min. Hematoxylin–eosin stain was utilized for histological analysis. To further characterize the skin samples, Masson’s trichrome for total collagen, orcein for elastic fibers, Alcian blue counterstained with periodic acid–Schiff staining for coloring mucins in blue, toluidine blue for mast cell granule metachromasia, and the Feulgen reaction to reveal cell nuclei were performed. Further details on these protocols are specified in Table .
Immunohistochemical stains using specific antibodies were employed to evaluate cell proliferation (anti-Ki67), Merkel cells (anti-CK20), Langerhans cells (anti-CD1a), and blood vessels (anti-CD31). Briefly, sections were first subjected to different antigen retrieval methods (Table ). For heat-induced epitope retrieval, sections were incubated with citrate buffer pH 6.0 or Tris/EDTA buffer pH 9.0 (Dako, Glostrup, Denmark) at 121 °C for 3 min. After peroxidase inactivation (0.3% H₂O₂) and blockade with horse serum, sections were incubated overnight (at 4 °C) with the specific primary antibodies diluted in PBS/BSA 0.1%. Specific labelling was detected with a biotin-conjugated goat anti-mouse IgG antibody (1:500 dilution, Dako, Glostrup, Denmark). Further information about the primary antibodies and antibody concentrations is detailed in Table .
Five microscopic photographs of representative areas were taken for each case and stain, using a Leica DM3000 (Leica Microsystems, Wetzlar, Germany) optical microscope. Specific information regarding how photographs were taken to quantify the specific parameters in each stain is detailed in Table . A total of 1879 images were morphometrically analyzed using Image ProPlus 7.0 software (Media Cybernetics Inc, Rockville, MD); analyses were performed in a blinded manner on coded slides.
The Kolmogorov–Smirnov normality test was performed for each variable. Variables were expressed as the mean and standard deviation. Unpaired Student t tests were used for comparisons, and statistical significance was considered for a two-tailed p value of less than 0.05. GraphPad Prism 9.0 (GraphPad Software, Boston, USA) was used throughout.
Results are presented below for changes in epidermal and dermal width at different ages, the effects of intrinsic aging on epidermal cellularity, changes in dermal composition with age, and skin appendages (sweat glands and hair follicles).
To provide a general overview of variations in skin layers and their relationship with aging, quantification of the thickness of the epidermis, epidermal ridges, and dermal papillae was first performed. Figure a displays representative images taken at lower magnifications from the four groups.
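Before turning to the individual measurements, the group comparison just described (a normality screen followed by unpaired two-tailed t tests at the 0.05 level) can be sketched as follows. This is an illustrative Python version only; the original analysis used GraphPad Prism, and the helper name and the Kolmogorov–Smirnov call against a fitted normal distribution are assumptions.

import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    # a, b: measurements (e.g., epidermal thickness in micrometres) for two age groups.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Normality screen: Kolmogorov-Smirnov test of each sample against a normal
    # distribution fitted to that sample.
    both_normal = all(
        stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue > alpha
        for x in (a, b)
    )
    # Unpaired Student t test (two-tailed by default in scipy).
    t_stat, p_value = stats.ttest_ind(a, b)
    return {"both_normal": both_normal, "t": t_stat, "p": p_value, "significant": p_value < alpha}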
A clear reduction in epidermal thickness was detected in the oldest group compared to group 1 ( p = 0.01), 2 ( p < 0.05), and 3 ( p < 0.05) (Fig. b). Epidermal ridge thickness peaks in youth (group 2), and a significant decrease ( p < 0.05) is seen between groups 3 and 4 (Fig. c). A similar trend was observed studying the dermal papillae (Liao et al. ), whose length was significantly shorter in groups 3 and 4 compared to group 2 ( p < 0.05) (Fig. d). Furthermore, an almost significant reduction was demonstrated comparing the papillary measurements of people over 55 years old with adults between 26 and 54 years ( p = 0.08). Finally, a slight although not statistically significant increase was seen in ridge and papillae measurements between group 1 and 2 ( p = 0.09). To analyze the stability of the dermoepidermal junction, the interdigitation index, which considers the surface contact between both layers of the skin, was calculated. In comparison to younger adults (group 2), the interdigitation index was significantly reduced ( p < 0.05) in the 26–54-year-old group and this decay was more pronounced in patients from the oldest group ( p < 0.05) (Fig. e). Lastly, papillary and reticular dermis thicknesses were measured, revealing a slight increase in papillary dermal thickness with age, statistically significant when comparing group 4 (≥ 55 years) with younger adults (groups 2 and 3) ( p < 0.05), and almost significant ( p = 0.07) when comparing it with the youngest age group (≤ 12 years) (Fig. f). In contrast, our study of changes in reticular dermis thickness (Fig. g) showed above all an increase after childhood and a posterior regression in old age. This translates into a significant increase in its width in group 2 ( p < 0.01), group 3 ( p < 0.01), and group 4 ( p < 0.05) compared to group 1 (children), being significantly lower in the older group (≥ 55) than in younger people (group 2) ( p = 0.059). Numerical results are presented in Table . After epidermal and dermal width were compared among different age groups, alterations in epidermis composition due to intrinsic aging were analyzed. According to our data, there was a slight decrease in epidermal cellularity ( p > 0.05) (Fig. b). Specifically, the percentage of melanocytes in the basal layer was significantly increased (Fig. d), especially comparing group 1 with groups 2 ( p < 0.05) and 3 ( p = 0.05), whereas no differences were observed regarding the proportion of basal cells in the epidermis (Fig. c). To further scrutinize the changes in other cell types, the presence of Merkel cells (neuroendocrine cells, type 1 mechanoreceptors involved in touch sensation) and Langerhans cells (antigen-presenting dendritic cells) was determined. Concerning Merkel cells, dynamic changes caused by intrinsic aging were noticed: the number of mechanoreceptors was highest in the early stages of life, with a sharp drop in the 13–25 years group and a partial recovery afterwards (Fig. f). Contrariwise, the number of Langerhans cells rose with increasing age (Fig. g). Lastly, the mitotic index, resulting from the ratio between mitotic epidermal cells and total epidermal cellularity, was calculated as a marker of tissue regeneration capacity. A dramatic drop in this index was detected in the oldest group compared to the 0–12 years ( p < 0.01) and the 26–54 years groups ( p < 0.05) (Fig. e). Numerical results are shown in Table . Morphometric analysis of alterations in different dermal components due to intrinsic aging was also performed. 
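For readers unfamiliar with the interdigitation index used above, one common way to compute it is as the length of the traced dermo-epidermal junction divided by the length of the straight baseline between its endpoints, so that values close to 1 indicate a flattened junction. The sketch below implements that definition in Python; because the exact formula used in the study is not reproduced here, this implementation should be read as an assumption for illustration.

import numpy as np

def interdigitation_index(x, y):
    # x, y: coordinates (e.g., in micrometres) of the dermo-epidermal junction
    # traced on a histological section, ordered from one end to the other.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    junction_length = np.hypot(np.diff(x), np.diff(y)).sum()   # undulating contour length
    baseline_length = np.hypot(x[-1] - x[0], y[-1] - y[0])     # straight end-to-end distance
    return junction_length / baseline_length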
In both the papillary and reticular dermis, a gradual reduction in total cell count was observed with aging, with a significant reduction in the number of cells in the 26–54-year-olds and the ≥ 55 group compared to the children in group 1 ( p < 0.01) (Figs. b and b). In contrast, a clear rise in the number of mast cells (Kritas et al. ), white blood cells that reside in connective tissues and play a fundamental role in innate immune defense, was detected inside the papillary dermis in group 2 compared to group 1 ( p < 0.05), followed by a decrease in adults between 26 and 54 years compared to people aged 13–25 years ( p < 0.05), while the cell count remained unchanged in group 4 (Fig. c).
In terms of extracellular matrix, a significant augmentation in the density of elastic fibers within the reticular dermis was detected in group 3 compared to groups 1 ( p < 0.05) and 2 ( p = 0.059), as was a slight although not significant trend towards an increase in old age (Fig. d). Regarding glycosaminoglycans (GAG), crucial for water retention and preserving tissue turgor, differing behavior was observed in the two dermal regions. In the papillary dermis (Fig. d), the percentage of area occupied by GAG increased significantly ( p < 0.01) from youth onwards and remained high with advancing age (in groups 3 and 4). In contrast, in the reticular dermis, a fall in the presence of GAG was evidenced in group 3 ( p < 0.01) and group 4 ( p < 0.05) compared to the young people from group 2 (Table ).
Lastly, dermal vascularization (Figs. e and e) was quantified in both areas as the number of vessels per area, showing a significant reduction with aging. In both the papillary and reticular dermis, the number of microvessels was lowest in the oldest group compared to people between 13 and 25 years old ( p < 0.05).
Both sweat glands and hair follicles were reduced in number in the older age groups compared to children (group 1) (Fig. ). Specifically, significant differences ( p < 0.001) in the number of hair follicles were found between the children and groups 3 and 4. As regards sweat glands, statistically significant differences ( p < 0.01) also resulted from the comparison between groups 1 and 2. Numerical results are shown in Table .
According to our data, intrinsic aging has a dramatic effect on human skin composition, as reflected by reduced epidermal thickness, a flattened dermo-epidermal junction, and an increase in papillary dermis thickness, probably due to the accumulation of GAG. Additionally, reticular dermis thickness decays with age, mirroring the decline in GAG and elastic fibers. Lastly, the number of vessels, cells, and appendages in the dermis diminishes throughout a person’s lifetime (Fig. ).
The discussion below addresses the physiology and microscopic structure of human skin, morphometric changes in epidermal components due to intrinsic aging, and the effect of intrinsic aging on dermis composition.
Skin is the largest human organ, extending over more than 2 m² of surface and representing about 15% of total body weight (Kanitakis ). It provides the principal barrier against dehydration, mechanical damage, and biological (i.e., microorganisms and toxins) and physical agents, including UV radiation and temperature changes. Furthermore, the skin plays an active role in thermoregulation and the secretion of waste products through sweat, as well as in immunological, sensory, and endocrine regulation. In terms of microscopic structure (Khavkin and Ellis ), the skin is stratified into three layers, from superficial to deep: epidermis, dermis, and hypodermis.
The epidermis, ectoderm-derived, is the outermost layer of the human body, and its predominant cells are keratinocytes (producers of keratin for skin semipermeability maintenance), but it also contains melanocytes, Merkel cells, and Langerhans cells. The dermis, in contrast, is made up of fibers, ground substance, and cells, and contains vascular and nervous plexuses. Macroscopic changes in human skin due to aging are easily identifiable (Haydont et al. ), including wrinkles, atrophy, irregular pigmentation, and laxity (Zhang and Duan ). The aging process is regulated by two different mechanisms: extrinsic and intrinsic aging. The former is induced by external factors, known as the exposome: mainly ultraviolet radiation (Berneburg et al. ), although others such as environmental pollution and smoking can also play a role. Conversely, intrinsic or chronological aging is a consequence of cellular senescence (Csekes and Račková, ), a reduced capacity for cell proliferation, seen especially in basal cells (Zhang and Duan ); oxidative stress (Fisher et al. ) (a dysfunction of mitochondrial molecules that leads to the formation of reactive oxygen species), and even the activity of some metalloproteinases. The present research is focused on examining chronological skin aging, thus leaving aside the effect of external factors such as photoaging, given that skin samples are taken from a photoprotected body region, the periumbilical area using morphometric analysis. This technique has been previously employed to determine the microscopic changes in skin parameters due to different interventions (Costello et al., ; Demyashkin et al. ; Gogly et al. ). Turning to changes in epidermal thickness due to intrinsic aging, a decrease over age, especially notable in the oldest group, was detected. Firstly, one might think that this reduction could be due to a drop in the number of epidermal cells. To corroborate this hypothesis, total epidermal cellularity and mitotic index (reflecting the relationship between dividing and total cells) were morphometrically assessed. In terms of the mitotic index, epidermal regeneration has been shown to lessen with age, leaving a smaller number of keratinocytes in older subjects. Several mechanisms by which the mitotic index of keratinocytes is altered with age have been previously reported, such as decreasing levels of epidermal growth factor (Wang et al. ), and rising intracellular calcium (Micallef et al. ). Likewise, the augmentation in keratinocyte apoptosis found with aging (Wang et al. ) would also help explain the reduced count observed in our results. However, paradoxically, our data showed no variation in total epidermal cellularity, although a decrease in keratinocytes was suggested. To explain this observation, we also measured other epidermal cell types (i.e., melanocytes, Langerhans cells, and Merkel cells), observing a modest increase with age. These results might account for the stable total cell count in the epidermis. Melanocytes are cells that produce melanin, the pigment that protects against UV rays (Brenner and Hearing ). However, although these increase with age, this does not imply greater photoprotection if one takes into account the alterations in its functions (including impaired melanosome transport or glucose metabolism) demonstrated in some studies (Park et al. ). Contrariwise, Langerhans cells are cutaneous dendritic cells that exert immunological protection, being mainly antigen presenters (Romani et al. ). 
These are also elevated in the oldest group, probably due to immune system deregulation in these older individuals (Fülöp et al. ; Pawelec ). Lastly, although no significant changes were found in Merkel cells (sensory neuroendocrine cells), our results are in line with those reported by Moll et al. , who observed these cells in high concentration at the fetal level, they were decreased during childhood, and in older ages their production may increase again, probably due to greater exposure to harmful stimuli (Wright et al. ). Taken altogether, these findings suggest that the decrease in epidermal thickness caused by intrinsic aging most likely results from a loss in cellular turgor of keratinocytes (which become shorter and wider; Farage et al. ). This process is due to dehydration, mainly of the stratum corneum, because of alterations in binding proteins and lipid structures (Waller and Maibach ; Rogers et al. ). Dehydration would also contribute to maximize epidermal inflammation (Wang et al. ), which further supports the results obtained in this field. Our results from dermal layer analysis highlighted an increase in papillary dermis and a reduction in reticular dermis thickness with aging. To elucidate this, the area occupied by GAG was first quantified. According to our data, intrinsic aging influences the presence of GAG, as reflected by an augmentation in the papillary layer and a reduction in reticular dermis. Admittedly, these results are controversial in comparison with the current literature. Some studies have suggested that intrinsic skin aging induces overexpression of GAG (Timár et al. ), while others have reported that photoaging, but not intrinsic aging, provoked increased GAG expression (Waller and Maibach ). Further research has uncovered that certain key GAG skin components, like decorin proteoglycan, undergo a reduction in size, leading to a decrease in dermal volume occupied by GAG (Li et al. ). Moreover, the increased papillary dermis can be viewed as an indirect consequence of the reduced number and length of ridges and papillae, as a result of which this space becomes occupied by the papillary dermis. Since fibers are also components of dermal interstitium, modifications in their presence were also determined in subjects at different ages. A previous study from our group reported that the area of the papillary dermis occupied by collagen fibers and the thickness of its bundles were reduced (Marcos–Garcés et al. ). In terms of elastic fibers, although some studies suggest no differences in the density of elastic fibers due to intrinsic aging (Timár et al. ), our data indicates that the number of elastic fibers within reticular dermis is progressively reduced, especially when reaching adulthood. Contrasting our results with those of other studies, it seems that cutaneous elastin undergoes a series of changes in composition and structure over time (Pasquali-Ronchetti and Baccarani-Contri ), while elastic fibers show tortuosity and distortion which cause loss of elasticity (Imayama and Braverman ). Thus, unlike the impact of photoaging, which produces an increase in elastic fiber density (Hunzelmann et al. ), intrinsic aging therefore leads to progressive degradation (Vitellaro–Zuccarello et al., ) as well as cumulative damage in these fibers (Waller and Maibach ). In addition, alteration of skin elasticity with age may result from fibroblasts loss (Gunin et al. ), reduced biosynthetic activity, and variations in extracellular matrix macromolecules (Frances et al. ). 
A reduction in dermal cellularity was noted in both the papillary and reticular layers, thus lending credence to the assertion that fibroblast proliferation is decreased (Gunin et al. ). Initially, the possibility of an increase in the inflammatory cell population had been suggested (Lee et al. ), since other studies have shown a gradual increase in both mast cells and CD45 + cells (Gunin et al. ). However, our results regarding mast cell density displayed no changes affecting the older group. Finally, skin appendages (hair follicles and sweat glands) are notably reduced from youth onwards (Kamberov et al. ), which could be explained by a reduction in fibroblast number, circulating growth factors levels, and microvessels density. To corroborate this hypothesis, we subsequently analyzed changes in the number of capillaries at different age stages. According to our data, the number of dermal vessels peaks in children (0–12 years), diminishing notably in group 4 (≥ 55). These results are in line with a previous study demonstrating that a decrease in dermal vascular density is related to downregulation of the vascular endothelial growth factor signaling cascade and lower levels of von Willebrand factor (Gunin et al. ). Despite several studies that agree on the reduction of cutaneous blood flow (Waller and Maibach ), this finding is a bit controversial as other studies support just the opposite (Chung et al. ). Nonetheless, this reduction could explain numerous skin changes attributed to intrinsic aging, being partly a cause and consequence of atrophy, for example, of the appendages. Intrinsic aging has a dramatic effect on skin composition, as reflected by a decay in epidermal thickness and a flattened dermo-epidermal junction. The papillary dermis becomes wider and contains a larger GAG concentration, while exactly the opposite evolution is observed in the reticular layer, together with a decline in the quantity of elastic fibers. Both dermal regions experience a drop in vascularization, cellularity, and appendages. This study establishes the basis of skin chronological evolution, highlighting the need for further research on the molecular mechanisms responsible for intrinsic aging and potential targets of antiaging strategies. |
Validation of serrated polyps (SPs) in Swedish pathology registers | 39f9d3f7-4cf7-40cb-a071-69a5bf7f9c86 | 6938642 | Pathology[mh] | Colorectal cancer (CRC) is the third most common cancer and the third leading cause of cancer death worldwide. It kills over 600,000 people annually, accounting for 8% of cancer-related deaths . Adenomatous polyps, now referred to as conventional adenomas, have been regarded as the main precursor of CRC, but in recent years a new pathway to CRC has been identified, termed the serrated pathway . Serrated polyps (SPs) are characterised by a saw-toothed appearance of colonic crypts. As per recommendations from the World Health Organisation (WHO) , SPs are classified into three subgroups: hyperplastic polyps (HPs), traditional serrated polyps (TSAs) and sessile serrated adenomas/polyps (SSA/Ps)(Fig. 2 in ). The serrated pathway to CRC is mainly believed to originate from SSA/Ps, which are estimated to represent up to 20% of all SPs . Some data suggest that cancers evolving through the serrated pathway may account for up to 15–30% of all CRC cases, and that they are significantly overrepresented in interval cancers , i.e. CRC occurring before the next recommended screening after an initially negative finding. Even though the adenoma-carcinoma pathway still accounts for the majority of the CRC burden, a recent study comparing the risk of CRC development found that the increased risk of CRC in individuals with SPs is similar or higher than that seen in individuals with conventional adenomas . Little is known about the natural history of SP, which may in part be due to the lack of availability of large-scale data. Through the ESPRESSO (Epidemiology Strengthened by histoPathology) study , we contacted all pathology departments ( n = 28) in Sweden to construct a cohort of individuals with an SP diagnosis according to computerised histopathology reports. We then retrieved patient charts from 106 randomly selected individuals with a record of SP. The primary purpose of this study was to validate SP diagnosis according to computerised histopathology reports against patient chart data. A secondary aim was to describe the characteristics of individuals with SPs.
We validated SP diagnosis based on computerised histopathology reports in a random subset of individuals through a structured, retrospective review of histopathology reports and patient charts. The ESPRESSO study consists of gastrointestinal histopathology reports from 2.2 million unique individuals with a total of 6.1 million separate data entries. Some 53.9% of individuals had been biopsied more than once. Data on gastrointestinal histopathology reports were collected between October 12, 2015 and April 15, 2017 from all pathology departments in Sweden ( n = 28). Overall, we had data on 1,618,953 colon biopsies and 771,511 rectal biopsies. Through the unique personal identity number assigned to all Swedish residents, histopathology data were linked to the Swedish national health registers (Patient Register, Cause of Death Register, Cancer Register, Medical Birth Register, Prescribed Drug Register, the LISA database with socioeconomic data, as well as the Total Population Register). Details about ESPRESSO and registry linkage have been described previously. For the current study on SPs, we included individuals with a colorectal biopsy (topography codes: T67–68) with the following Systematised Nomenclature of Medicine (SNOMED) codes: M82160, M8216, M82130, M8213. We also included individuals with a colorectal biopsy of which the histopathology report free text listed “serrated polyp” (Swedish “sågtand(ad)”).
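As an illustration of the inclusion logic described above, the selection step can be sketched as a simple filter over histopathology records. This is a schematic example only: the field names are hypothetical and do not reflect the actual ESPRESSO database schema, whereas the topography codes, SNOMED codes and free-text marker are those stated in the text.

```python
# Minimal sketch of the inclusion criteria described above.
# Field names are hypothetical; codes are those reported in the text.
TOPOGRAPHY_CODES = {"T67", "T68"}          # colon, rectum
SNOMED_CODES = {"M82160", "M8216", "M82130", "M8213"}
FREE_TEXT_MARKER = "sågtand"               # Swedish stem for "serrated"

def is_sp_record(record: dict) -> bool:
    """Return True if a histopathology record meets the SP inclusion criteria."""
    colorectal = any(record.get("topography", "").startswith(t) for t in TOPOGRAPHY_CODES)
    snomed_hit = record.get("snomed", "") in SNOMED_CODES
    text_hit = FREE_TEXT_MARKER in record.get("free_text", "").lower()
    return colorectal and (snomed_hit or text_hit)

# Example usage with a toy record
example = {"topography": "T68", "snomed": "M8213", "free_text": ""}
print(is_sp_record(example))  # True
```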
Power calculation using EpiTools indicated that a minimum of 139 individuals was needed to obtain a positive predictive value (PPV) for SP of 90% with a 95% confidence interval (95%CI) range of 85–95% (using an alpha of 0.05 and a beta of 0.20). For this validation, we requested patient charts from a random sample of 160 individuals with a histopathology report of SPs from five Swedish counties. We were able to retrieve patient chart data from 126 individuals, of whom 106 had sufficient information for our validation (Fig. ).
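The EpiTools computation is not detailed here, but the reported minimum of 139 individuals is consistent with the standard normal-approximation sample-size formula for estimating a proportion with a given confidence-interval half-width; the following sketch is based on that assumption.

```python
from math import ceil

def sample_size_for_proportion(p: float, half_width: float, z: float = 1.96) -> int:
    """Normal-approximation sample size to estimate a proportion p
    with a confidence interval of +/- half_width."""
    return ceil(z**2 * p * (1.0 - p) / half_width**2)

# Expected PPV 90%, desired 95%CI of 85-95% (half-width 0.05)
print(sample_size_for_proportion(0.90, 0.05))  # 139
```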
We defined a true SP as having a consistent histopathology report and a patient chart supporting an SP diagnosis. Individuals with an SP diagnosis could have one or multiple SPs. Assessment of histopathology reports and patient charts was executed by the principal author (SRB). Uncertain cases were discussed with JFL and MS. If no consensus was reached, the case was considered inconsistent with SP.
Data from patient charts were extracted using a standardised form, similar to the form used by Svensson et al. in their validation of microscopic colitis. Data extraction covered the period from 2 years before the diagnosis until March 2018. The data from the patient charts mainly included patient history, laboratory data, referral letters, and endoscopy and histopathology reports. Individuals were excluded if no histopathology report was available or if the data were insufficient or incomplete.
The main outcome of this study was the PPV for SP diagnosis in the 106 individuals with patient charts containing sufficient data. To identify any potential differences, results were stratified according to search method (SNOMED codes or free-text search). Given the changing nomenclature of SPs over time, we also analysed the data by year of diagnosis. For individuals identified by SNOMED codes, we validated the SP location by comparing the topography code with the patient chart. SNOMED codes were also used to identify SSA/Ps, for which a separate PPV was calculated. We estimated 95%CIs with the Wilson score interval using EpiTools . In addition to retrieving colonoscopy and histopathology reports, we collected data on sex, age, year of diagnosis, smoking, obesity, comorbidity, diagnostic tools and indication for endoscopy. For evaluation of anaemia, we used 132 g/L for men and 122 g/L for women as the lower limits of normal haemoglobin concentration as proposed by Beutler and Waalen . The size of polyp characteristics was determined as either larger or smaller than 10 mm as this size has been proposed as the threshold for determining the future management of SPs . Other aspects investigated were number of polyps (0, 1, 2–3 or ≥ 4), location (proximal, distal, rectal) and grade of dysplasia (none, low, high). The proximal colon was defined as the ileocecal valve until the splenic flexure, followed by the distal colon until the last 10 cm of the gastrointestinal tract that represent the rectum. For the descriptive analysis, we calculated the population and polyp characteristics according to SP subgroups. To reflect the previous version of the WHO recommendations on SP classification, SPs described as serrated adenomas (SAs) or mixed polyps with a serrated component were deemed consistent with SSA/P . Nonetheless, data were also analysed separately for these polyp subgroups. Data on false positive SPs were also presented separately.
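As a transparency check, the Wilson score interval can be reproduced with a few lines of code; using the overall result reported below (101 of 106 confirmed), it returns the published 95%CI of 89–98%. This is an illustrative reimplementation, not the EpiTools code itself.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(101, 106)   # overall PPV: 101 confirmed of 106 reviewed
print(f"PPV = {101/106:.0%}, 95%CI = {lo:.0%}-{hi:.0%}")  # PPV = 95%, 95%CI = 89%-98%
```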
The charts of 106 individuals were retrieved from pathology centres distributed in five counties in Sweden: Dalarna, Norrbotten, Skaraborg, Stockholm and Örebro.
SPs were confirmed in 101/106 individuals, yielding a PPV of 95% (95%CI = 89–98%) (Table ). Of the five individuals with false positive SPs, one had SP ruled out by the pathologist. The other four individuals had SPs mentioned in the histopathology report but sufficient evidence to confirm the diagnosis was lacking. No false positive case was found among individuals identified by SNOMED codes ( n = 52), resulting in a PPV of 100% (95%CI = 93–100%). For individuals identified by free-text search of histopathology reports ( n = 76), the PPV was 93% (95%CI = 86–97). Out of these, 22 individuals also had a SNOMED code. By year of diagnosis, the PPV was 89% (95%CI = 69–97%), 96% (95%CI = 81–99%) and 97% (95%CI = 89–99%) for individuals diagnosed before 2001 ( n = 19), between 2001 and 2010 ( n = 26) and after 2010 ( n = 61), respectively. All individuals diagnosed before 2000 were identified by free-text entries alone, while 69% ( n = 36) of the individuals identified by SNOMED codes were diagnosed after 2010. For individuals identified by SNOMED codes, the recorded location was accurate in 49/52 (94%, 95%CI = 84–98%) cases of all SP histopathology reports. The three individuals with an incorrect recorded topography code had been biopsied in the distal (sigmoid) colon but recorded as having their polyp in the rectum (T68). Only five individuals had a subsite-specific topography code within the colon (T671-T677), all of which were accurate. Most individuals with SSA/Ps were identified by SNOMED codes ( n = 49, 70%), whereas most individuals with HPs were identified by free-text searches ( n = 31, 91%). Of all individuals identified by SNOMED codes, SSA/Ps were confirmed in 49/52 individuals, resulting in a PPV of 94% (95%CI = 84–98%). The false positive cases consisted of two TSAs ( n = 2) and one HP ( n = 1).
Of the 106 validated individuals, 50 were female (47%) and the median age at diagnosis was 70 years (Table ). Most SP cases were diagnosed through colonoscopy ( n = 86, 81%), with smaller proportions diagnosed through partial lower endoscopy (sigmoidoscopy, rectoscopy or proctoscopy, n = 15, 14%) or hemicolectomy ( n = 5, 5%). The data were stratified as follows: HP ( n = 34, 32%), TSA ( n = 3, 3%), SSA/P ( n = 70, 66%), unspecified SP ( n = 3, 3%), and false positive SP ( n = 5, 5%). The SSA/P subgroup also included polyps described as serrated adenomas ( n = 51) and mixed polyps ( n = 12). Because some individuals had polyps of different subtypes ( n = 9), the sum of individuals in the subgroups exceeds the total number of individuals reviewed. Notably, the HP subgroup was diagnosed earlier than the SPs overall (median year: 2003 vs. 2012) and there were no polyps specifically described as an SSA/P or TSA before 2011. Polyps described specifically as serrated adenomas were reported as early as 2002. Otherwise, population characteristics were similar in the different SP subgroups. At diagnosis, 16 (15%) individuals were current smokers, whereas 14 (13%) had a record of earlier smoking (Table ). Obesity (body mass index, BMI ≥30 or indication of obesity in the patient chart) was seen in 12 individuals (11%). Heredity for CRC, intestinal polyposis syndromes, or both was reported in seven individuals (7%). Common comorbidities consisted of diverticulosis ( n = 45, 42%), conventional adenomas ( n = 33, 31%), CRC ( n = 19, 18%) and inflammatory bowel disease (IBD) ( n = 10, 9%). Comorbidities were defined as having a diagnosis prior to or in conjunction with a diagnosis of SP, except for conventional adenomas for which prior diagnoses were not considered.
Most individuals underwent endoscopy for clinical symptoms ( n = 64, 60%) (Table ). For individuals with symptomatic indications and SP as their only significant endoscopic finding ( n = 28), the most frequent symptoms were change in stool form (diarrhoea or obstipation, n = 21, 75%), change in stool colour (haematochezia or melena, n = 14, 50%) and anaemia ( n = 11, 39%). Endoscopies carried out due to an asymptomatic indication ( n = 39, 37%) mostly consisted of surveillance endoscopies due to a history of previous polyps or adenomas ( n = 18, 46%), a history of CRC (n = 3, 8%) or a history of IBD ( n = 5, 13%). CRC screening was also a frequent indication of asymptomatic endoscopies ( n = 8, 21%), out of which six individuals also had a positive faecal occult blood test (FOBT) prior to the endoscopy. The most frequent symptoms in individuals in our cohort, regardless of endoscopy indication, were change in stool form ( n = 47, 44%), change in stool colour ( n = 36, 34%), anaemia ( n = 30, 28%), abdominal pain ( n = 22, 21%), weight loss ( n = 9, 8%) and fatigue (n = 8, 8%). Other less frequent symptoms included nausea (n = 4, 4%), anal burning (n = 2, 2%), fever ( n = 1, 1%), loss of appetite (n = 1, 1%) and dyspnoea (n = 1, 1%). Fifteen individuals (14%) had a positive FOBT before endoscopy. In total, clinical signs of gastrointestinal bleeding (change in stool colour, anaemia or FOBT) were seen in 58 (55%) individuals in total and in 28 (61%) individuals with SP as their only endoscopic finding.
According to endoscopy reports, 44 (42%) individuals had one polyp at diagnosis, 33 (31%) had 2–3 polyps and 27 (25%) had ≥4 polyps (Table ). Two persons with a false positive SP had no certain polyps. The total number of polyps was 155, which could be classified into HPs ( n = 61, 39%), TSAs ( n = 3, 2%), SSA/Ps ( n = 80, 52%), unspecified SPs (n = 8, 5%) and false positive SPs (n = 3, 2%). The size of the polyps was determined as either large (≥10 mm) or small (< 10 mm). In all, there were 58 (37%) small polyps and 37 (24%) large polyps. Only four (7%) HPs were considered large, out of which two were proximal. In contrast, all TSAs were large (n = 3, 100%), whereas SSA/Ps presented a more even distribution regarding size (small: n = 31, 39%, large: n = 28, 35%). In terms of location unspecified SPs were predominantly found in the proximal colon ( n = 7, 88%), whereas SSA/Ps were generally found either proximally ( n = 39, 49%) or rectally ( n = 26, 33%). TSAs were seen in the distal colon (n = 1, 33%) or rectum (n = 2, 67%), whereas HPs were relatively evenly distributed. Evaluating the grade of dysplasia, 26 polyps had no sign of dysplasia (17%). Low-grade dysplasia was seen in 70 (45%) polyps and high grade in 5 (3%). Polyps with no dysplasia were overrepresented among HPs ( n = 16, 26%) and unspecified SPs ( n = 5, 63%). In 43 (70%) HPs degree of dysplasia was not specified. The number of polyps with unspecified degree of dysplasia in the other subgroups was 0 (0%) for TSA, 9 (11%) for SSA/P and 2 (25%) for unspecified SP. Most TSAs and SSA/Ps exhibited low-grade dysplasia (TSA: n = 3, 100%; SSA/P: n = 61, 76%); cases of high-grade dysplasia were only seen in SSA/Ps (n = 5, 6%).
Our study found a high PPV (95%, 95%CI: 89–98%) for SPs according to colorectal histopathology reports based on SNOMED codes and free-text searches. The high PPV was similar over time. This finding suggests that histopathology reports are a reliable source to identify individuals with SPs. The PPV of this study is comparable with that of other gastrointestinal diagnoses based on histopathology: celiac disease (PPV 95%) and microscopic colitis (PPV 95%) . The high specificity for SPs is not surprising given that the assignment of the SNOMED code and free-text diagnosis is already based on histopathological evaluation. As to search method, the use of SNOMED codes to identify individuals with SPs had a higher specificity than the use of free-text search (PPV: 100% vs. 93%), but still the PPV using free text is consistent with the accuracy of having a physician-assigned diagnosis in the Swedish Patient Register (95%CI PPV = 85–95%) . Furthermore, the high PPV of SSA/P among individuals identified through SNOMED codes (94%, 95%CI: 84–98%) indicates that an exclusive use of SNOMED codes can serve to target these polyps specifically. For individuals identified by SNOMED codes, the corresponding topography codes can also be used to determine the location of the SPs and SSA/Ps (PPV: 94%; 95%CI = 84–98%). The cases of incorrect topography codes exclusively concerned individuals with a rectal topography code (T68), which were classified as distal (sigmoidal) according to our validation. This discrepancy occurred because we mainly used endoscopy reports to determine the macroscopic location of the polyps, whereas topography codes are assigned by the pathologist and sometimes based on histological appearance. Subsequent to the recognition of the different SP subgroups, several studies have investigated their respective prevalence. HPs have consistently been shown to be the most common subtype, representing 70–90% of all SPs . Likewise, SSA/Ps have been shown to represent up to 10–25% of all SPs while TSAs represent about 1% . In our study, we primarily targeted SSA/Ps. As such, we did not include SNOMED codes for HPs. Consequently, the proportion of HPs in our cohort does not reflect the overall proportion among SPs, as HPs are likely to have been included when they have been described as “serrated” in the histopathology report. As a result, most individuals with HPs have been identified by free-text searches ( n = 31, 91%). Given the evolving nomenclature of SPs, a large number of polyps in our study were described following the previous version of the WHO classification of colorectal polyps published in 2000 . This version recognised HPs separately and SAs as a subtype under adenomas. Within the SA subtype, there was no differentiation between SSA/Ps and TSAs. As such, polyps described as serrated adenomas can represent any of these two. However, given the predominate prevalence of SSA/Ps, it is reasonable to assume that the number of TSAs described as serrated adenomas is small. It is also reassuring to note that the specific SP descriptions correlated well with the publication year of the different WHO classifications, i.e. polyps described as serrated adenomas began to appear after 2000 and polyps described as SSA/Ps or TSAs were found only after 2010. The individuals in our study were equally distributed in terms of sex (female: 47%). However, the mean age of the cohort was 70 (range: 35–93) years, which is slightly higher than that found in previous studies . 
The age difference may, to some extent, be explained by the high proportion of dysplastic SPs in our study ( n = 75, 48%). Heredity, smoking and obesity have all been established as risk factors for SP, with smoking being more strongly linked to SSA/Ps than to the other subgroups . In this study mention of risk factors in the patient chart was regarded as indicative of that risk factor, while, for instance, an individual in which smoking was not mentioned in the patient chart was regarded as a non-smoker. Thus, the prevalence of some risk factors may have been underestimated. For instance, only 11% of our individuals had a record of obesity compared with 16% in the general Swedish population despite evidence showing that obesity is a risk factor for SP . Several studies have established low detection as a significant challenge in SP research, and endoscopy screening seems less effective for detecting proximal CRC, which is believed to originate predominantly from the serrated pathway . Moreover, HPs are considered less likely to bleed compared with adenomas, and SSA/Ps lack some genetic markers currently used in DNA faecal tests, decreasing the sensitivity of faecal tests for SPs. In our study 15 (14%) individuals had a positive FOBT prior to endoscopy and 58 (55%) unique individuals had at least one sign of gastrointestinal bleeding (FOBT, haematochezia/melena or anaemia). To some extent the high percentage of individuals with SPs and signs of gastrointestinal bleeding can be explained by the simultaneous presence of adenomas ( n = 18, 31%), as well as the overrepresentation of SPs other than HPs. However, we cannot exclude that bleeding-prone SPs are overrepresented in our cohort. Most individuals with SPs underwent endoscopy due to clinical symptoms ( n = 64, 60%). In addition, regardless of endoscopy indication, we found that 78 individuals (74%) had at least one symptom (which includes positive FOBT) at the time of diagnosis, including one individual with a false positive SP. Of note, false positive cases more often presented with clinical symptoms as an indication for endoscopy (80% vs. 60%). In a notable proportion of HPs ( n = 43, 73%), grade of dysplasia was not specified. The reason for this is probably that HPs are normally defined as non-dysplastic. Thus, any specification of dysplasia by the pathologist would therefore be redundant considering that it is already implied by the HP diagnosis . As such, the proportion of HPs without dysplasia should be interpreted as 97% (59/61) instead of 26% (16/61). Among the polyps classified as SAs, the vast majority exhibited low-grade dysplasia ( n = 45, 80%) and there were only three polyps (5%) with no dysplasia. This observation reinforces the idea that polyps described as SAs are consistent with SSA/Ps, or possibly TSAs, as HPs are typically non-dysplastic . More specifically, consistent with the literature on SSA/P location, we believe that proximal SAs will almost exclusively consist of SSA/Ps. However, SAs located in the rectum are likely to include a small number of TSAs. The literature has shown that only about 15% of SSA/Ps have any dysplastic features, implying that SSA/Ps with dysplasia are overrepresented in our study . We cannot rule out that a few SSA/Ps with no dysplasia may have been misclassified as HPs given the established difficulty of distinguishing SSA/Ps from large proximal HPs . 
Yet, it is also possible that SSA/Ps without dysplasia may have been overlooked and left undetected to a larger extent than SSA/Ps with dysplasia. Strengths and limitations The main strength of our study is the random selection of individuals with SPs from a nationwide histopathology cohort. Using a standardised form, we were able to examine not only the PPV for a histopathology report with SP but also describe Swedish individuals with SPs for clinical characteristics and risk factors. Our results are consistent with similar studies for which the gold standard of diagnosis is biopsy, further reinforcing the reliability of the present results. A limitation of our study includes the lack of re-examinations of actual biopsies. The ethics review board allowed us to collect digital data but not actual tissue samples. Instead, the validation was based on re-evaluation of patient charts that included, among other things, histopathology and endoscopy reports. The quality of the patient chart data varied, especially in the documentation of risk factors and symptoms. Still, given that SP is a strictly histopathological diagnosis, the difference in data availability among the individuals should not have affected the validation in that all individuals had to have the corresponding histopathology report available to be included in the study. Earlier studies have shown inter-observer variability for classification of SPs among pathologists , and we cannot rule out some misclassification, especially for the subgroup classification. This could potentially affect the validity of SSA/P since some of the SSA/Ps may have been misdiagnosed as HPs, and vice versa . The diversity of pathologists in this study, where some may not specialize in SPs, may have decreased the accuracy in polyp classification.
In conclusion, this study suggests that colorectal histopathology reports are a reliable data source to identify individuals with SPs.
|
Face and content validity of a virtual-reality simulator for myringotomy with tube placement | fac7e583-f7ab-402a-952a-977e5c404104 | 4615336 | Otolaryngology[mh] | Myringotomy with tube insertion is one of the most common procedures in Otolaryngology—Head & Neck Surgery, and is encountered by residents throughout their training. Despite the fact that it is a ubiquitous procedure, the instruction of junior trainees, who often have little experience in microscopic procedures, is often challenging. Montague et al. have analyzed surgical errors through video analysis of actual procedures and note that the 4 most frequently occurring errors in order from most to least occurring include (1) failure to perform a unidirectional myringotomy, (2) making multiple attempts to place the tube, (3) making multiple attempts to complete the myringotomy, and (4) setting the microscope magnification too high. More serious intraoperative complications can also occur including external auditory canal lacerations, medial displacement of tubes into the middle ear, and vascular injuries [ – ]. Although surgical residents can eventually perform standard cases well, they often struggle with narrow canals, retracted tympanic membranes, T-tubes, and procedures performed under local anaesthestic. The goal of simulation is to decrease the learning curve prior to entering the operating, minimize complications in patients, and provide the ability to practice difficult cases. Several physical models have been described in the literature to provide practice without potential harm to patients [ – ]. Generally, these consist of a tube to mimic the ear canal with a synthetic membrane attached to one end to represent the eardrum. These models do not appear to have gained general acceptance in residency programs, presumably because they are not able to represent anatomical variability easily and the mechanical properties of the materials used do not mimic that of the actual tissues. Compared with physical models, simulators based on virtual-reality (VR) technologies have the ability to simulate difficult anatomy, model various pathologies, provide automated feedback, and even allow trainees to practice on patient-specific models generated from CT/MRI scans. VR-based simulators have been applied in Otolaryngology, especially for endoscopic sinus surgery [ – ] and for temporal bone drilling [ – ]. In VR simulators, the trainee interacts with realistic 3D digital models of anatomical structures and views them using 3D displays. Simulated tissues can be operated upon using digital representations of actual surgical tools that can be moved in the workspace using devices such as a haptic arm. The sensation of contact force between a digital surgical tool and simulated tissue can be computed and applied to the trainee’s hand via the haptic arm. The Auditory Biophysics Laboratory at Western University has developed and reported on several aspects of VR-based myringotomy simulation. A blade navigation software system and a system for real-time deformation and cutting of the tympanic membrane were implemented on different software platforms as separate training modules. These versions of the simulator were not integrated and they did not include speculum placement, operating microscope controls for positioning/zooming, or tube insertion through the myringotomy. As recently reported , the Western myringotomy simulator has integrated the previous modules into a common software platform. 
Moreover, new software modules have been added to allow the user to adjust their surgical view through positioning and tilting of the virtual speculum and operative microscope, and to allow insertion of a ventilation tube into the myringotomy created in a deformable tympanic membrane. The goal is to further expand this simulator in the future to allow trainees to raise tympanomeatal flaps and to eventually perform tympanoplasty/ossiculoplasty on patient-specific anatomy. In order for training simulators to be accepted into a residency curriculum, a variety of validation studies need to be conducted, starting with face validity and culminating in the demonstration that skills acquired in the VR environment transfer to the OR (operating room) environment. Face validity refers to the degree to which a simulation appears like the real situation, and content validity measures whether the simulator would be appropriate or useful in training. Although face validity has previously been established for individual software modules, validation testing has not been performed on the current integrated system, which simulates the entire procedure from microscope positioning to ventilation tube insertion. The objective of this paper is to determine the face and content validity of the new integrated Western myringotomy simulator.
The responses were initially divided by group (junior resident or practising Otolaryngologist), and the median, quartiles, minimum, and maximum response values were computed for each question. The sample size was maximized to include all eligible participants at a single academic institution. For each question, the Mann–Whitney U-test was used to test the significance of the differences in responses between the two groups. A frequency distribution histogram was plotted to investigate the number of favourable responses (score ≥ 5), neutral responses (score = 4), and negative responses (score ≤ 3) to each question. All data were computed and analysed using the SPSS statistical software (SPSS Inc, Chicago, IL). The significance was set at p < .05 and the Holm-Bonferroni method was used to correct for multiple comparisons.
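The original analysis was performed in SPSS; for readers who prefer an open-source route, the per-question comparison with Holm-Bonferroni correction can be sketched as follows. The Likert responses in the example are invented for illustration and are not the study data.

```python
# Equivalent of the per-question analysis (Mann-Whitney U with Holm-Bonferroni),
# here with made-up Likert responses; the original analysis was run in SPSS.
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

residents = {            # hypothetical 7-point Likert scores per question
    "Q13": [5, 6, 5, 4, 6, 5, 5],
    "Q20": [5, 5, 6, 4, 5, 6, 5],
}
otolaryngologists = {
    "Q13": [3, 4, 3, 2, 4],
    "Q20": [4, 3, 4, 4, 3],
}

raw_p = []
for q in residents:
    u, p = mannwhitneyu(residents[q], otolaryngologists[q], alternative="two-sided")
    raw_p.append(p)
    print(f"{q}: U = {u:.1f}, raw p = {p:.3f}")

# Holm-Bonferroni correction across questions
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
print("adjusted p-values:", [round(p, 3) for p in adj_p])
```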
An overview of the major features of the simulator is given here; in-depth technical details on the system can be found in a previous publication . The simulator consists of 3 major components: the simulation software, a display system, and a haptic arm as shown in Fig. . The simulation software was developed in the Auditory Biophysics Laboratory at Western University [ – ]. The simulator runs on a Z420 Hewitt-Packard personal computer, equipped with an Intel(R) Xeon E5-1620 processor (Intel Corp., Sanata Clara, CA) and a NVIDIA Quadro 4000 graphics card (NVIDIA Corp., Santa Clara, CA). The system is capable of real-time rendering of the 3D digital models of the ear, surgical tools, and tympanic membrane as shown in Fig. . The simulator can import various ear canal and tympanic membrane models, however for the purposes of this study, a normal pediatric ear canal and tympanic membrane was used. The system also incorporates multi-point collision detection to monitor for all interactions between the virtual tools and virtual ear and performs real-time deformation and tissue cutting as required. The software displays the models and all interactions on a silver screen mirror that is part of the DevinSense Display 300 system (DevinSense Display Solutions, Sundbyberg, Sweden). When the screen is viewed using active 3D glasses (Nvidia Corp., Santa Clara, CA) provided with the DevinSense system, the 3D digital scene consisting of the virtual ear and tools appears to exist in the space below the silver screen mirror. The display in this region is correctly co-located with the haptic arm (Omni haptic arm, Geomagic, Inc., Morrisville, NC) so movements of the haptic arm appear to occur in the same space as the 3D scene. Using the haptic arm, the user can move the virtual surgical tools. Currently, a single haptic arm is used to control the various instruments, however a second haptic arm could be added to simultaneously manipulate multiple instruments (e.g. speculum and myringotomy blade). The haptic arm can be used to position and rotate the virtual speculum, position and tilt the microscope, and adjust magnification to obtain different views of the operative site as shown in Fig. . The user can then create a myringotomy as shown in Fig. using a virtual myringotomy blade; the position and orientation of the blade are controlled by moving the handle of the haptic arm. A tube may be inserted using virtual forceps, which is also controlled by the user using the haptic device [Fig. ]. The opening and closing of the forceps can be toggled using a button on the haptic arm. During tube insertion, the eardrum deforms and the incision splays as the tube enters the myringotomy. The tube may also be repositioned with various instruments until it is in its final position [Fig. ].
Research ethics board approval was obtained from Western University (#105239) and participants were contacted via telephone or electronic mail. All participants were recruited from the Department of Otolaryngology - Head & Neck Surgery, Western University. A total of 12 subjects agreed to participate, which included seven junior Otolaryngology residents (postgraduate years 1 to 3) and five senior Otolaryngologists who routinely performed ventilation tube insertions in their practice. These groups were chosen to reflect the target group of the simulator (junior residents) as well as experts in the field (Otolaryngologists). The participants did not have any previous exposure to myringotomy simulation.
All participants were initially given an orientation session which consisted of: 1) an information sheet outlining the software features of the simulator, 2) a demonstration video of how to perform a myringotomy and tube insertion using the simulator controls, and 3) a live demonstration of the simulator and haptic arm. The same graduate student and surgical resident performed the orientation session for each participant, and a standardized script was used to ensure consistency. The participants were specifically asked to perform the tasks listed in Table so that they could comment on all the various aspects of the simulator. Finally, the participants were given an unlimited period of time to use the simulator until they felt comfortable completing the face and content validity questionnaires.
Previously, we had tested individual software modules focusing on blade navigation , haptics and tympanic membrane deformation and cutting . Since this new simulator refined each of these components, including the graphical representations of the ear and virtual tools, and included new features such as microscope handling, speculum positioning and tube insertion, the Myringotomy Surgery Simulation Scale (MS 3 ) used in previous publications was modified to include these features. The questionnaire was divided into three sections (A, B, and C) with a total of 20 questions. Section A included 14 questions focusing on face validity as listed in Table . The appearance and realism of the surgical instruments; anatomy of the auricle, ear canal and eardrum; movement of surgical instruments; deformation and cutting of the eardrum; tube insertion and 3D microscopic view of the scene were assessed. Section B included six questions focusing on content validity as listed in Table . These questions were used to determine training potential on specific surgical tasks. In Sections A and B, study participants were asked to answer each question using a 7-point Likert scale, an equal appearing interval measurement. The scale had values of “1”—Strongly Disagree, 2—“Mostly Disagree”, 3—“Disagree”, 4—“Neither Agree/Disagree”, 5—“Agree”, 6—“Mostly Agree” and 7—“Strongly Agree”. In Section C, a free-form comment area was provided for each participant to provide feedback to elaborate on previous questions and to address issues not covered in Sections A and B.
The first group consisted of seven junior Otolaryngology residents in postgraduate years 1 to 3. They were all familiar with the operating microscope and the procedure; however, they were in the active phase of learning, with each resident having performed fewer than 20 myringotomy and tube insertions in training. The second group consisted of five fellowship-trained Otolaryngologists who routinely performed myringotomy and tube insertions in their practice. Each member of this group had performed at least 200 procedures since completing their fellowship.
The mean response and confidence interval for each question in Section A (face validity) and Section B (content validity) are summarized in Fig. . Application of the Mann–Whitney U-test indicated no statistically significant differences between residents and senior Otolaryngologists once the Holm-Bonferroni correction was applied. However, the largest differences between the groups were seen in Question 13 ( U = 5.5, p = 0.043) and Question 20 ( U = 7, p = 0.097), which related to the movement of the tube within the myringotomy.
Given that mean responses were not different at the p = .05 level, the results for the two groups were pooled when analyzing face and content validity. The responses to the questionnaires were categorized as positive (score ≥ 5), neutral (score = 4) or negative (score ≤ 3).
The realism of the simulator was investigated through the 14 questions in Section A of the questionnaire. As can be seen in Fig. , the number of positive responses exceeds the number of neutral and negative responses except in the case of Questions 9 and 11. Question 9 focuses on the realism of the visual appearance and splay of the myringotomy, whereas Question 11 focuses on the realism of the movement and stability of the myringotomy blade and forceps. Overall, when the 14 questions over 12 participants (168 total responses) were considered, there were 116 (69.0 %) positive responses, 21 (12.5 %) neutral responses, and 31 (18.5 %) negative responses.
The training potential of the simulator was tested through 6 questions in Section B of the questionnaire. As shown in Fig. , the number of positive responses was greater than the number of negative responses for each question in this section. Among the total 72 responses (6 questions x 12 participants), 46 (63.9 %) were positive, 15 (20.8 %) were neutral, and 11 (15.3 %) were negative.
The MS3 scale used in this study had to be developed at our institution as no other validated measure was available to assess a virtual-reality myringotomy simulator. This questionnaire has not been externally validated by other centres, however content validity was assessed by a group of experts during the development of the questionnaire. In addition, previous publications did demonstrate reliability of the MS3 with a strong correlation across raters. The MS3 was also correlated against a visual analogue scale measuring the same construct, thus providing a measure of concurrent validity . The lack of statistically significant differences in mean responses between residents and senior Otolaryngologists to Questions 1 to 20 suggests that even with limited exposure to the actual procedure of myringotomy with tube insertion, junior residents had similar assessments of the realism and utility of the simulator as those experienced in the OR. The only differences between the groups approaching significance were in Questions 13 and 20, which pertained the movement of the tube within the myringotomy. Senior Otolaryngologists perceived the simulated tube movement to be less realistic than did residents. Similarly, Question 9 in the pooled responses dealt with the splay of the myringotomy, and this had a higher number of negative responses overall. From the written comments in Section C of the questionnaire, it appears that splaying (i.e., spreading) of the virtual eardrum when it is contacted by the virtual blade is realistic, and this was also the case in our previous report ; however, splaying is less realistic during tube insertion when the virtual tube contacts the eardrum and causes it to spread. This difference could be explained by a design decisions made during the development of the tube insertion module. First, although the tympanic membrane has real-time deformation, the physics of the interaction between the edges of the myringotomy and a ventilation tube is quite complex. In order to detect contact with the tube, the tympanic membrane is represented as a discrete collection of spatially distributed points as shown in Fig. . Collision detection is performed at each of these discrete contact points. When the spatial density of points is high (i.e. the points are close together) the location of contact can be calculated with more precision than when the spatial density is lower. Unfortunately, multi-point collision detection is computationally intensive, therefore the rendering speed decreases rapidly as the spatial density and precision is increased. The particular choice of density in the simulator was chosen to permit animations to occur at a realistic pace on an inexpensive personal computer, however this negatively affected the precision of the tympanic membrane splay in response to the tube. Second, the physics of tympanic membrane ‘tearing’ with large forces and displacements during tube insertion are difficult to model in real-time. To overcome this, pre-programmed animations were used based on the length of incision, the trajectory of the tube, and the contact between the flange of the tube and the myringotomy. Although this significantly reduced computation time, Question 13 revealed that this lack of realism was noted by the experts and not the residents. 
This could be explained by the fact that senior surgeons would have had much more experience knowing how the ventilation tube should slide into the incision, therefore they were able to notice the subtle differences more than the junior trainees still learning the procedure. On average, Otolaryngologists’ rankings fell between “Disagree” to “Neither Agree/Disagree”, suggesting that slight improvements to the tube insertion simulation could make this aspect more acceptable. Question 11 was the only other question with a higher proportion of negative responses, and this pertained to the movement and stability of the blade and forceps. Section C clarified this finding as concerns were raised about the limited range of motion of the haptic device and that the friction of the device affected the movements of the virtual blade and forceps. The haptic arm used in this study is a low-cost device that is suitable for design of a prototypical simulator. The device can easily be swapped for a higher fidelity device with greater range of motion and substantially reduced friction (e.g., Geomagic Phantom Premium device from Geomagic, Inc., Morrisville, NC), albeit at greater financial cost. Utilizing the higher fidelity device may result in acceptable range of motion and unnoticeable friction. A second concern with the device was the feel of the handle of the haptic arm when it was used to control the blade and forceps (Fig. ). As the handle is thick, it feels unnatural compared to holding an actual surgical tool. We have implemented approaches described in the literature to replace the haptic arm handle with actual surgical tools to improve the feel and realism of the simulation . The goal in this hybrid simulator would be have one haptic arm attached to a myringotomy blade or forceps, and have the second haptic arm attached to a real speculum to maximize realism. Face and content validity are only initial steps in validation, and they do not ensure that a simulator will be useful in training residents . Future development on the Western myringotomy simulator will address concerns raised in this study. Refinement and optimization of the tube insertion and tympanic membrane splay may help to increase the realism of the simulator, but it is unclear if increased fidelity will actually result in additional skills transference . In order to determine the construct validity of the simulator, automated metrics including time, length and direction of incision, collisions, magnification, etc. have been incorporated into the simulator. A separate study will examine if these metrics are capable of distinguishing experts from residents, and a skills transference study will be needed to determine if the simulator can result in better operating room performance. A multi-centred study will be considered at that time to maximize sample size and feedback from different centres. The authors hope that by using standardized libraries while programming the simulator, and the ability of the simulator to run on low-cost hardware, will allow easy adoption by Otolaryngology training programs and allow other groups to make modifications as needed.
The Western myringotomy simulator has a number of new features, including microscope handling, speculum positioning and ventilation tube insertion. The simulator has good face and content validity, except with respect to splaying of the myringotomy during tube insertion and with respect to the haptic arm. These issues are currently being addressed with further refinements and adaptations. Automated metrics have been developed, and they will be used to assess the construct validity of the simulator. Although the entire myringotomy and ventilation tube insertion can now be simulated, a skills transference study is needed to establish training efficacy and clinical impact.
|
Current state of noninvasive, continuous monitoring modalities in pediatric anesthesiology | aecd2c5d-8071-4074-90d8-ab7e856c0209 | 7752231 | Pediatrics[mh] | Patient safety is the number one issue in anesthesiology. At present, anesthesia is absolutely safe in uncomplicated patients undergoing low-risk procedures, as improvement of monitoring modalities and anesthetics, and the preparation of the perioperative process have led to optimization of care. In general, intraoperative mortality has dramatically decreased in the last decades . This overall safety has led to a change of the paradigm of anesthesia, from survival of the surgery and avoiding direct side effects into concepts based on quality of life and value-based health care. This requires a new view on monitoring to optimize organ preservation by controlling local oxygenation and metabolism. In perioperative monitoring of pediatric patients, we face specific challenges, which postponed the development of appropriate age and size-related pediatric monitors. First, it is not always possible to get baseline measurements and some equipment is not validated for children or has size limitations. Moreover, there is no consensus on safety margins of some parameters, while goal directed monitoring in adults has already been established. Due to rapid hemodynamic and respiratory changes under anesthesia, continuous and noninvasive monitoring would be favorable. Most parameters daily used in anesthesia are only proxies for end organ function. The brain is perhaps the most vulnerable, but also the least monitored organ. Due to the development of encephalopathy in (ex)preterm neonates requiring multiple surgeries, pediatric anesthesiologists are especially interested in brain perfusion . We know that a short anesthetic in healthy children is harmless, but if this is still the case in high-risk neonates and infants undergoing multiple procedures remains unknown [ ▪▪ ]. It is unclear what exactly happens within the brain during anesthesia, due to changes in fluid status, cerebral perfusion pressure, CO 2 pressure and unknown local factors. The current review focuses on recent developments and current evidence on noninvasive monitoring in noncardiac pediatric anesthesia. We will concentrate on cardiac output (CO), near-infrared spectroscopy (NIRS) and transcutaneous blood gas analysis as monitors that may guide our interventions to optimize end organ function of our patients.
Blood pressure (BP) measured noninvasively with the oscillometry technique (NIBP) has a good correlation with intra-arterial BP (IABP), also in infants and neonates . However, changing the site of measurement from the arm to another location may provide less reliable information. Large deviations are common when NIBP is measured from the leg or forearm in children under anesthesia, compared with arm NIBP. Leg NIBPs are usually lower than arm measurements in children, in contrast to higher leg NIBPs in adults. In children the soft, compliant pediatric arteries produce less augmentation of the signal than stiffer adult arteries. Also a reduced sympathetic tone and a relatively reduced blood volume in the lower limbs of small children may play a role . Continuous noninvasive BP can be measured with a finger cuff, measuring noninvasive finger arterial pressure (FINAP) by clamping the finger artery to a constant volume and varying the counter pressure . With the Nexfin monitor (Table ), FINAP is reconstructed into a brachial arterial pulse pressure waveform. In children, the FINAP was reliable, with a good level of agreement for DBP and mean arterial pressure between the Nexfin and IABP. However, underestimation of Nexfin SBP was observed . The CNAP monitor (Table ) provides beat-to-beat noninvasive pressure readings. In pediatric patients, the continuous BP readings were clinically useful. However, there is some variation in accuracy, especially with SBPs. Cuff placement was sometimes problematic, so further development in finger cuffs for children is necessary .
CO is the product of cardiac stroke volume (SV) and heart rate (HR). CO is measured by transpulmonary dilution techniques, requiring central venous catheterization . Bolus thermodilution is still the most accepted reference method . Less invasive techniques have become available, such as pulse contour cardiac output analysis, arterial pressure curve-based CO measurements, transesophageal Doppler (TED) and partial rebreathing of CO 2 . Transthoracic echocardiography or ultrasonic monitors are noninvasive, but noncontinuous measures . Pulse contour analysis (PCA) of IABP waveforms can estimate CO continuously . PCA can be measured noninvasively with devices such as the Nexfin monitor or Mobil-O-Graph (Table ). Pediatric studies using this method are limited. The PCA-derived CO values of the Mobil-O-Graph were measured in awake adults and children at least 10 years of age, and showed to be comparable with two-dimensional echocardiography CO values; however, the values were not interchangeable [ ▪▪ ]. At low CO values, PCA-derived data were higher than data from echocardiography. This type of CO measurement needs further refining in accuracy and precision, before it can be used in pediatric anesthesia. Another technique of measuring CO continuously is based on the bioimpedance method. Bioimpedance cardiography measures changes in thoracic electrical bioimpedance during the cardiac cycle via electrodes on the skin, from which SV, and subsequently CO can be calculated . Several devices are on the market measuring bioimpedance, electrical velocimetry or bioreactance (Table ). Electrical velocimetry relates the maximum rate of change of impedance to peak aortic blood acceleration during the cardiac cycle. The change in orientation of the red blood cells in the aorta, from random during diastole (high-impedance state) to an aligned or parallel orientation during systole (low-impedance state), causes changes in electrical conductivity and electrical impedance . In pediatric patients studies showed agreement, but not consistently . Observational studies with the ICON monitor in 402 children, ranging from preterm neonates to teenagers, showed that continuous cardiovascular parameter assessment was feasible during anesthesia for patients of all sizes and that it provided useful, real-time information regarding adverse hemodynamic changes and the response to interventions . Bioreactance is the analysis of the variation in the frequency spectra of a delivered oscillating current that occurs when the current traverses the thoracic cavity. It is less susceptible to interference than bioimpedance . NICOM CO values showed a good correlation and agreement with echocardiography during anesthesia in pediatric patients with normal heart anatomy, but no agreement in pediatric patients with a cardiac defect . In children undergoing major abdominal surgery, the NICOM showed poor correlation between confidence interval values obtained by bioreactance and TED . A meta-analysis of CO monitoring devices in adults found that no noninvasive device or technology was interchangeable with bolus thermodilution; the percentage of error was 42% for bioimpedance and 45% for noninvasive PCA, where a maximum of 30% percentage of error is considered acceptable . Still, the noninvasive CO monitors could be interesting bedside monitors, as the percentage of error was similar to that of minimally invasive CO monitors, such as FloTrac (Edward Lifesciences Corp., Irvine, California, USA).
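The "percentage of error" criterion cited above is conventionally derived from Bland–Altman statistics (1.96 times the standard deviation of the bias, divided by the mean cardiac output of the reference method), with about 30% taken as the acceptability threshold; the sketch below assumes that definition and uses invented paired readings.

```python
import numpy as np

def percentage_error(co_test: np.ndarray, co_ref: np.ndarray) -> float:
    """Bland-Altman percentage error: 1.96 * SD(bias) / mean reference CO * 100.
    Values below roughly 30% are conventionally considered interchangeable."""
    bias = co_test - co_ref
    return 1.96 * bias.std(ddof=1) / co_ref.mean() * 100

# Made-up paired cardiac output readings (L/min): test monitor vs thermodilution
co_monitor = np.array([3.2, 4.1, 2.8, 5.0, 3.6, 4.4])
co_thermo  = np.array([3.0, 4.5, 3.1, 4.6, 3.9, 4.0])
print(f"percentage error = {percentage_error(co_monitor, co_thermo):.0f}%")
```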
Almost 30 years after the introduction of the first commercially available NIRS monitor the value of NIRS and its applicability in pediatric anesthesia are still a matter of debate. NIRS is still misunderstood while a short introduction to its technical background would help to use it in the best interest of patients at risk of inadequate tissue oxygenation . NIRS provides blood flow independent real time information regarding regional tissue oxygenation (r-SO 2 ), and the oxygen uptake/consumption balance. It should not be confused with pulse oximetry. Cerebral NIRS monitoring has become a standard monitoring tool in many pediatric cardiac centers and neonatal ICUs. In noncardiac pediatric anesthesiology, however, NIRS has not yet become part of the standard monitoring equipment, and the price of the disposables certainly requires careful patient selection. Despite significant scientific efforts during the last two decades aiming at the definition of normal ranges and lower safety margins of cerebral r-SO 2 in children, consensus regarding these important targets has not yet been reached. Many pediatric anesthesiologists have adopted common adult patient intervention limits like baseline r-SO 2 −20% or an absolute value less than 55% . Gómez-Pesquera et al. [ ▪▪ ] recently demonstrated the association of a decrease in cerebral r-SO 2 of less than 20% and negative behavioral changes on postoperative day 7 in noncardiac pediatric patients. Kamata et al. reported a decrease in cerebral r-SO 2 values during laparoscopic surgery in children, not reaching awake baseline levels, while hemodynamic and respiratory parameters remained unchanged. Costerus et al. reported decreases in cerebral r-SO 2 (≤10% from baseline) during neonatal thoracoscopic surgery and favorable neurodevelopmental outcome within 24 months despite severe intraoperative acidosis. Two recent studies conducted in infants found no evidence of an effect of awake caudal and spinal anesthesia on cerebral r-SO 2 .
The list of new applications of NIRS monitoring in pediatric anesthesiology is continuously growing. Combined cerebral and peripheral (muscle) NIRS monitoring is a new trend, with some initial evidence of its capability to detect early stage centralization . The calculation of fractional regional tissue oxygen extraction [FTOE = (SaO 2 − rSO 2 )/SaO 2 ] , a composite parameter reflecting the regional oxygen delivery/consumption balance is also becoming increasingly used. Jildenstål et al. found an acceptable level of agreement between frontal and occipital recordings of cerebral rSO 2 , introducing the possibility to apply NIRS during surgical procedures where the forehead is not available for sensor placement. Neunhoeffer et al. found a positive effect of red blood cell transfusion on FTOE and cerebral r-SO 2 in postsurgical infants, suggesting the feasibility of both parameters as transfusion triggers. Smarius et al. observed a significant reduction in cerebral r-SO 2 induced by hyperextension of the neck during positioning for cleft palate repair surgery in children. Lang et al. found initial evidence of additional value of perioperative cerebral NIRS monitoring as a measure of intracranial pressure in symptomatic pediatric hydrocephalus patients.
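The FTOE formula quoted above is straightforward to compute at the bedside; a minimal sketch with hypothetical saturation values is given below.

```python
def ftoe(sao2: float, rso2: float) -> float:
    """Fractional tissue oxygen extraction: (SaO2 - rSO2) / SaO2."""
    return (sao2 - rso2) / sao2

# Hypothetical values: arterial saturation 98%, cerebral rSO2 70%
print(round(ftoe(0.98, 0.70), 2))  # 0.29
```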
We recently developed a hemodynamic management algorithm using cerebral r-SO 2 as the single target parameter, with BP, PaCO 2 , HR and SaO 2 as the major contributing parameters. The preinduction awake baseline r-SO 2 is defined as the lowest acceptable value during the anesthetic. Our experience from several hundred patients has confirmed the feasibility of this approach.
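As a purely illustrative sketch, and not the authors' published algorithm, the single stated rule (intraoperative r-SO 2 should not fall below the awake preinduction baseline) could be encoded as follows, with the contributing parameters listed for review when the target is breached.

```python
# Illustrative sketch only; it encodes the one stated rule (intraoperative rSO2
# should stay at or above the awake preinduction baseline) and simply lists the
# contributing parameters named in the text for the clinician to review.
def check_cerebral_rso2(baseline_rso2: float, current_rso2: float) -> str:
    if current_rso2 >= baseline_rso2:
        return "rSO2 at or above awake baseline: no action suggested"
    return ("rSO2 below awake baseline: review contributing parameters "
            "(blood pressure, PaCO2, heart rate, SaO2)")

print(check_cerebral_rso2(baseline_rso2=68, current_rso2=62))
```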
The principles of transcutaneous blood gas analysis were already described in the late fifties by Clark and Stow-Severinghaus. Although continuous and noninvasive, the technique was prone to errors compared with simpler techniques such as pulse oximetry. Since the introduction of user-friendly transcutaneous sensors, their use has been increasing. Measurement of CO 2 , in particular, is reliable. This is particularly important given the increase in video-assisted procedures: insufflation of CO 2 can lead to an increase in arterial CO 2 , which is a highly vasoactive substance. This is especially the case in neonates, whose brains are very sensitive to changes in CO 2 . However, arterial blood gas analysis, despite the risks of invasive arterial lines, and capnography remain the gold standard. Transcutaneous CO 2 measurement could also be useful during endoscopic airway procedures or in spontaneously breathing children without a definitive airway during procedural sedation. Therefore, further developments in continuous and noninvasive measurement would be favorable.
Transcutaneous sensors locally heat the skin, improving diffusion of oxygen and CO 2 through the skin. This results in a close approximation of arterial values, although the accuracy of oxygen measurements is restricted by limited diffusion capacity and by increasing skin thickness with age. The technique is mostly used in neonatal and pediatric ICUs. However, its use in the pediatric operating theatre is limited, and concerns remain about the accuracy of measured oxygen values and its usability. Membranes of the device must be changed carefully and calibration has to be taken into account afterwards. Furthermore, a short equilibration time of 10 min after skin attachment is necessary before measurements can be interpreted safely. Nevertheless, due to improvements in sensor application, its perioperative use has increased. During an operation, changes in hemodynamics or fluid status, anesthetic agents and vasoactive medication could affect transcutaneous measurements by influencing the microcirculation, so doubts remain about the perioperative validity of the measurements.
Only a few studies have been published on this subject. Nosovitch et al. performed the first perioperative study in children in 2002. They concluded that, of the noninvasive measurements of CO 2 , transcutaneous values were slightly more accurate than end-tidal measurements. Dullenkopf et al. compared end-tidal and transcutaneous measurements of CO 2 in 60 children under general anesthesia and found no significant difference in accuracy between the two methods. Karlsson et al. concluded, in a relatively small group of neonates under general anesthesia, that measurements were technically possible but not yet accurate. Recently, Chandrakantan et al. [ ▪▪ ] compared end-tidal and transcutaneous CO 2 with venous blood gas values in children under 10 kg and showed that transcutaneously measured CO 2 correlates well with venous values, slightly better than standard end-tidal CO 2 . May et al. reported similar results comparing single CO 2 values obtained simultaneously during arterial, venous, transcutaneous and end-tidal analysis in 47 children (mean age 13.4 ± 7.8 years) with cystic fibrosis during anesthesia. Transcutaneous monitoring was more accurate and closer to PaCO 2 than capnography.
The ultimate monitor should be easy to set up and should provide the pediatric anesthesiologist with continuous, noninvasive, accurate, reproducible and real-time measurements. Ideally, it would display end-organ function. So far, no such monitor is available. Some techniques, however, seem very promising. Regarding BP measurement and CO monitoring, improvements are being made in availability and accuracy in children. Further development of finger cuffs for smaller children is necessary. Although the bioimpedance technique seems very promising, its drawbacks are that in young children the electrodes may be difficult to place, electrocautery induces loss of data, and arrhythmia or pleural effusion may limit its use . Most importantly, more research needs to be conducted on the accuracy of the absolute CO values of these devices before they can be applied routinely during anesthesia in pediatric patients. NIRS is not the holy grail, but it is the best tool currently available to continuously and noninvasively measure regional tissue oxygenation and tissue perfusion. Using rSO 2 as the single outcome parameter in hemodynamic monitoring requires a paradigm shift in pediatric anesthesia toward tissue oxygenation and away from BP. Additional muscle NIRS monitoring may become the ultimate addition to ensure adequate oxygenation of all tissues. Transcutaneous measurements are complementary to, not a replacement of, other modalities. It is, however, a great advantage that noninvasive and continuous measurements are now available. The gold standard for assessment of gas exchange remains blood gas analysis, and for correct tube placement, capnography. In the near future, more studies are required to confirm validity in children under anesthesia and in areas where these measurements can contribute to safety, such as laryngeal surgery, video-assisted procedures and procedural sedation.
Small steps are being made to improve monitoring modalities in pediatric anesthesiology as new techniques become available to assess a child's hemodynamic and respiratory status during anesthesia. As perioperative safety is already high, the challenge is to take these small steps and use these new monitors as complementary tools alongside standard monitoring, for the benefit of the most vulnerable patients.
The authors wish to thank Wichor Bramer, PhD, from the Erasmus MC Medical Library for developing and updating the search strategies, and Gail Scoones, MD, from the Department of Anesthesiology, Erasmus MC Sophia Children's Hospital, for critical appraisal of the article.
Financial support and sponsorship: None.
Conflicts of interest: There are no conflicts of interest.
Papers of particular interest, published within the annual period of review, have been highlighted as: ▪ of special interest ▪▪ of outstanding interest
|
A conceptual articulation for good preventive practices (or for quaternary prevention) | e0bf950c-5ded-456f-9602-e27aa068ce7b | 11405023 | Preventive Medicine[mh]
It is common for managers, physicians and other health professionals to operate with notions about prevention and positive attitudes toward preventive actions, especially in chronic diseases and particularly in non-communicable ones such as cancers and cardiovascular diseases, which are the greatest causes of morbidity and mortality in Brazil. The concepts used generally involve primary prevention (P1: preventive interventions before the establishment of a disease, condition or situation to be prevented), secondary prevention (P2: interventions to identify and treat an as yet asymptomatic disease early, aiming to reduce its morbidity and mortality) and tertiary prevention (P3: rehabilitating and preventing complications of diseases with already established lesions), from Leavell & Clark , and little more than that. A good part of preventive clinical practice can be reduced to ordering tests, advising users on what to do, and exhorting them to comply. Underlying this message is an optimistic, affirmative moral and ethical appeal that preventive actions are desirable and necessary. With the Brazilian epidemiological transition, even amid a triple burden of disease (high morbidity and mortality from chronic non-communicable diseases coexisting with a high incidence and prevalence of infectious and parasitic diseases, especially in the first decade of life, and of external causes, mainly homicides, in the young male population aged 15-29 years), this preventive appeal was amplified until it became a moral obligation , , and it was also reinforced by the discourse of the new health promotion, which became associated with it . Finally, the proliferation and sophistication of diagnostic and therapeutic preventive technologies operationalized this appeal and drove the growth of preventive actions in clinical practice , giving belated effect to the preventive medicine movement, which advocated a preventive attitude in physicians focused on the individual and their family . Today, in Brazil and beyond, preventive care is one of the most frequent reasons for medical consultation in primary health care (PHC) , . However, the proliferation of poorly judicious preventive clinical actions can generate more harm than benefit and produce inequities (unnecessary, avoidable and morally and socially unjust differences) by diverting clinical attention, especially in PHC, toward younger, healthier and less poor users, who are better placed to seek and worry about prevention, thereby hindering access and care for the sickest, oldest and poorest , , (who fall ill more often and more severely, and are less able to worry about and practice prevention), and who face the chronic precariousness and under-resourcing of PHC and of the Brazilian Unified National Health System (SUS) . It is worth remembering that, in general, present illness/suffering should take priority over future well-being (prevention) . There is an excess of poorly grounded preventive practices that produces great avoidable iatrogenic harm and reinforces preventivist imperialism, or healthism , (tendencies, beliefs, values and practices that emphasize people's obligation to pursue health and avoid disease or risk), associated with surveillance medicine , . Although prevention is consensually relevant, it does not seem desirable to foster in professionals and managers an attitude of vague, generic and unconditionally optimistic repetition of this message.
We found no Brazilian empirical studies, but a literature review showed that professionals hold exaggerated expectations about the efficacy of screening (the performance of diagnostic tests in asymptomatic people) . With the proliferation and increased use of clinical actions of specific P1, targeted at defined diseases or conditions, and of screening-type P2, both diagnostic and therapeutic, these measures have been incorporated into professional culture and into society at large, making it necessary for professionals and managers to acquire better technical training, move beyond generic preventivist exhortation and guide more specific, well-founded preventive actions. The content on prevention, although extensive in the scientific literature and in clinical manuals because it appears across a myriad of diseases, lacks a synthetic, organized structure that articulates the concepts needed to guide health professionals (especially in PHC) and managers regarding these actions. The aim of this essay is to present a minimal conceptual articulation to guide the consideration, by professionals and managers, of specific P1 measures and screening-type P2 measures. This articulation is intended to favor the practice of quaternary prevention (P4), defined by family and community physicians as that "performed to identify the patient at risk of overmedicalization, to protect them from further medical invasion, and to suggest ethically acceptable interventions" (p. 110). By reducing the iatrogenic harm and medicalization derived from prevention (commonly excessive and avoidable), P4 contributes to the humanization and improvement of preventive practices and is therefore important and necessary . The text is structured as a sequential, didactic argument, divided into topics that progressively present the proposed articulation. The accompanying figure outlines the main contents presented. There is a current tendency to change diagnostic criteria and thereby amplify what can be considered pathological and diagnosable as disease (and thus treated), narrowing the range of normality and reclassifying ever smaller risk bands as high risk. Another tendency is the incorporation of high-risk states into the definitions of pathologies and syndromes , pushing into the diseases what used to be normal or merely high risk, which then comes to be managed as pathological. In addition, there have been significant technological advances in the detection of minimal structural or functional variations or alterations . Ever smaller abnormalities and dysfunctions are thus detected, which makes prognoses more doubtful and raises questions about whether diagnosis and treatment in the asymptomatic or early phase are worthwhile. One example is the phenomenon of overdiagnosis and overtreatment, discussed below. These processes tend to make it increasingly difficult to distinguish between prevention (P1 and P2) and the care of those who feel ill . The point is that the contract between professional and user differs in preventive situations compared with the care of those who feel sick . The balance always sought among the four classic bioethical values or principles (beneficence, non-maleficence, justice and respect for autonomy) tends to be significantly different in these situations, owing to singularities regarding tolerance of harm, the handling of uncertainty, the grounding of actions and the guarantee of benefits, which demand distinct attitudes from professionals and managers ( , ).
In the clinical care of those who feel ill, the presence of suffering and symptoms imposes a curative contract involving a degree of proactivity, in which beneficence is strongly valued and non-maleficence is commonly relativized in view of the benefits of treatment (close in time), making a flexible handling of uncertainty and a greater tolerance of iatrogenic harm acceptable. Such handling is backed by the state of the art of available knowledge and techniques; no guarantee of benefit is demanded, only technically and ethically "correct" actions. This correctness refers to the prevailing body of theoretical and technical professional knowledge, in which great confidence is placed, extended also to the experience of professionals . All of this is different in specific P1 and in screening (P2): there is no felt illness or suffering. In principle, people are healthy. The potential for benefit is projected into the future and restricted to a minority who would fall ill from the problems one seeks to prevent. In this circumstance, non-maleficence is usually more highly valued and more rigorous , , , . The primum non nocere cannot be relativized by the potential for immediate benefit of the intervention in the same way as in the care of the sick. Whereas harms and benefits in the care of the sick fall on the same person, facilitating an informed decision to accept treatment, in specific P1 and in screening the potential benefit will fall on a small fraction of people (those who would fall ill in the future), while the potential harm is spread, in the present and in the future, across everyone who receives the intervention. In this case, the compensation of harms by benefits does not exist for many of the people harmed, and the principle of non-maleficence is starkly violated. It is not ethically clear that the pattern of clinical relationship and user consent can be transferred from the situation of caring for those who feel sick to P1 and P2, not least because, in offering preventive measures, professionals implicitly induce their acceptance by users through their power and authority. Hence the generically optimistic stance, the proactive attitude relatively tolerant of interventionism and iatrogenesis, and the flexibility toward uncertainty that are typical of the clinical care of the sick must be replaced, in P1 and P2, by a skeptical attitude resistant to interventionism, with a stricter handling of uncertainty. For this attitudinal resistance to preventive intervention to be overcome, rigorous experimental scientific studies must be required, showing the empirical results of applying a preventive measure and weighing benefits against harms. Only a broadly favorable balance should overcome the resistance and tip the decision in favor of the intervention, whenever there is a reasonable potential for harm justifying such caution , . Because of these differences, enabling good preventive practice requires that the differentiation between care of those who feel ill and P1 (specific) and P2 (screening-type) be valued and carried out in everyday clinical work (and by managers) . Both situations may occur within the same consultation, to be managed differently. Once a situation of specific P1 or screening-type P2 has been recognized, the potential harms of the interventions must be addressed. Geoffrey Rose called "reductive" preventive measures those actions that remove or reduce "some unnatural exposure, so as to restore a state of biological normality" (p. 148).
The aim is to restore biological normality, seen as "the conditions to which we are considered genetically adapted by our evolutionary history" (p. 148), making environmental conditions and ways of living favorable to health. Such measures take concrete form in clinical counseling and in public health and social organization actions: reducing sedentary lifestyles, smoking and ultra-processed foods; eliminating pesticides from food; universalizing basic sanitation; reducing income inequality; and so on. Although reductive preventive measures may mean changes in ways of living, they are not artificial. On the contrary, they reduce pathogenic artificialities that have become commonplace in modern societies . Many of these measures are relatively unproblematic as regards the scientific grounding of their recommendation, being considered safe (zero or minimal risk) and carrying a scientifically accepted presumption of reasonable benefit . The relative consensus around them supports an affirmative and optimistic stance, which even makes it unnecessary to demand scientific evidence of high quality and hierarchy (randomized clinical trials) on their results. It would be unfeasible and unethical to conduct a clinical trial in which the control group was exposed to smoking while the intervention group remained tobacco-free, because the consensus of the observational studies on this is very strong. Rose, in turn, called "additive" preventive measures those actions that introduce or "add" to human beings, their diet or their environment an artificial, protective, preventive factor that does not exist in people's economy-physiology-ecology: vaccines, preventive drugs (antihypertensives, lipid-lowering agents), artificial dietary supplements (or natural ones at artificial doses), screening, and so on. These measures cannot be considered safe, because they have great iatrogenic potential, which must be carefully assessed. For this reason, such actions require evidence that their implementation produces significantly beneficial results with zero or minimal harm . Only a broadly favorable benefit-harm balance, obtained from the convergence of reviewed experimental intervention studies of high quality (randomized clinical trials) and integrity (little or no conflict of interest), can provide grounds for their recommendation . In this balance, theoretical justifications or intermediate (surrogate) outcomes standing in for final clinical endpoints (mortality, morbidity, quality of life) should not be accepted, because distrust of theoretical knowledge and of professionals' experience is justified and necessary, unlike in the situation of caring for those already ill. As we have said: evaluation of the empirical results of applying the preventive measure in experimental studies is required . The distinction between reductive and additive preventive actions is a watershed that generates a marked preference for the former, whose safety and benefits are relatively consensual, facilitating their recommendation for several reasons: their safety and efficacy; their convergence with health promotion actions, with beneficial individual and collective impact on general and social determinants of health and disease; and their economical (lower-cost), sustainable and ecological character . Conversely, the same distinction makes it harder to recommend additive actions, whose great potential for harm demands much caution.
Such caution is becoming scarce in society and among professionals, owing to the widespread socialization of several of these measures over recent decades, to the persistent enchantment with technological development (much of it misleading ) and to a naive idealization of evidence-based medicine (to which we will return). This makes it important that this distinction be valued and exercised in clinical and public health practice. Faced with an additive preventive measure, one should note what type of preventive strategy is involved. Rose discussed two types of preventive strategy: the population approach and the high-risk approach. The first is used in situations in which known risk factors are distributed universally across the population. It consists of intervening in the whole population, aiming to shift the entire risk curve (to the left, in panel a). When the whole population wears seat belts, drinks treated water, receives childhood vaccines and learns to read and write, and smoking in enclosed places, drinking and driving, and tobacco and alcohol advertising are prohibited, society is receiving preventive measures under a population approach. If everyone eats food free of pesticides and with less salt, does not suffer socioeconomic deprivation, and is encouraged toward more physical activity on cycle paths and pleasant green leisure areas, via urban infrastructure and sustainable mobility policies, risks are reduced and there can be a large impact on reducing collective morbidity and mortality. Several of these measures are reductive and others additive. The reductive ones act on general determinants of health and disease, and their implementation demands broad social and political support and laws and public policies that are difficult to obtain. Once in place, however, they are sustainable and become incorporated into social life. For these reasons, Rose argues for a broad preference and priority for this strategy, through reductive preventive actions that are safe, cheap and effective, since they have the potential to universalize an important part of preventive health care, safely realizing this fundamental human and citizenship right. Nevertheless, the strategy increasingly applied is the high-risk approach: a population fraction at higher risk is identified and preventive actions are applied to it, without addressing the rest of the population (panel b). This makes sense to professionals and users, who understand the reason for the intervention. It makes cost-effective use of resources, since preventive resources are directed to those at highest risk, and it fits into the routine of health services, which operationalize the actions by managing high-risk people as chronic patients . However, this strategy has significant disadvantages: (1) it medicalizes prevention; (2) it must be maintained indefinitely, since it does not intervene in the general social, economic and cultural determinants (and is therefore expensive); (3) it is difficult to quantify its real benefit to the individual, since it operates in the realm of probability; (4) it has a small positive impact on morbidity and mortality, because the small high-risk group produces far fewer diseases and deaths than the rest of the population, which is at low risk but much more numerous; and (5) it is behaviorally inadequate, since it requires the high-risk person to acquire new life habits, distant from their family, cultural and social surroundings, demanding heroic attitudes that are difficult or unfeasible owing to socioeconomic inequities and psychosocial factors. Such measures generally frustrate doctors and users and unduly blame the latter .
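A minimal numerical sketch can make disadvantage (4), often called the prevention paradox, concrete. The population size, risk levels and effect sizes below are hypothetical assumptions chosen for illustration; they are not taken from Rose or from this essay.

```python
# Illustrative only: hypothetical population, risks and effect sizes,
# chosen to show why a high-risk strategy prevents few events overall.
population = 1_000_000
high_risk_fraction = 0.05          # 5% of people labelled "high risk"
risk_high, risk_low = 0.10, 0.02   # assumed 10-year event risk per person

n_high = population * high_risk_fraction          # 50,000 people
n_low = population - n_high                       # 950,000 people
events_high = n_high * risk_high                  # 5,000 expected events
events_low = n_low * risk_low                     # 19,000 expected events

# High-risk (often additive) strategy: treat only the high-risk group,
# assuming a 30% relative risk reduction from, e.g., a preventive drug.
prevented_high_risk_strategy = events_high * 0.30            # 1,500

# Population (often reductive) strategy: a smaller 15% relative reduction,
# but applied to everyone, e.g., via salt, tobacco or mobility policies.
prevented_population_strategy = (events_high + events_low) * 0.15  # 3,600

print(prevented_high_risk_strategy, prevented_population_strategy)
```

Under these assumed numbers, the modest population-wide shift prevents more than twice as many events as the targeted high-risk intervention, which is the arithmetic behind Rose's preference.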
Yet instead of restraining this strategy, these limitations and disadvantages have been met by intensifying it, shifting the cut-off points to the left (panel b). This is a paradise for pharmaceutical industry profits, and it accentuates the strategy's disadvantages even further by converting larger proportions of the population into frustrated, worried, lifelong chronic patients with poor results . Managers and professionals should therefore prefer population approaches via reductive measures . It has been common to combine both strategies with both types of preventive measure (additive and reductive). The high-risk approach is more effective when accompanied by the more powerful population approach. One example is the successful Brazilian tobacco-control program (prevalence fell from 34% in 1989 to 14.8% in 2011), which used both strategies and both types of prevention . Regardless of the strategy, we must assess the benefit-harm balance of additive preventive measures, for which another distinction is important and useful. From the 1990s onward, so-called evidence-based medicine (EBM) gained momentum. One of its central proposals was that clinical practice should draw on scientific studies of the results of medical and health interventions. We should move from grounding clinical decisions solely in pathophysiology, in the knowledge accumulated by specialists and their consensus statements, and in professionals' training and experience, to adding to these foundations another important source of evidence: intervention (and observational) studies of the clinical results of interventions. At the top of the hierarchy of evidence stand randomized controlled trials and their systematic reviews and meta-analyses. The latter compare several similar studies with respect to their clinical results and produce knowledge inaccessible through the case series of professionals and services or through the analysis of individual trials. The growth of the scientific literature on clinical research produced what has been called the jungle of the medical literature . A clinician has no time and cannot keep up to date with what is published, even with the internet. To help select what should be prioritized for reading, Shaughnessy et al. and Slawson et al. proposed a distinction between two types of evidence, which they called DOE (disease oriented evidence) and POEM (patient oriented evidence that matters). DOE-type evidence is the universe of publications about diseases: their pathophysiological mechanisms, epidemiology, diagnostic techniques, mechanisms of action of therapies, effects of treatments on pathophysiological parameters, and so on. This set makes up the great majority of medical knowledge, but although it is the basis of the biomedical approach, it is not the most important knowledge either for clinicians or for preventive decisions. What matters most for prevention are the results of actions for people, that is, with respect to morbidity, quality of life and mortality . These are the POEM evidence, the only evidence that should be used in assessing the benefit-harm balance in additive prevention. Besides being strictly necessary, they are a small proportion of the literature. It is crucial that the outcomes assessed in these studies be final outcomes, that is, ones that matter to people (POEM) and that include harms.
This is important because intermediate (or surrogate) outcomes are frequently used in clinical trials, and valuing them requires theoretical assumptions. There are efforts to measure the empirical link between final and intermediate outcomes in order to justify accepting evidence on intermediate outcomes as proof of efficacy . However, the peculiarity of prevention, in the differences already mentioned, is not recognized or considered in these discussions. The great weight given to non-maleficence in asymptomatic people supports treating prevention as a special case in which intermediate outcomes should not be accepted. If an additive preventive action has been in use for a long time, maintaining its recommendation requires periodic evaluation based on systematic reviews of the clinical trials and also of observational studies in the populations. The latter rank lower in the hierarchy of evidence, but they are very important for preventive measures already in use and for evaluating effectiveness and safety (few harms). This implies closer engagement with EBM. EBM has progressively imposed itself as a new regime of epistemological and cultural power over medical and public health practices . It has largely displaced clinical decision-making power from professionals (their schools and accumulated experience) to a new circuit of legitimacy and discussion involving the scientific literature (clinical trials, systematic reviews and meta-analyses) as well as the institutions that produce evidence-based reviews and clinical guidelines. As a consequence, EBM has gained a new power that cuts across societies and local cultures, ignores professionals' experience and tends to impose itself as a superior norm of technical excellence, with unprecedented scientific legitimacy. It induces a standardization of clinical conduct, paradoxically producing a potential effect contrary to the improvement of medical practice: the strengthening of a bureaucratic medicine , through the application of protocols/guidelines based on the reviews/meta-analyses. Pharmaceutical companies, able to fund clinical trials and meta-analyses and to influence the specialists involved in them, have become dominant actors with incomparable power over the production of medical knowledge, now an industrial production . Despite good intentions and several relevant successes, EBM has limitations, and a crisis has already been identified within it, owing to the values and interests that run through it. It has drifted away from its original proposal toward a new governance open to hidden interests. EBM has been appropriated by the interests of the pharmaceutical and medical device industries, which makes its reliability an illusion . The very increase in demands for higher quality in clinical trials and reviews, intended to remedy the biases of industry-sponsored studies, raises their cost and paradoxically makes them more dependent on industry . The volume of clinical guidelines has become enormous, again making it impossible for the clinician to keep up to date. Statistically significant benefits may be marginal in clinical practice, yet they come with the EBM label. Inflexible rules and guidelines via EBM can produce management-driven rather than person-centered care; they map poorly onto the complex multimorbidity common among PHC users , , .
The uncertainties and problems are so numerous that the concept of "broad medical uncertainty" (BMU) has been proposed to describe the current situation of distrust in medical knowledge . Particularly important is the already mentioned opacity of data, closed to independent scrutiny, despite near consensus and repeated calls for sharing primary data to improve reliability . Even so, the medical-scientific community has tolerated the primary data of industry-funded research being kept hidden, inaccessible to independent researchers. This digression on the problems of EBM is meant to establish that the evaluation of the evidence on any given additive preventive measure needs, not infrequently, to go beyond consulting the available evidence portals, the Cochrane Collaboration (the most respected producer of systematic reviews), national preventive services task forces and national institutional guidelines. This does not mean these sources should not be used, but they should not be the end of the evaluation. In several relevant cases, they should be the beginning. Where there is trustworthy (little or no conflict of interest), good-quality evidence, if the benefit-harm balance shows a large net benefit with few harms, a quick online search will confirm the consensus of the recommendation by showing a high level of evidence and/or a strong strength of recommendation (using GRADE - Grading of Recommendations, Assessment, Development, and Evaluations , ) or a grade A recommendation, in the classification of the United States Preventive Services Task Force (USPSTF) , for the preventive measure in question. In relevant cases, however, there is a grade B recommendation, which indicates poor quality of evidence and/or controversy in its interpretation, and then deserves more careful study. Two examples of high public health relevance illustrate this situation. Mammographic screening for breast cancer is recommended by all Western national preventive guidelines (except Switzerland's), but with a grade B recommendation (by the USPSTF). There is intense controversy in the specialized literature, with fewer benefits than previously estimated, several good-quality observational studies showing little or even no benefit, and serious harms (chiefly overdiagnosis and overtreatment) , . The evidence-based guidelines on the use of statins (blood cholesterol-lowering drugs) in P1, incorporated into manuals and clinical practice, had serious problems of conflict of interest and show small benefit and total opacity of the primary data on adverse effects, making a reliable benefit-harm balance unfeasible , . In the 2012 meta-analysis underpinning the recommendation of statin use in P1 , all the authors of the meta-analyzed trials were largely or entirely funded by pharmaceutical companies . The group that carried out the meta-analysis (funded by the same companies) did not have access to all the primary data on adverse effects, and no other group had access to any primary data, which were kept under industrial secrecy . The doubtful or opaque situation, or the scientific dissent, in these two cases, involving potentially or actually serious and extensive harms, justifies precaution (detailed below).
Once a doubtful additive preventive measure has been identified, an additional search in high-quality scientific journals can reveal the controversies and their grounds. In the two cases above, articles in the BMJ clarify and guide a critical analysis of the question, sometimes pointing to other articles and journals. Independent researchers synthesize the problems, and a careful consideration can and should be carried out. It may be necessary to act contrary to clinical and/or institutional guidelines in order to maintain excellence in preventive care and protect users. In these critical cases, the concepts presented above enrich the evaluation. However, one concept (and practice) that is already well developed but little used in preventive medicine is especially suitable and useful in doubtful cases: the precautionary principle (PP). The PP was born in Europe in the 1970s, in the context of the ecological crisis. It expanded and became consolidated in environmental law because of the need to take action in the face of large-scale ecological dangers while scientific doubts or controversies persisted about their causes (acid rain, sharp declines in fish stocks, global warming, the hole in the ozone layer, and so on). The PP holds that, faced with the danger of extensive and serious harm to the environment and to people, and even when there are scientific doubts about the causes, governments and regulatory agencies should act to protect against the harm: "...the lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation" (p. 157). Tesser & Norman summarized the operationalization of the PP in five components: (1) actively avoiding the harm generated by an activity or product in the face of uncertainty; (2) reversing the burden of proof regarding the suspect activity: it is its proponents who must prove its efficacy and safety; (3) exploring harmless alternatives for the same purposes as the suspect activity; (4) increasing public participation in decision-making; and (5) actively monitoring the state of scientific knowledge about the problem, since new evidence may change its assessment. There are arguments for and against, and distinct versions of, the PP. One proposal that understands it as a decision rule highlighted the common core of several versions of the principle through the so-called "decision tripod": (1) a harm condition (D), which specifies a threat of catastrophic harm that must be avoided; (2) an epistemic condition (E), indicating that the probability of this harm occurring is not negligible and that there are good epistemic grounds for taking the threat seriously; and (3) a suggested remedy (R), which recommends measures to avoid the catastrophe , , . In other words: if a foreseen outcome is considered harmful (D) and the prospect that the harm will materialize is sufficiently plausible (E), then precautionary measures (the suggested remedy) should be taken (R) . Steel , added two components to the tripod, restricting its application: (1) the proportionality rule, whereby precautionary measures must be calibrated to the degree of uncertainty and the severity of the feared consequences: the "remedy" must not be worse than the "disease", and the negative side effects of precautionary measures must be kept to a minimum; and (2) the principle of meta-precaution, whereby scientific uncertainty must not lead to paralysis in decision-making in the face of a threat of serious harm.
Hopster proposed the inverse-linkage rule: the greater the predicted catastrophe, the less evidence is needed to trigger precautionary action to avoid it; the smaller the catastrophe, the more evidence is needed to justify precaution. This rule only comes into play if evidence of a risk of harm exists and is minimally plausible and reasonable, which serves to avoid "precautionary paranoia": we must be able to ignore sufficiently improbable risks. In another approach, Sandin & Peterson argue that the PP is a mid-level moral principle that can be treated like the other four principles of Beauchamp & Childress , common in clinical and public health discussions: beneficence, non-maleficence, justice and respect for autonomy. The PP would be a fifth principle, to be activated in the circumstances discussed and "added" to the previous ones. Because of their great potential for harm, additive preventive measures (specific P1 and screening-type P2) are natural candidates for scrutiny under the PP in situations of uncertainty about the harm-benefit balance (a balance already routinely demanded in medicine and public health), given the strong weight placed on non-maleficence. This balance may be doubtful, with possible controversy in the interpretation of the evidence and/or scientific doubts making a consensual conclusion difficult. However, consensus on the size of the benefits and harms, or on their balance, is not necessary for a well-founded decision. The existence of reasonable scientific controversy about it, or the absence of consensus on a wide margin of net benefit with few harms, rendering the balance doubtful, demands application of the PP to avoid serious and/or extensive iatrogenic harm. A first operationalization of a corrective measure is theoretically simple and easy: do not implement the preventive measure, or reverse the recommendation for it if it is already in use. In specific P1 and in screening, all those involved generally share the same preventive interests, with no explicit conflicts, unlike almost all the contexts in which the PP is applied. Professionals, health system managers, governments, regulatory agencies and users all want to prevent avoidable diseases, deaths and suffering. If an additive preventive action is already used routinely, it has been approved and recommended by governmental, scientific and/or professional institutions (public health, regulatory agencies, scientific bodies, specialist associations). Hence, where applicable, the first precautionary step to avoid its iatrogenic harm is non-approval of the preventive action, or suspension of its positive recommendation if it is already in use. We propose the following formulation of the PP in prevention: the absence of scientific certainty that an additive preventive measure provides large net benefits with little iatrogenic harm should be sufficient for the non-implementation or suspension of that measure. This makes it possible to avoid iatrogenic harm without problems of proportionality, since suspending or reversing the recommendation has no costs or negative side effects. Because of the reversal of the burden of proof, discussion of a more restricted or tightly regulated use of an additive measure of specific P1 or screening-type P2 that has been refused or suspended for more generalized application should constitute a new evaluation process for the measure, now proposed for another use or context.
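A minimal sketch of how this formulation could be expressed as a decision rule is shown below. The field names, the way "certainty" is reduced to three boolean checks and the output wording are illustrative assumptions for this sketch, not part of the authors' proposal; only the overall logic (absence of certainty of large net benefit is itself sufficient reason to withhold or suspend an additive measure) comes from the text.

```python
from dataclasses import dataclass

@dataclass
class EvidenceAppraisal:
    # Illustrative simplifications of the criteria discussed in the essay.
    poem_outcomes_only: bool        # final, patient-oriented outcomes (POEM) reported
    large_net_benefit: bool         # broad net benefit with few harms demonstrated
    independent_data_access: bool   # primary data open to independent scrutiny
    serious_harm_plausible: bool    # e.g. overdiagnosis/overtreatment at stake

def recommend_additive_measure(e: EvidenceAppraisal) -> str:
    """Precautionary decision rule for an additive preventive measure
    (specific P1 or screening-type P2), sketched from the essay."""
    certain_of_benefit = (e.poem_outcomes_only
                          and e.large_net_benefit
                          and e.independent_data_access)
    if certain_of_benefit:
        return "may be recommended (re-evaluate periodically)"
    if e.serious_harm_plausible:
        # Absence of certainty is itself sufficient reason to withhold.
        return "do not implement / suspend positive recommendation"
    return "defer: seek further independent, high-quality POEM evidence"

# Example: mammography-like scenario with contested benefit and plausible harm.
print(recommend_additive_measure(EvidenceAppraisal(True, False, False, True)))
```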
Once the evaluation of any given preventive measure is complete, what remains is to promote, in the clinical context with real people, user participation, empowerment and shared decision-making. There is no space here to address these complex themes, which are crucial for good clinical effectiveness and a good professional-user relationship, but they rescue us from unilateral cognitive elaboration and return us to the context of the clinical interaction, which demands: a person-centered approach ; adequate communication skills , for an open and horizontal dialogue ; and an expanded approach that starts from the user's psychosocial and existential situation and accesses their perspective, in order to build with them a common ground of understanding about their situation and the preventive action contextualized within it, fostering their autonomy and enabling greater humanization of care. The best preventive proposal runs a great risk of failing if it is not contextualized, shared and agreed with the user. As this lies beyond the scope of this paper, we merely note and emphasize, without developing it, the need for this return to the professional-user relationship and its challenges in order to make good preventive clinical practice feasible. Several factors make the proposal presented here difficult. As we have seen, the presence of conflicts of interest in the production and interpretation of evidence is one of them. The power of the economic interests involved in EBM has biased and falsified the evidence and its interpretations, introducing, changing or maintaining preventive interventions , . Another factor is the broad social and cultural tolerance of harms derived from preventive actions, which are diluted among the adverse effects that are common and widely tolerated in medical interventions . Endorsed by biomedicine and by the State, this tolerance has spread and become socially and culturally legitimized, and can always be defended on the basis of previous successes (vaccines, antibiotics and so on). Add to this the belief, the "climate of opinion" and the moral value favorable to preventive actions, and the result is a situation of great sociocultural and political difficulty for the anti-interventionist stance, the skepticism and the rigorous application of the PP defended here for additive prevention. This situation facilitates the introduction of new additive preventive actions, even doubtful ones, since they often come packaged with the EBM label. Sharpening these factors, there is the phenomenon of the popularity paradox , : the strong support for some additive preventive actions is fed by the harmful effects of some of them, which cannot be perceived individually. This is the case of overdiagnosis, common in screening: correct diagnoses of diseases that would never have developed clinically during the person's life, but which currently cannot be distinguished from diseases that would progress (for example, in the screening of several cancers), so that all are treated (overtreatment). Since overdiagnosis is a phenomenon perceptible only in epidemiological data (collective and retrospective), professionals and users live in a kind of epistemic blindness : all those overdiagnosed feel saved by early detection and treatment and advocate screening, when they have in fact been seriously and permanently harmed.
Screening programs are the greatest producers of overdiagnosis, yet professionals and users receive only positive feedback for screening (from institutions and from the professional and sociocultural environment) , even when there is no clearly favorable benefit-harm balance. This phenomenon hampers the critical distance and anti-interventionist scientific skepticism defended here, pushing the PP away. Overdiagnosis occurs in large proportions in screening for breast, prostate, skin, thyroid and kidney cancer . Another hindering factor is the common belief in technological advance, taken to be always beneficial and powerful. It is naively believed that any harms, if they were tolerated by regulatory agencies and by physicians, have been offset by significant benefits, and that technological development will attenuate or avoid them in the future; consider, for example, the "promises" made for cancer prevention . Structural, systemic and institutional factors are also relevant. Even when professionals and users wish to resist doubtful interventions (not doing, doing less), the system within which care is provided can make this difficult . Audits and clinical guidelines generally push toward more interventions, especially preventive ones, and professionals find it hard to depart from them . Greater awareness of the potential for complaints and of the risk of litigation makes professional practice more defensive , which may lead to ordering more tests and treatments "for the safety" of the professional . Finally, another factor is the powerful preventivist emotional appeal prevailing in the environment of specialized care for chronic diseases (mainly cardiovascular diseases and cancers), which spills over into PHC, into health systems, and into society and the general culture. Specialists are a social and technical reference, with high sociocultural and epistemic legitimacy regarding certain diseases. This turns their personal opinion (and the respective specialist consensus statements and clinical guidelines) into a potent source of cultural, social and political power, generally in favor of preventive interventions. However, unlike the care of the sick, in which only a small portion of decisions is supplied by EBM, decisions on specific P1 and additive P2 "must be based strictly on the best and most up-to-date evidence, because it is the only reliable source of information" (p. 5). For this reason, the emotional appeal, attitudes, knowledge and experience of these specialists must be suspended and avoided in decisions about additive prevention . All the factors mentioned above create fictional expectations that attenuate the uncertainty of the future, emphasizing potential benefits rather than harms and producing a pro-interventionist, moralistic optimism that hinders the critical, skeptical, anti-interventionist attitude and the application of the PP. These difficulties do not, however, lessen the need to reduce harm and unnecessary medicalization and so improve preventive practices. We have presented a set of articulated concepts and criteria to be used by clinicians and managers in handling specific primary prevention and screening-type secondary prevention. They innovate by advocating the introduction of a radical estrangement and distrust, inducing an unprecedented scientific rigor and precaution (nonexistent today) with regard to preventive measures with significant potential for harm (additive preventive actions).
The main novelties are the emphasis on the watershed constituted by the distinction between reductive preventive actions (which reduce risks without artificial intervention) and additive ones (the addition of artificial protective factors), and the defense of applying the precautionary principle to doubtful cases of additive prevention, both of which are little discussed and little practiced in medicine and public health. The absence of this application has facilitated the introduction and persistence of doubtful or scarcely effective, iatrogenic preventive practices, which generate inequities and are fostered by the biomedicalized culture, by preventivist imperialism and by surveillance medicine, in synergy with hidden and corporate interests, arguments from authority (professional or scientific), and the manipulation or disregard of EBM . |
Acute Atherosis Lesions at the Fetal-Maternal Border: Current Knowledge and Implications for Maternal Cardiovascular Health | 60521308-07fe-4f81-956a-c8d10d3d4fa0 | 8712939 | Anatomy[mh] | Arterial lesions specific to the spiral arteries at the fetal-maternal border were first reported in 1945 ( ). These lesions were later termed acute atherosis, described as lipid-laden foam cells within the intima, surrounded by fibrinoid necrosis and perivascular immune cell infiltrate ( , ). Acute atherosis is associated with lower birthweight ( ) and lower placental weight ( ), and some studies show that acute atherosis may be correlated with an antiangiogenic profile ( , ), all three of which are indicators of placental dysfunction. Moreover, the well documented high concomitance of acute atherosis and preeclampsia and other obstetric syndromes suggests shared underlying mechanisms ( – ). Causal factors and effects of acute atherosis during pregnancy, as well as the long-term effects on maternal cardiovascular health, remain uncertain. There are several constraints on studying the acute atherosis lesions histologically. Systematic sampling of the decidua in large amounts following delivery is quite difficult, and a uniform, evidence-based research definition of acute atherosis lesions is historically lacking. As discussed below, both issues have been addressed by us ( , ). However, even when these constraints are overcome, only a subset of the decidua can realistically be evaluated in any morphological tissue study, thus one can only with certainty determine the presence of acute atherosis, and never the definitive absence. Acute Atherosis Sampling Methodology Nonuniformity in Acute Atherosis Definitions The ideal method for studying the impact of decidual acute atherosis on placental function requires specimens of a challenging nature; there are today only some very rare hysterectomy specimens of severely preeclamptic women with the placenta still in situ ( ). In one such published case report, acute atherosis of spiral arteries with severe narrowing of the vascular lumen were associated with substantial infarcted areas in the overlying parts of the placenta ( ). Moreover, in this case the lesions could be traced as deeply as the inner myometrium, implying that the severity of the placental defects may be related to the depth of the lesions. It is also noteworthy in this study that remodeling of the placental bed spiral arteries, including the myometrial segments, was absolutely normal in a few invaded spiral arteries at the very center of the placental bed. This suggests that non-invaded, more laterally situated vessels run a higher risk of developing acute atherosis, again highlighting the need for uniform sampling of relevant tissues for the study of acute atherosis, preferably of the whole placental bed. Several methods have been employed for sampling decidua basalis for research purposes. These include placental bed biopsies ( ), biopsies from the basal plate of delivered placentas ( ), and our method of vacuum suctioning the placental bed ( ). Of these sampling methods, placental bed punch or knife biopsy is the most invasive, but has the advantage of providing myometrium, which is needed if the goal is to study spiral artery remodeling or other features of this tissue ( ). Biopsies from delivered placentas is the least invasive sampling method, and will yield moderate samples of decidua basalis tissue. 
However, if the goal is to study decidua basalis alone, our vacuum suction technique is the superior method ( – ). The vacuum suction technique is performed during cesarean section, after delivery of the placenta, by applying vacuum suction to the uterine wall. This method has the advantages of unbiased sampling and a large tissue yield. It is also time efficient and without danger to maternal health ( ). One drawback is that tissue orientation is lost due to suctioning. Still, the vacuum suction method provides tissue applicable for acute atherosis research. We showed that higher rates of acute atherosis detection were achieved using vacuum suction samples, as compared to routinely sampled basal surface placental tissue and fetal membrane roll biopsies from the same pregnancies ( ). The rate of decidual acute atherosis is thus likely underestimated in most studies, and we recommend using the vacuum suction technique if the goal is to study the lesions independent of tissue orientation. Our large Oslo Pregnancy Biobank, consisting of decidual tissue collected during elective cesarean section, along with placental tissue biopsies, fetal (umbilical artery as well as umbilical vein) and maternal blood samples, amniotic fluid, and maternal muscle and fat tissue biopsies, has enabled multiple studies comparing the presence of decidual acute atherosis and dysregulated features of other anatomical compartments ( – , , , – ).
Historically, a uniform definition of acute atherosis has been lacking. This may have led to discrepancies in reported rates of acute atherosis across pregnancy groups. In addition, differences in patient populations studied as well as in tissue collection and evaluation methodology (e.g. antibody selection) may have contributed. Moreover, clear definitions of perivascular infiltrate (PVI) and fibrinoid necrosis have been lacking. We set out to address these issues by attempting to establish an evidence-based research definition of acute atherosis ( ). After examining 278 decidua basalis samples, we observed that perivascular leukocyte infiltrates and increased fibrinoid did not always correlate with adjacent foam cell lesions. Instead, we concluded that these are features of the decidual pathology of preeclampsia, while CD68 + foam cells are an essential aspect of acute atherosis ( ). Thus, we proposed that acute atherosis should be diagnosed solely by the presence of foam cell lesions, defined as two or more intramural, adjacent, vacuolated CD68 + cells. Nonetheless, throughout this review, we will include studies with other acute atherosis diagnosis criteria as well.
Hertig, who first described acute atherosis, proposed that vessel damage followed by lipophage infiltration is what initiates acute atherosis lesion development ( ). Endothelial injury has long been suspected as integral to decidual lesion development by others as well ( ). However, the lack of an association between acute atherosis and the severity and duration of hypertension, or antihypertensive treatment, implies that hemodynamic forces alone are not adequate for lesion development ( ). Similar to the heterogeneity of preeclampsia, acute atherosis is likely a multifactorial pathology with several pathways leading to an adverse uterovascular phenotype endpoint. We now have access to 75 years of research into the nature of the histological lesions known as acute atherosis, but have not finished elucidating the complexity of its etiological and mechanistic molecular constituents. One such potential constituent, however, may be endothelin-1 (ET-1) ( ). ET-1 is a highly potent vasoconstrictor ( ). It is upregulated by mechanical stretch ( ) and hypoxia ( ), and plasma ET-1 is elevated in preeclampsia and gestational hypertension ( , ). It exerts its effects by binding G-protein coupled receptors on vascular smooth muscle cells and endothelial cells ( ), a byproduct of which may be intracellular lipid accumulation ( , ). Thus, ET-1 may be a common trigger for lipid accumulation within endothelial cells in acute atherosis and in hepatocytes in the associated rare disease, acute fatty liver of pregnancy ( ). Another vasoconstrictor of interest is angiotensin II, which may play a role in the pathogenesis of atherosclerosis ( ). We have postulated a role in acute atherosis for activating antibodies against the angiotensin II type 1 receptor (AA-AT 1 ), after demonstrating a clear association between AA-AT 1 and preeclampsia ( – ). Based on our cesarean delivery population, we were unable to demonstrate any association between AA-AT 1 and acute atherosis ( ). However, angiotensin II is known to work synergistically with ET-1 ( ), and studying both of these vasoconstrictor systems simultaneously would shed more light on the possible involvement of G-protein cascades in acute atherosis development. The regulator of G protein signaling 2 (RGS2) likely has implications for ET-1 and AA-AT 1 signaling ( ). Interestingly, we have observed an association between acute atherosis and a genotype associated with lower RGS2 expression ( ). If G-protein cascades indeed cause intracellular lipid accumulation, as proposed by Coffey ( ), we would expect early acute atherosis lesions to contain lipid-laden endothelial or vascular muscle cells. Accordingly, arterial lesions containing vacuolated endothelial cells and myofibroblasts have been observed in first-trimester curetted endometrium samples from therapeutic and spontaneous abortions ( , ). A higher incidence of such lesions was observed in primigravida as compared to multigravida ( ). Primigravidity is associated with an increased risk of preeclampsia ( , ), often considered due to excessive inflammation, although evidence is lacking ( ). Whether these lesions are precursors to full-blown acute atherosis, and whether lipid accumulation within endothelial cells and myofibroblasts is indeed the insult that catalyzes intramural immune cell infiltration, remains to be investigated. Acute atherosis shares morphological features with early atherosclerotic lesions, which is recognized as an inflammatory disease of the arterial walls ( ). 
Both lesions present with increased numbers of intimal macrophages, lipid-laden foam cells, lipoprotein(a) throughout the vessel walls and extracellular droplets of lipid as well as similar expression of intracellular lipid-handling proteins ( , – ). Moreover, both acute atherosis and atherosclerosis are associated with preeclampsia and other states of systemic inflammation ( , ). However, there are several differences between acute atherosis and atherosclerosis. Firstly, vessel caliber differs enormously. Atherosclerosis is found in major arteries with a thick intimal layer, and the vessels are supplied with oxygen and nutrients from the vasa vasorum ( ). Notably, the vasa vasorum may be instrumental as a source of lipids in the development of atherosclerosis ( – ). Spiral arteries are much smaller and do not have an external blood supply. Preeclampsia is associated with elevated lipid content in decidua basalis tissue, which may act as a source of lipid compounds for lesion development ( ). Moreover, acute atherosis is not associated with plasma lipid contents, further indicating a local rather than a systemic excess of lipids ( ). Secondly, research into the molecular composition of acute atherosis versus atherosclerosis reveals several dissimilarities. For instance, we have observed no LOX-1 positive endothelial cells or foam cells within the lesions of spiral arteries ( ), while this lipid scavenger receptor is a key contributor to atherosclerotic development ( ). Finally, endothelial activation is important in atherosclerosis ( ), whereas evidence is conflicting with regards to endothelial status in acute atherosis. Although one study reported endothelial and interstitial extravillous trophoblast ICAM-1 expression in placentas with acute atherosis ( ), we were unable to detect any ICAM-1 expression within the acute atherosis lesions ( ). Moreover, the endothelial lining of the artery wall is often destroyed in acute atherosis and there is evidence of leakage of fibrin-like material from the circulation into the vessel walls ( , , ). Accordingly, in a study of women with preeclampsia, we demonstrated elevated levels of thrombomodulin – a marker of endothelial dysfunction ( ) or damage ( ) – in those who had concomitant acute atherosis ( ). Acute atherosis is not found outside of the uterus ( ). The lesions are focal and patchy, mainly localized downstream in the circulation, at the tips of the decidua basalis spiral arteries. The major remodeling problems occur upstream in the myometrium ( , ). Yet, there is a link between acute atherosis formation and poor remodeling, as lesions are commonly found downstream of inadequately remodeled spiral arteries ( ). The fully remodeled decidual segment of the spiral artery may be considered as being somewhat “naked” and is then likely more at risk for attacks both from the inside (by luminal, circulating factors) as well as from the outside (by components in the surrounding decidual tissue) in addition to being especially exposed to ischemia-reperfusion injury due to turbulent blood flow. Specifically, we postulate that these areas are especially exposed to local inflammatory signaling molecules. This may be compounded by their unique local environment close to the semi-allogenic fetal cells and the resulting inflammatory changes, potentially explaining why this uteroplacental location seems to be a prerequisite for the development of these particular atherosis lesions. 
Inflammation does indeed appear to be clearly linked to acute atherosis development. Spiral arteries affected by acute atherosis contain large deposits of IgM, as well as smaller amounts of IgG and IgA, within the arterial wall. In addition to immunoglobulins, complement component 3 (C3) is often observed within acute atherosis lesions ( – ). Early immunohistochemistry studies of the leukocyte infiltrate also demonstrated T lymphocytes in acute atherosis ( ). We later expanded on these findings, demonstrating increased concentrations of CD3+, CD8+ and CD3+CD8- intramural T cells in the walls of spiral arteries with acute atherosis compared to arteries from samples without acute atherosis ( ). Higher numbers of CD3+ and CD3+CD8- T cells were also observed in the surrounding perivascular space. Furthermore, a study by Gill and colleagues using flow cytometry of basal plate samples demonstrated higher numbers of M1-macrophages, displaying a pro-inflammatory phenotype, in acute atherosis ( ). Fluorescence staining revealed M1-macrophage localization within the vessel walls of spiral arteries affected by acute atherosis. Interestingly, a recent study from India showed that the incidence of acute atherosis – after exclusion of placenta-associated syndromes such as fetal growth restriction, pregnancy hypertension and diabetes – was significantly higher in asymptomatic or mildly symptomatic SARS-CoV-2 positive pregnant women than in SARS-CoV-2 negative pregnant women ( ). Finally, further support for acute atherosis representing an inflammatory lesion comes from its resemblance to systemic vasculitis, a general term applied to inflammation of vessel walls that progresses to fibrinoid necrosis ( , ).
We hypothesize that several mechanisms may trigger acute atherosis. Our hypothesis places inflammation at the center of lesion development and as the final common pathway converged upon by these different triggers ( ). We believe these mechanisms (maternal alloreactivity, ischemia-reperfusion injury, preexisting systemic inflammation, and microbial infection) may act individually or in concert to produce acute atherosis, as illustrated in . Human pregnancy is a dynamic balancing act for the maternal immune system. The fetal allograft must peacefully coexist with the maternal immune and cardiovascular systems, whilst the mother and fetus simultaneously remain protected against microbial infections. Under any other circumstances, immune cells would quickly target genetically foreign tissue. However, a cascade of immune-modulating molecules acting throughout pregnancy enables the conceptus to evade rejection until parturition. The innate immune system is strengthened, while adaptive immunity is weakened ( ). Serial blood samples collected at different time points during pregnancy have revealed precise timing of particular immunological changes ( ). Aghaeepour and colleagues suggest deviations from this “immune clock of human pregnancy” could indicate pregnancy-related pathologies. The fetus inherits half of its genetic material from the mother, and the other half from the father. Given the extremely high variability in major histocompatibility complex (MHC) genetics, it is unlikely that the paternally inherited fetal MHC alleles are identical to the maternally inherited fetal MHC alleles. Fetal trophoblasts circumvent this obstacle and avoid rejection by local maternal immune cells by not expressing most classical MHC class I or class II surface molecules ( ). They do, however, express human leukocyte antigen (HLA) C and the non-classical MHC molecules HLA-G, HLA-E and HLA-F ( – ). HLA-C and HLA-G attenuate immune activation by binding to killer immunoglobulin-like receptors (KIR), which are abundantly expressed on maternally derived immune cells ( , ). KIR activation prevents cytotoxicity by alloreactive T-cells ( ), and may induce apoptosis of activated T-cells and NK cells ( , ). Invading extravillous trophoblasts rely on these KIR/HLA interactions to avoid immune cell attacks. In fact, there is a positive correlation between the amount of surface HLA-G expression and the depth of trophoblast invasion into the decidua ( ). Several factors may influence these tolerogenic pathways. As HLA-C is the only polymorphic histocompatibility antigen expressed by fetal cells at the fetal-maternal interface, the paternal HLA-C genotype is particularly important. In fact, HLA-C mismatched pregnancies are characterized by a higher percentage of activated maternal T-cells ( ). Moreover, the combination of fetal HLA-C and maternal KIR genotypes may greatly predispose pregnancies to preeclampsia ( ). We have expanded on this finding by showing that the combination of fetal HLA-C2 with the maternal KIR-B haplotype was significantly associated with acute atherosis in preeclampsia ( ). Similarly, inadequate induction of tolerance by HLA-G is detrimental to pregnancy health ( ). Membrane-bound HLA-G expression is lower on trophoblasts from preeclamptic placentas ( , ). Circulating soluble HLA-G (sHLA-G) is also lower in preeclampsia throughout all trimesters, compared to pregnancies that remain normotensive ( ). 
We have shown that maternal sHLA-G inversely correlates with the level of placental dysfunction, the latter evaluated by maternal levels of the antiangiogenic factor sFlt-1, or by the sFlt-1/PlGF ratio ( ), and that fetal polymorphisms in the 3’UTR region of HLA-G are associated with presence of acute atherosis in preeclampsia ( ). Our hypothesis is that these polymorphisms lead to altered HLA-G expression in the decidua basalis, affecting local fetal-maternal immune tolerance and in turn promoting development of acute atherosis. Failure to establish fetal-maternal tolerance may also influence trophoblast invasion into the decidua. These extravillous trophoblasts are involved in the plugging and remodeling of uteroplacental spiral arteries in early pregnancy ( ).
During the first 10 weeks of pregnancy, the spiral arteries extending from the placenta to the endometrial surface of the decidua are effectively plugged, and the fetus exists in a state of physiological hypoxia ( , ). At the end of the first trimester, the maternal vessels of the decidua open up and the placenta is submerged in maternal blood ( ). This marks a dramatic shift in fetoplacental exposure to the maternal cardiovascular and immune systems. Throughout pregnancy, uteroplacental blood flow increases, reaching up to 750 ml/minute at term, about 25% of maternal cardiac output ( ). At the same time, the approximate 5-fold increase in diameter of the terminal coils of fully remodeled spiral arteries dramatically slows down the speed of the blood entering the intervillous space ( ). Failure of proper spiral artery remodeling results in downstream placental malperfusion. The retention of smooth muscle within the arterial wall likely causes ischemia-reperfusion injury ( ), rather than impaired flow volume or uteroplacental hypoperfusion. In addition, we argue that placental malperfusion may not be exclusively secondary to failure of spiral artery remodeling. Failed remodeling may be considered an “external” cause of placental malperfusion and is typically seen in early-onset preeclampsia. Late-onset preeclampsia affects a greater proportion of women than the early-onset form, and spiral artery remodeling is rarely affected. In this setting, malperfusion may be caused by two “internal” pathways. In one pathway, placental senescence causes a syncytiotrophoblast stress response as the pregnancy progresses towards term and thereafter. The other pathway occurs with particularly large placentas, such as in multiple gestations, in which compression leads to placental congestion and thereby malperfusion ( , ). The ensuing disturbances in calcium homeostasis may cause endoplasmic reticulum stress and initiate the unfolded protein response, ultimately leading to cell death ( , ). Moreover, intracellular buildup of reactive oxygen species induces upregulation and secretion of the proinflammatory cytokines TNF and IL-1β ( , ). ER stress and oxidative stress in placental tissues are both features of preeclampsia ( , , ), and may also play a role in the strongly associated acute atherosis lesion development. High pressure and turbulent blood flow may also damage the endothelial lining of the terminal coils of spiral arteries, as well as the syncytiotrophoblasts coating placental villi. Analysis of hemodynamic forces on vascular endothelial cells has shown that disturbed blood flow and continuous low-grade shear stress acting on the arterial wall may promote atherogenesis ( ). This is in line with the increased incidence of atherosclerotic lesions at arterial branch points or sections with high curvature ( , ). Endothelial cells possess surface molecules capable of detecting shear stress and inducing gene transcription through the Ras-MAP kinase signaling pathway ( ). Among several changes in gene expression is a transient upregulation of monocyte chemotactic protein 1 (MCP-1) ( ). Overexpression of MCP-1 attracts macrophages and may induce their infiltration into vessel walls ( ). Interestingly, endothelial MCP-1 is upregulated in preeclampsia ( , ), possibly due to atherogenic blood flow, and may be involved in CD68+ cell recruitment to the sites of lesion development ( , ). Ferroptosis is a recently discovered mode of iron-dependent cell death ( , ). 
Lipoxygenases and other enzymes may induce ferroptosis in a controlled manner ( ). In addition, and more relevant to the topic at hand, ferroptosis may occur due to iron dysregulation and free-radical chain reactions, leading to hydroxyl and peroxyl radicals ( ). Recently, ferroptosis has gained attention as a possible target against ischemia-reperfusion injury ( – ). Thus, ferroptosis may play a role in early-onset preeclampsia following incomplete spiral artery remodeling. Interestingly, huge amounts of iron have been observed in atherosclerotic lesions ( ), and lipid peroxidation is known to play a significant role in atherogenesis ( ). Moreover, animal experiments have shown alleviation of atherosclerosis through inhibition of ferroptosis ( ). However, the role of iron-dependent cell death due to ischemia-reperfusion in acute atherosis development remains to be investigated.
Systemic inflammation is associated with many chronic diseases, such as obesity, diabetes mellitus and cardiovascular disease, as reviewed in ( – ). Considering that even normal pregnancy is associated with elevated systemic inflammation ( ), one would expect heightened baseline inflammation to be associated with higher rates of obstetric complications linked to the maternal immune system. This is indeed what is observed. Obesity, diabetes mellitus and high blood pressure are all substantial risk factors for miscarriage ( – ), preeclampsia ( , ) and fetal neurodevelopmental disorders ( – ). In line with the effects of other chronic inflammatory conditions on pregnancy, pregnant women with autoimmune disease experience higher rates of hypertensive disorders of pregnancy, intrauterine growth restriction, preterm delivery and autism spectrum disorder in their offspring ( – ). Interestingly, autoimmune disorders are also commonly observed in association with acute atherosis ( , ). In fact, lesions have been observed as early as the first trimester in women with systemic lupus erythematosus ( ).
Many tissues first thought to be sterile have been shown to harbor dormant bacteria. This includes blood ( , ), seminal fluid ( ) and possibly the placenta ( ) – although this latter claim is disputed ( ). The source of these bacteria may be the gut ( ), the oral cavity ( ) or the urinary tract ( ). The iron dysregulation and dormant microbes hypothesis proposes these bacteria may be resuscitated from dormancy by free iron and manifest a diverse range of chronic inflammatory diseases previously thought to not possess infectious properties such as preeclampsia and atherosclerotic disease ( ). Viable bacteria release lipopolysaccharide (LPS) or lipoteichoic acid (LTA). This initiates a cascade of immune responses, including a dramatic upregulation of many circulating cytokines and other acute phase signaling molecules like serum amyloid A1 (SAA1) and C-reactive protein ( , ). As extensively reviewed by Kell and Kenny, there are several lines of evidence pointing towards microbial contribution in the development of preeclampsia ( ). There is high co-occurrence between bacterial infections and preeclampsia. Examples include Chlamydia pneumoniae ( , ) and Helicobacter pylori ( ). Several biomarkers associated with preeclampsia have also been linked to microbial infections, including sFlt-1/PlGF ratio ( , ) and SAA1 ( ). In support of the concept that preeclampsia development has a microbial component, is the virtual absence of preeclampsia in pregnancies with Toxoplasma gondii infection treated with anti-microbial medication (spiramycin) ( ). Kell and others have also argued for the existence of a microbial component in atherosclerosis with quite compelling evidence ( , , ). Patients with chronic bacterial infections are substantially more at risk for atherosclerosis ( , ), and LPS is regularly used to generate animal models of the disease ( , ). Moreover, atherosclerotic plaques contain bacterial DNA ( , ) and elevated levels of iron ( ), adding credence to the iron dysregulation and dormant microbes hypothesis. The similarities to atherosclerosis have led others to speculate that an infectious trigger underlies the development of acute atherosis as well ( , ). However, this matter remains unsettled.
Cellular fetal microchimerism (cFMC) arises when cells of fetal origin are transferred to maternal blood and tissues during pregnancy ( ). These cells are known to possess stem cell-like properties, capable of differentiating into endothelial cells, smooth muscle cells and even leukocytes ( , ). During pregnancy, a substantial amount of fetal material leaks into the maternal circulation. While cell-free fetal DNA and other debris are rapidly cleared following delivery ( ) and are likely completely absent from maternal systems postpartum, cFMC may persist for decades. In fact, fetal cells have been observed in maternal circulation up to 27 years postpartum, indicating that these cells may inhabit maternal systems throughout life ( ). Restorative as well as detrimental effects have been attributed to cFMC, possibly tied to fetal-maternal histocompatibility ( ). Of particular interest in the context of this review is the apparent detrimental effect of cFMC on autoimmunity ( , ). The trigger has been proposed to be a maternal alloimmune response towards fetal cells expressing foreign HLA surface peptides ( ). By far the majority of patients diagnosed with autoimmune disorders are women ( ), which could partly be due to the acquisition of fetal cells during pregnancy. In comparison to healthy pregnancies, circulating fetal microchimeric cells are more prevalent in pregnancies complicated by preeclampsia ( , ) or severe fetal growth restriction combined with impaired placental perfusion ( ). Whether this is due to increased cell transfer, reduced clearance or reduced migration from maternal blood into maternal tissues is unclear ( , ). We hypothesize that when the placenta is dysfunctional, fetal cells leak more freely into maternal blood and subsequently other tissues. These cells may then inhabit maternal vessels, in the presence or absence of endothelial damage, and induce a maternal anti-fetal immune response towards the vascular endothelium. This could explain the association between male cFMC and an increased cardiovascular mortality hazard ratio ( ). However, this last observation was based on a total of only 5 cardiovascular deaths. A recent study of a much larger cohort found that male-origin microchimerism was associated with a reduced risk of ischemic heart disease and found no association between microchimerism and ischemic stroke ( ). Microchimerism is not unique to pregnancy. Low levels of donor cells may be acquired following solid organ transplantation ( ). The presence of such microchimerism has been linked to graft acceptance ( ) as well as graft rejection ( ). As with autoimmune diseases and cFMC, the effect of donor microchimerism may depend on how well the host tolerates the graft. Interestingly, vascular lesions have been observed in arteries surrounding transplanted organs following kidney transplant rejection ( ), after liver transplantation ( ) and after heart transplantation ( ). These lesions are characterized by large amounts of lipids, IgM and C3 ( ), and thus bear a striking resemblance to acute atherosis. Our novel hypothesis states that placental dysfunction leads to augmented cFMC. If persistent in the circulation or alternatively engrafted in maternal endothelium, these semi-allogenic cells could cause further inflammation, particularly in vessel walls, and initiate the development of inflammatory arterial lesions such as acute atherosis. 
As cFMC persist decades after pregnancy, there may be a role for these cells in the pathogenesis of chronic cardiovascular and neurovascular disease in the long term as well. We believe this hypothesis merits further testing in translational clinical studies.
Acute atherosis may have clinical implications both during pregnancy and postpartum, including for targeting the women at highest risk of premature cardiovascular disease. We have put forth the hypothesis that women with concomitant preeclampsia and acute atherosis are at especially high risk for developing atherosclerosis and premature cardiovascular disease ( ). If pregnancy is a physiological stress test ( , ), then preeclampsia is an unmasking of compromised maternal cardiovascular health. Preeclampsia has repeatedly been linked to future maternal cardiovascular disease in one form or another ( – ). The severity of preeclampsia also correlates with the incidence rate of ischemic heart disease ( ). Moreover, fetal manifestations of placental dysfunction, such as intrauterine growth restriction, further add to maternal risk of cardiovascular disease ( ). Abnormal placentation also associates with infertility or subfertility, both of which are associated with poor long-term cardiovascular health ( , ). As described above, acute atherosis may disturb placental perfusion ( , ), and is associated with low birth weight ( ) and low placental weight ( ). Acute atherosis may thus play a role in producing and/or exacerbating maternal and fetal symptoms of placental dysfunction, which are associated with high cardiovascular disease risk. There are striking similarities between acute atherosis and atherosclerosis, indicative of shared pathophysiology. We know that preeclampsia is associated with a substantial atherosclerotic load ( ). Compared to normotensive women and women with gestational hypertension, women with preeclampsia have higher carotid intima-media thickness (CIMT) ( , ), which is increasingly used as a surrogate marker for preclinical atherosclerosis ( ). In some studies, these differences remained evident up to 18 months postpartum ( , ). However, other studies with longer follow-up did not corroborate these findings ( , ). Notably, one study unexpectedly reported thinner CIMT 7 years postpartum in women with preeclampsia compared to controls ( ). Instead, they found increased intima thickness as well as intima-media ratio in cases versus controls. The authors suggest that these measures are preferable to the conventional CIMT for assessing cardiovascular disease risk in women with a history of preeclampsia. As lesions of the placental bed manifest during a shorter time span compared to atherosclerosis (possibly due to the proximity to the foreign fetus and the excessive inflammation of pregnancy), acute atherosis may possibly be used as an indicator of women with excessive atherosclerotic load ( ). Several studies have examined the link between decidual lesions and subsequent cardiovascular health. Two retrospective cohort studies by the same research group have revealed long-term cardiovascular consequences of decidual vasculopathy ( , ). In these studies, decidual vasculopathy was defined by vascular fibrinoid necrosis and lipid-filled foam cells in the vascular wall, thus sharing some of the features traditionally used to define acute atherosis ( ). The first study examined cardiovascular parameters 2-74 months postpartum in women whose index pregnancies were complicated by preeclampsia. Women with concomitant decidual vasculopathy and preeclampsia had higher diastolic blood pressure, lower left ventricular stroke volume and higher total peripheral vascular resistance as compared to women with only preeclampsia ( ). Decidual vasculopathy did not, however, correlate with circulating lipids or thrombophilia postpartum. 
The second study demonstrated a higher prevalence of chronic hypertension several years postpartum in women who had concomitant decidual vasculopathy and preeclampsia, compared to women who only had preeclampsia. These results remained significant after correcting for chronic hypertension before index pregnancy ( ). A small study comprising only 3 cases of acute atherosis showed higher levels of triglycerides and low-density lipoprotein in these women on the first day postpartum as compared to women without acute atherosis ( ). Our group conducted a larger study where we also measured triglycerides and cholesterols (among other circulating biomarkers) the day of cesarean section. We observed no differences in circulating cardiovascular biomarkers between women with acute atherosis and women without. However, when restricting the analyses to women of advanced maternal age (age 36-44) we observed significantly elevated low-density lipoprotein and ApoB in women with acute atherosis ( ). These studies highlight the potential use of acute atherosis in targeting women at particularly high risk of cardiovascular disease – a concept promoted by us and others previously ( , ). Utilizing the more readily detected maternal vascular malperfusion lesions of the placenta has also been suggested ( ), although some claim cardiovascular disease risk should be linked with atherosclerotic lesions of the uteroplacental artery instead of decidual basal artery or placental lesions ( ).
Acute atherosis is associated with arterial thrombosis, placental infarction and perinatal death ( , , , , ). This has led researchers to propose aspirin as a possible treatment for acute atherosis ( ). Aspirin is an effective prophylactic treatment against thrombosis ( ). Outside of pregnancy, aspirin is widely used for primary and secondary prevention of atherosclerotic CVD ( ). Daily low-dose aspirin started during the first trimester has been shown to substantially reduce the risk of preeclampsia in women at high risk ( – ). Whether this effect is related to a reduced tendency toward blood clot formation, and how this may relate to acute atherosis, remains to be investigated. Acute atherosis lesions may also effectively reduce the diameter of uteroplacental spiral arteries, causing aberrant blood flow ( , ). The evidence for this latter claim is, however, conflicting. Supporting it, an early study of only 6 cases showed a trend toward an association between acute atherosis and a high uterine artery pulsatility index, which could reflect obstructed spiral artery blood flow ( ). Furthermore, another research group showed that acute atherosis is associated with a higher incidence of placental lesions characteristic of maternal vascular hypoperfusion ( ). Placental lesions are also associated with abnormal uterine velocimetry measurements among women with intrauterine fetal growth restriction ( ). In contrast to these findings, one study found no association between uterine artery pulsatility index and acute atherosis ( ). The reason for these discrepant findings may be that the spiral arteries are not visualized directly by Doppler studies, but rather indirectly by studying the blood flow of the larger uterine arteries ( , – ). Uterine and spiral arteries differ greatly in structure, size and function. Uterine artery ultrasonography has historically been viewed as a reflection of spiral artery remodeling ( ), but recent studies indicate that the radial arteries, as well as the maternal vascular system, may have a larger impact on uterine artery waveforms than the spiral arteries ( ). Doppler ultrasonography is hence unlikely to be a reliable tool for diagnosing uteroplacental acute atherosis.
As outlined above, there is substantial evidence backing up the concept of acute atherosis as a pregnancy-specific inflammatory arterial lesion. However, many uncertainties regarding acute atherosis remain. Firstly, the risk factors and triggers that initiate lesion development have not been fully elucidated, though there seems to be a consensus among researchers that endothelial damage is involved. Damage could stem from ischemia-reperfusion injury, infections or excessive activation of G-protein cascades, to name a few possibilities. Further research into the molecular constituents of acute atherosis, in particular at early lesion stages, could shed some light on this issue. Secondly, there are many candidates for drivers of inflammation and lesion development following endothelial insult. Further knowledge of which pathways play a substantial role could guide the development of prophylactic treatments for obstetric syndromes tightly associated with acute atherosis. This includes our suggestion of testing whether the use of anti-atherogenic statins during severe preeclampsia or fetal growth restriction, such as in women with systemic lupus, may ameliorate acute atherosis, improve uteroplacental perfusion and enhance pregnancy outcome ( ). Thirdly, the long-lasting implications of pregnancies affected by acute atherosis for maternal health need further research. There is a clear lack of studies with hard endpoints to show whether acute atherosis, as we have proposed, can be used to identify women at substantial risk for premature cardiovascular disease and death.
DPJ wrote the review. HEF, GMJ, IKF, KM, PA-K, RD, MS, and ACS revised the manuscript and gave expert scientific input on its content. All authors contributed to the article and approved the submitted version.
DPJ and HEF receive salaries for the MATCH (ref. 2019012) and FETCH (ref. 2017007) studies funded by the South-Eastern Norway Regional Health Authority, as well as the BRIDGE (ref. 313568) study funded by the Research Council of Norway.
Author RD was employed by the company HELIOS-Klinikum GmbH. The handling editor declared a past collaboration with one of the authors, AS.
Angiogenesis in Glioblastoma—Treatment Approaches | 1539305c-6057-472b-b852-8d9c8b2988e3 | 11941181 | Pathologic Processes[mh] | Glioblastoma (GBM) stands as one of the most aggressive and prevalent primary brain tumors originating from glial cells within the central nervous system (CNS) . Classified as a high-grade glioma (grade IV glioma differentiating from astrocytic cells) , GBM is the most aggressive form, accounting for 54% of CNS gliomas in the US, with the annual incidence ranging from 3.19 to 4.17 per 100,000 person–years . In the pediatric population (0–18 years), the incidence is lower (0.85 per 100,000), and pediatric GBM represents 3–15% of primary brain tumors . GBM predominantly affects individuals aged 50–60, with a higher prevalence in men . It occurs more frequently supratentorially, often in the frontal lobe, with the brainstem and cerebellum being the rarest locations . GBM classification is based on IDH (isocitrate dehydrogenase) status, a distinction introduced in the 2016 WHO classification of CNS tumors, incorporating both histopathological and molecular criteria . IDH wild-type GBM (90% of cases) peaks in incidence during the sixth decade of life, while IDH-mutant GBM (10%) occurs earlier, in the fourth and fifth decades, and carries a better prognosis . The classification also includes GBM, NOS (not otherwise specified), and NEC (not elsewhere classified) for cases with unclear IDH status or those not fitting existing categories . The 2021 WHO classification further emphasizes and refines the molecular characterization of brain tumors . Brain tumor symptoms are broadly classified as general or focal. Small, low-grade tumors tend to cause focal symptoms, while larger, more advanced tumors present with generalized symptoms . General symptoms include headache, vomiting, nausea, cognitive and emotional impairment, and sensory deficits . Due to the often asymptomatic nature of early-stage GBM multiforme, diagnosis typically occurs at later stages . GBM symptoms are non-specific and depend largely on tumor location but commonly include headache, nausea, visual disturbances, motor problems, seizures, personality changes, and significant memory impairment . Patients with high-grade gliomas are also prone to depression and anxiety, impacting their well-being and quality of life . Cancer-related fatigue is another common symptom . Epilepsy, triggered by tumor growth and invasion along white matter fiber tracts, is also frequently observed in glioma patients . Studies have shown that glioma-related epilepsy affects the white matter fiber microstructure within the tumor itself . Rapid and accurate diagnosis is crucial for timely and effective treatment of brain tumors. Magnetic resonance imaging has become the gold standard for glioma evaluation, providing clear visualization of anatomical structures without the interference of skull artifacts . MRI employs various modalities, including T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and T2-weighted FLAIR, to differentiate between tissue types . However, manual review of MRI scans is time-consuming and susceptible to human error. Consequently, there has been a surge in research exploring the application of artificial intelligence and deep learning to enhance diagnostic accuracy and efficiency, minimizing the risk of misdiagnosis [ , , ]. GBM presents a formidable therapeutic challenge due to its high recurrence rate despite aggressive treatment . 
It remains an incurable cancer with a median survival of approximately 15 months in treated patients and a 5-year survival rate of only 5%, influenced by age at diagnosis, molecular characteristics, extent of resection, and treatment response. The current standard of care involves maximal safe surgical resection followed by concurrent chemoradiation therapy . This typically consists of 60 Gray of radiation delivered in 30 fractions over 6 weeks, combined with daily temozolomide administration, followed by adjuvant temozolomide . However, treatment efficacy is limited by factors such as local tumor invasion and infiltration, the blood–brain barrier hindering chemotherapy penetration, and the development of multidrug resistance . Consequently, novel therapeutic strategies are being actively explored, including immunotherapy approaches such as peptide vaccines, dendritic cell vaccines, chimeric antigen receptor T-cell therapy, checkpoint inhibitors, and oncolytic virotherapy [ , , ]. While some of these methods have shown promise, significant challenges remain, particularly immunosuppression within the tumor microenvironment . A hallmark of GBMs is their highly vascular nature, with angiogenesis playing a central role in their growth and survival. Endothelial cells construct the tumor’s blood vessels, supplying essential oxygen and nutrients. Furthermore, these vessels directly promote the proliferation of GBM progenitor cells via intercellular signaling pathways, further enhancing tumor development and viability . This intricate relationship between angiogenesis and tumor progression has become a focal point for therapeutic development, with researchers actively pursuing drugs that target these mechanisms to improve prognosis and long-term survival for brain tumor patients.
Angiogenesis, the formation of new blood vessels from existing ones, plays a crucial role in both physiological and pathological processes . This process is tightly regulated by a balance of pro- and anti-angiogenic factors. In GBM, angiogenesis occurs in the later stages of tumor development, primarily within the necrotic niche ( ). Key angiogenic factors driving this process include hypoxia-inducible factor 1 (HIF-1α), vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF), angiopoietin-1 (Ang-1), and angiopoietin-2 (Ang-2). Hypoxia serves as the primary trigger, activating HIF-1α, which in turn regulates the expression of growth factors , metabolic proteins , matrix components , and adhesion molecules . Oxygen deprivation stimulates the release of VEGF , FGF , and angiopoietins , which bind to their respective receptors on endothelial cells. This binding initiates the dissolution of the vessel wall, degradation of the endothelial basement membrane (ECM) and extracellular matrix, and remodeling of the ECM by matrix metalloproteinases (MMPs). Subsequently, stromal cells synthesize new matrix components, promoting endothelial cell proliferation and migration, leading to the formation of tube-like endothelial structures . The final stage involves the development of a mature vascular basement membrane around these newly formed structures, with pericytes and smooth muscle cells (mural cells) providing support and stability to the nascent vessels. Tumor-associated macrophages (TAMs) play a significant role in GBM angiogenesis . Constituting up to 30% of the tumor microenvironment, these macrophages originate from blood monocytes or activated microglia within brain tissue . TAMs secrete a variety of inflammatory cytokines and growth factors, including TGF-β (Transforming Growth Factor-β), VEGF, IL-10 (Interleukin-10), and TNF-α (Tumor Necrosis Factor-α), which promote endothelial cell survival and proliferation, thereby supporting angiogenesis, tumor immunosuppression, and metastasis . Additionally, TAMs indirectly contribute to tumor progression by activating other immune cells and remodeling the extracellular matrix . Consequently, a high TAM presence correlates with increased tumor growth, poorer patient prognosis, and a greater risk of recurrence after treatment . Angiogenesis is essential for tumor growth and progression. Tumors develop abnormal, immature blood vessels characterized by larger size, irregular paths, varying lumen diameters, high permeability, and erratic branching . This abnormal vasculature is permeable to plasma and its proteins, leading to local edema and extravascular clotting . Consequently, interstitial pressure increases, disrupting blood flow and leukocyte infiltration . The flawed basement membrane and lack of proper perivascular connective tissue facilitate tumor cell dissemination . Compression and leakiness within the tumor vasculature obstruct the delivery of oxygen, nutrients, and therapeutic drugs, resulting in ischemia, necrosis, and a hypoxic environment that further stimulates HIF-1 activation and angiogenesis [ , , ]. This disordered vasculature profoundly alters the tumor microenvironment, influencing growth, metastasis, and treatment resistance. Therefore, targeting tumor vasculature and inhibiting associated growth factors and signaling pathways represent promising therapeutic strategies. 
Vascular endothelial growth factors are essential regulators of blood and lymphatic vessel formation and function in both physiological and pathological contexts . Within the central nervous system, VEGF plays a role in angiogenesis, neuronal migration, and neuroprotection. However, as a permeability factor, excessive VEGF levels can disrupt intracellular barriers, increase choroid plexus endothelial leakage, induce edema, and activate inflammatory pathways . The VEGF gene, located at 6p21.3, belongs to the cysteine knot growth factor superfamily, which also includes PDGF, NGF, and TGF-β . In mammals, the VEGF family comprises VEGF-A, VEGF-B, VEGF-C, VEGF-D, and placental growth factor, encoding structurally homologous glycoproteins . These proteins form homodimers or heterodimers via cysteine disulfide bridges, a crucial step for their biological activity . VEGF-A, the most extensively studied member of the VEGF family, exists in various isoforms, such as VEGF-A121, VEGF-A165, and VEGF-A189 . These isoforms arise from alternative mRNA splicing, resulting in differences in bioavailability and biological potency . Vascular endothelial growth factor-A represents the principal angiogenic factor expressed within the solid tumor microenvironment . Tumor cells, in particular, secrete high concentrations of VEGF-A, driving cell proliferation, migration, and angiogenesis. VEGF-B is highly expressed in the heart, particularly in cardiomyocytes, where it plays a crucial role in regulating myocardial contractility and protecting cardiomyocytes from ischemic and apoptotic damage, leading to physiological hypertrophy . It is also involved in cardiac remodeling after myocardial infarction . VEGF-C and VEGF-D are key players in lymphangiogenesis, promoting the proliferation of lymphatic endothelial cells . VEGF-C, in particular, is implicated in promoting lymphangiogenesis in various cancers . Furthermore, VEGF-D expression by cancer cells is known to facilitate metastasis . Placental growth factor (PlGF), the final member of the VEGF family, stimulates the growth of endothelial and smooth muscle cells. In conjunction with VEGF-B, PlGF participates in monocyte differentiation and activation. Elevated PlGF concentrations have been observed in myocardial infarction and various cancers . Vascular endothelial growth factor signaling is mediated by three tyrosine kinase receptors: VEGFR1 (vascular endothelial growth factor receptor 1, FLT1), VEGFR2 (vascular endothelial growth factor receptor 2, KDR/FLK1), and VEGFR3 (vascular endothelial growth factor receptor 3, FLT4) . Each receptor possesses seven extracellular immunoglobulin-like domains and an intracellular tyrosine kinase domain activated upon VEGF binding. VEGFR1 exhibits a tenfold higher affinity for VEGF than VEGFR2, promoting endothelial and inflammatory cell migration during pathological angiogenesis . VEGF binding to VEGFR2 activates pathways like PLCγ/PKC, contributing to both physiological and pathological angiogenesis, along with anti-apoptotic and cell migration effects. VEGFR3 primarily influences lymphangiogenesis in embryonic development and pathological conditions, including lymphatic metastasis. These receptors display varying affinities for different VEGF proteins and isoforms: VEGFR1 binds VEGF-A, VEGF-B, and PlGF; VEGFR2 binds VEGF-A and processed VEGF-C and VEGF-D; and VEGFR3 exhibits the strongest affinity for VEGF-C and VEGF-D . 
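For readers who find it helpful, the ligand-receptor binding pattern summarized above can be captured in a small lookup table. The Python sketch below is purely illustrative: the dictionary, the function name, and the "(processed)" labels are assumptions made here for clarity and are not taken from any cited study or established library.

```python
# Minimal illustrative sketch (not from any cited study): the VEGF ligand-receptor
# binding pattern summarized in the text, encoded as a simple lookup table.
from typing import Dict, Set

# Receptor -> ligands reported to bind it; "(processed)" marks the proteolytically
# processed forms of VEGF-C and VEGF-D that engage VEGFR2.
VEGF_RECEPTOR_LIGANDS: Dict[str, Set[str]] = {
    "VEGFR1": {"VEGF-A", "VEGF-B", "PlGF"},
    "VEGFR2": {"VEGF-A", "VEGF-C (processed)", "VEGF-D (processed)"},
    "VEGFR3": {"VEGF-C", "VEGF-D"},
}

def receptors_for(ligand: str) -> Set[str]:
    """Return the receptors that a given ligand is reported to engage."""
    return {
        receptor
        for receptor, ligands in VEGF_RECEPTOR_LIGANDS.items()
        if any(ligand in entry for entry in ligands)
    }

if __name__ == "__main__":
    print(receptors_for("VEGF-A"))  # {'VEGFR1', 'VEGFR2'}
    print(receptors_for("VEGF-C"))  # {'VEGFR2', 'VEGFR3'}
```

Such a table only restates the binding specificities given in the text; differences in binding affinity, co-receptor involvement and isoform behavior are not represented.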
Among these receptors, VEGFR2 is the primary mediator of VEGF signaling in endothelial cells, playing a crucial role in VEGF-induced endothelial functions . Heparan sulfate proteoglycans and the neuropilin 1 and 2 (NRP1/2) co-receptors are essential for proper VEGF-VEGFR interactions . NRPs, found on immune, cancer, and endothelial cells, enhance VEGF signaling and promote angiogenesis upon VEGF binding . Interestingly, NRPs can also contribute to tumor formation independently of VEGF. Heparan sulfate proteoglycans interact with various signaling pathways and play a significant role in key steps of carcinogenesis, including tumor cell migration, anti-apoptotic activity, metastasis, and angiogenesis . In malignant gliomas, VEGF-A is the most important glycoprotein secreted during angiogenesis, acting as a central player in tumor biology . High VEGF expression is observed in necrotic areas of GBM, induced by HIF-1α under hypoxic conditions . GBM formation involves the induction of VEGFR-1 in endothelial cells, while malignant progression requires the coordinated function of both VEGFR-1 and VEGFR-2 . Upon binding to its receptors on endothelial cells, VEGF triggers the secretion of matrix metalloproteinases, which degrade the extracellular matrix, facilitating endothelial cell migration and proliferation . Due to their significant role in tumor development, VEGF and its receptors have become key targets for anti-cancer therapy.
Given the highly vascularized nature of gliomas, targeting angiogenesis seems to represent an effective treatment strategy. Numerous anti-angiogenic agents ( and ) are either currently used in GBM treatment or are approved for other malignancies and are being explored for their potential use in GBM; these include aflibercept, axitinib, bevacizumab, cediranib, dovitinib, pazopanib, ramucirumab, and sunitinib, each discussed in turn below. Sunitinib is an oral multi-kinase inhibitor targeting several key receptors involved in tumor growth and angiogenesis. Its targets include vascular endothelial growth factor receptors, platelet-derived growth factor receptors, stem cell factor receptors, RET oncogene tyrosine kinase, FMS-like tyrosine kinase 3, and colony-stimulating factor-1 receptor (CSF-1R) . Its intranasal delivery has been explored as a method to bypass the blood–brain barrier and increase drug concentration in the brain, showing promising results in preclinical studies, with greater tumor growth reduction and less systemic toxicity compared with oral administration . Intranasal (IN) and oral (OR) administration of sunitinib demonstrated tumor growth reduction in GBM-bearing rats, confirmed by MRI (magnetic resonance imaging) scans showing decreased tumor size compared with control groups . Both IN and OR delivery effectively inhibited angiogenesis in GBM, indicating sunitinib’s ability to target tumor blood vessel formation . Importantly, IN delivery resulted in less hepatotoxicity than OR administration, suggesting a safer alternative with reduced liver toxicity . These findings support the potential of IN delivery as a non-invasive method to bypass the blood–brain barrier, offering a potentially more effective and safer approach for treating GBM by directly targeting the brain with reduced systemic side effects. Research by Linde et al. indicates that although intratumoral sunitinib concentrations (median 1.9 μmol/L, range 1.0–3.4 μmol/L) exceed plasma levels, they remain below the median in vitro inhibitory concentration (5.4 μmol/L, range 3.0–8.5 μmol/L) required for significant tumor cell growth inhibition . This suggests that the limited efficacy of sunitinib in GBM may be due to insufficient intratumoral concentrations, highlighting the need for alternative dosing strategies. The STELLAR trial, investigating high-dose intermittent sunitinib for recurrent GBM, was terminated early due to futility . The trial compared two sunitinib regimens (300 mg once weekly and 700 mg every two weeks) against standard lomustine therapy . Results showed no significant difference in median progression-free survival (mPFS) between the sunitinib groups (1.5 and 1.4 months, respectively) and the lomustine group (1.5 months) . Similarly, median overall survival (mOS) was comparable across groups, ranging from 4.7 to 6.8 months . Only 8% of patients in both sunitinib groups were progression-free at six months, compared with 15% in the lomustine group . Both treatments were generally well tolerated, with maximal grade 3 toxicities observed in 8% of sunitinib patients and 15% of lomustine patients . The study concluded that high-dose intermittent sunitinib did not improve outcomes compared with lomustine, highlighting the need for more effective recurrent GBM treatments. 
Overall, while alternative delivery methods and combination therapies offer potential avenues for enhancing sunitinib’s effectiveness, further research is needed to overcome the challenges posed by the blood–brain barrier and optimize treatment strategies for GBM patients.
Aflibercept, a recombinant fusion protein designed to bind and neutralize vascular endothelial growth factor and placental growth factor, key players in tumor angiogenesis, has emerged as a potential treatment strategy for GBM . Studies indicate that aflibercept administration leads to a substantial decrease in circulating VEGF levels within 24 h, a reduction that correlates with positive radiographic responses, particularly in patients with recurrent GBM . This suggests a direct link between VEGF suppression and tumor response. Furthermore, research has identified potential biomarkers for predicting aflibercept efficacy. Patients who respond well to aflibercept tend to exhibit elevated baseline levels of specific chemokines, including CTACK/CCL27 (cutaneous T-cell-attracting chemokine/chemokine C-C motif ligand 27), MCP3/CCL7 (monocyte-chemotactic protein 3/chemokine C-C motif ligand 7), MIF (macrophage migration inhibitory factor), and IP-10/CXCL10 (interferon gamma-induced protein 10/C-X-C motif chemokine ligand 10) . Additionally, a decrease in VEGFR1+ monocytes following treatment is associated with improved patient outcomes . These findings highlight the potential for personalized treatment approaches based on individual biomarker profiles. The clinical application of aflibercept is not without challenges. Its safety and toxicity profile is complex, requiring careful consideration. When administered in combination with radiation and temozolomide, the maximum tolerated dose of aflibercept is 4 mg/kg every two weeks . Dose-limiting toxicities, including deep vein thrombosis and neutropenia, have been observed at higher doses. Moreover, changes in cytokine levels, such as IL-6 (interleukin 6), IL-10 (interleukin 10), and IL-13 (interleukin 13), are associated with treatment-related toxicities, notably fatigue and endothelial dysfunction . These adverse effects can be significant and often lead to treatment discontinuation, underscoring the need for close patient monitoring and management of potential side effects. While aflibercept holds promise for managing GBM, its use is complicated by the potential for significant toxicities and the need for careful patient monitoring. Identifying and validating biomarkers for both efficacy and toxicity could pave the way for more personalized treatment strategies, potentially improving outcomes and minimizing adverse effects.
Axitinib, a VEGFR tyrosine kinase inhibitor, has shown promise in preclinical GBM models, particularly by targeting GBM stem-like cells and tumor vasculature . In preclinical animal studies, high-dose axitinib treatment effectively reduced tumor blood vessel density, increased immune cell infiltration, and caused significant tumor cell death . These anti-tumor effects were linked to axitinib’s inhibition of the PDGFR/ERK signaling pathway, which plays a crucial role in tumor growth and survival . Despite the preclinical promise, axitinib’s efficacy in GBM remains limited by factors like blood–brain barrier penetration and tumor heterogeneity . Future research should focus on optimizing combinations and identifying biomarkers for patient selection.
Bevacizumab (BEV) is a recombinant humanized monoclonal antibody that specifically targets vascular endothelial growth factor, a key regulator of angiogenesis . Classified as an anti-angiogenic agent, bevacizumab inhibits the formation of new blood vessels, a process crucial for tumor growth and metastasis. Its mechanism of action involves binding to circulating VEGF, thereby preventing its interaction with VEGF receptors on the surface of endothelial cells . This blockade disrupts the signaling cascade that normally leads to endothelial cell proliferation, migration, and the formation of new blood vessels. Consequently, tumor growth is suppressed due to the deprivation of oxygen and nutrients supplied by the newly formed vasculature. Bevacizumab is clinically used in the treatment of various cancers, including GBM, colorectal cancer, non-small cell lung cancer, and renal cell carcinoma . In GBM, bevacizumab targets the abnormal tumor vasculature, although its efficacy can be variable. Studies have revealed a correlation between VEGFA expression levels in GBM tissues and treatment response to bevacizumab. Specifically, high VEGFA expression has been associated with improved progression-free survival after bevacizumab treatment . Statistical analyses have demonstrated a significant difference in PFS (Progression-Free Survival) between patients with high and low VEGF-A expression, with high expressers experiencing a PFS of 10 months compared to 4 months for low expressers . This suggests that patients with higher VEGF-A levels may derive greater benefit from bevacizumab therapy, experiencing longer periods without disease progression. Consequently, VEGF-A holds promise as a potential biomarker for predicting bevacizumab treatment response, enabling personalized treatment strategies and potentially improving outcomes for GBM patients. Melhem et al. investigated the efficacy of low-dose versus standard-dose bevacizumab in recurrent GBM (rGBM) patients. The LD (low dose) regimen (5 mg/kg every 2–3 weeks or 10 mg/kg every 3 weeks) demonstrated significantly improved outcomes compared with the SD (standard dose) regimen (10 mg/kg every 2 weeks) . Patients receiving LD BEV experienced a longer median progression-free survival (5.89 vs. 3.22 months) and overall survival (10.23 vs. 6.28 months) . The LD regimen was associated with a 2.67-fold reduction in disease progression likelihood and a 2.56-fold improvement in survival chances . Notably, the LD group had a higher median age (62 vs. 54 years) yet still exhibited superior outcomes . While adverse events like fatigue, arthralgia, and hypertension were more common in the LD group, serious adverse events leading to treatment discontinuation were rare . These findings suggest that LD BEV offers a survival benefit and potential cost-effectiveness, potentially broadening treatment access. A study analyzing data from 106 GBM IDH-wildtype patients, 39 of whom received bevacizumab as a second-line treatment, found a significant difference in median survival from tumor progression based on tumor vascularity . Patients with moderate vascular tumors treated with bevacizumab lived a median of 305 days longer than those without second-line treatment, while those with high vascular tumors lived only 173 days longer . Patients with moderate vascularity showed better responses to bevacizumab, with a higher proportion surviving at 6, 12, 18, and 24 months post-progression . 
The study proposes rCBV (relative Cerebral Blood Volume) max, with a threshold of 7.5, as a predictive biomarker for bevacizumab benefit, potentially improving personalized treatment decisions. The efficacy of bevacizumab is limited, necessitating further therapeutic exploration. The latest research by Lai et al. has shown a correlation between bevacizumab and ROCK2 (Rho-associated coiled-coil forming kinase-2). Rho-associated kinase 2, a component of the Rho/ROCK (Rho-kinase/Rho-associated coiled-coil forming kinase) signaling pathway, regulates cellular processes like motility, migration, and proliferation, making it a potential therapeutic target in cancer . Inhibiting ROCK2 has been shown to enhance bevacizumab’s effects in GBM by reducing GBM cell viability and migration, primarily through the RhoA/ROCK2 (Rho-kinase-A/Rho-associated coiled-coil forming kinase) pathway, leading to increased apoptosis . Additionally, ROCK2 inhibition reduces angiogenesis and the degradation of cellular matrix-related cytokines, crucial processes in tumor growth and metastasis . This synergistic effect of ROCK2 inhibition with bevacizumab presents a promising strategy for improving GBM treatment outcomes. The latest meta-analysis of phase II and III randomized controlled trials indicated that bevacizumab significantly improves progression-free survival in GBM patients but does not prolong overall survival (OS) . It is effective in both first-line and second-line treatments and shows improved PFS regardless of MGMT (O6-methylguanine-DNA methyltransferase) methylation status when combined with temozolomide . However, the use of bevacizumab is associated with increased risks of hypertension, proteinuria, thromboembolic events, and infections, necessitating careful monitoring, particularly for hypertension . Interestingly, the development of hypertension in GBM patients receiving bevacizumab has been associated with improved overall survival, suggesting a potential role as a biomarker for treatment response . It is hypothesized that hypertension may influence the interplay between tumor cells and the perivascular niche, a specialized microenvironment surrounding blood vessels . This altered interaction could potentially impact tumor invasion and growth dynamics. Increased blood pressure may reshape the perivascular microenvironment by modifying vascular permeability, blood flow, and the extracellular matrix composition. These alterations could consequently impact tumor cell migration, proliferation, and survival, ultimately influencing disease progression. Bevacizumab’s primary clinical benefit, which contributes to its widespread use as the most commonly employed anti-angiogenic agent in the treatment of recurrent GBM, is its capacity to alleviate brain edema. It effectively relieves symptoms of radiation brain necrosis, improving the Karnofsky performance status and enhancing brain necrosis imaging . By binding to VEGF and preventing its interaction with endothelial cell receptors, bevacizumab reduces vascular permeability and brain edema . Its long half-life and convenient administration are advantageous. However, bevacizumab only addresses symptoms from new vessel formation around necrotic areas, and recurrence is possible due to the reactivation of the HIF-1α/VEGF cycle . While recurrence due to excessive vessel pruning and ischemia has been observed, bevacizumab resistance in this context is not yet conclusively reported . 
Further clinical data are needed to refine indications, optimize protocols, and address resistance and recurrence while emphasizing preventative strategies through careful radiotherapy dose management.
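To make the imaging cut-off mentioned above more concrete, the sketch below shows how a threshold on rCBVmax (7.5, as proposed in the cited study) could in principle be used to stratify patients into moderate- and high-vascularity groups and compare median survival from progression. The patient values are synthetic and the snippet is not the cited study's analysis pipeline.

```python
# Illustrative sketch only: applying an rCBVmax cut-off (7.5, as proposed above)
# to stratify recurrent GBM patients and compare median survival from progression.
# All values below are made up for demonstration purposes.
import statistics

THRESHOLD_RCBV_MAX = 7.5  # cut-off separating moderate from high vascularity

# (rCBVmax, survival from progression in days) -- synthetic example values
patients = [
    (4.2, 410), (6.8, 350), (7.1, 298), (5.5, 512),    # moderate vascularity
    (9.3, 160), (11.0, 142), (8.4, 201), (10.2, 175),  # high vascularity
]

moderate = [days for rcbv, days in patients if rcbv <= THRESHOLD_RCBV_MAX]
high = [days for rcbv, days in patients if rcbv > THRESHOLD_RCBV_MAX]

print(f"Median survival, moderate vascularity: {statistics.median(moderate):.0f} days")
print(f"Median survival, high vascularity: {statistics.median(high):.0f} days")
```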
Cediranib, an oral pan-VEGF receptor tyrosine kinase inhibitor, has shown promise in treating GBM, particularly in newly diagnosed cases . Its mechanism of action involves normalizing tumor vasculature, reducing endothelial proliferation, and maintaining blood–brain barrier integrity . A study comparing recurrent GBMs (rGBM) treated with cediranib to those without anti-angiogenic therapy revealed several key findings . Cediranib treatment led to decreased endothelial proliferation, reduced glomeruloid vessels, and blood vessel diameters/perimeters comparable to healthy brain tissue . Notably, even after cediranib discontinuation, no revascularization or rebound angiogenesis was observed, with tumor endothelial cells expressing blood–brain barrier markers . Cediranib also altered rGBM growth patterns, showing lower central tumor cellularity gradually decreasing towards the infiltrating edge, distinct from post-chemoradiation patterns . However, treated tumors exhibited high PDGF-C and c-Met expression, along with significant myeloid cell infiltration, potentially contributing to anti-VEGF resistance . These findings suggest that rGBMs adapt their growth following anti-VEGF therapy, exhibiting decreased cellularity, reduced necrosis, and normalized vasculature without rebound angiogenesis, as well as potential resistance mechanisms. The NRG/RTOG 0837 trial, a randomized, double-blind, placebo-controlled phase II study, investigated the efficacy of cediranib in newly diagnosed GBM patients . Patients were randomized 2:1 to receive either cediranib (20 mg) or placebo, alongside standard radiation and temozolomide . Of the 158 randomized patients, 137 were eligible and evaluable for the primary endpoint of 6-month progression-free survival . Results showed a significant improvement in 6-month PFS in the cediranib group (46.6%) compared with the placebo group (24.5%), with a p -value of 0.005 . However, this improvement in PFS did not translate to a significant difference in overall survival between the two groups . Furthermore, the cediranib group experienced a higher rate of grade 3 or greater adverse events ( p = 0.02) . While cediranib demonstrated a benefit in short-term disease control, it did not improve overall survival and was associated with increased toxicity . In a study of 31 recurrent GBM patients treated with cediranib, researchers sought early predictive biomarkers for anti-angiogenic therapy response . Using advanced MRI, changes in vascular permeability ( K trans), microvessel volume, and circulating collagen IV levels were assessed after a single cediranib dose . Changes in these parameters after just one day correlated significantly ( p < 0.05) with both overall and progression-free survival . A “vascular normalization index”, combining these three parameters, demonstrated a strong correlation with overall survival (ρ = 0.54, p = 0.004) and progression-free survival (ρ = 0.6, p = 0.001) . This index offers promise as a mechanistic biomarker for predicting survival outcomes in patients receiving anti-VEGF therapy for recurrent GBM, pending validation in randomized clinical trials. While cediranib’s impact on PFS is encouraging, further research is needed to address its lack of effect on overall survival, manage adverse events, and overcome resistance mechanisms, potentially through combination therapies.
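The “vascular normalization index” described above combines three early post-treatment changes into a single biomarker that is then rank-correlated with survival. The sketch below illustrates that type of analysis on synthetic data; the simple signed-sum used to build the index here is an assumption made for illustration and is not the published definition of the index.

```python
# Illustrative sketch: rank-correlating a composite "vascular normalization index"
# with overall survival, in the spirit of the biomarker analysis described above.
# The index construction and all data below are synthetic assumptions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 30

# Day-1 changes after treatment (synthetic): Ktrans, microvessel volume,
# and circulating collagen IV typically decrease with vascular normalization.
d_ktrans = rng.normal(-0.2, 0.1, n)
d_microvessel = rng.normal(-0.1, 0.05, n)
d_collagen_iv = rng.normal(-0.15, 0.08, n)

# Hypothetical composite index: larger values mean more "normalization".
normalization_index = -(d_ktrans + d_microvessel + d_collagen_iv)

# Synthetic overall survival (months), loosely tied to the index plus noise.
overall_survival = 6 + 4 * normalization_index + rng.normal(0, 1.5, n)

rho, p_value = spearmanr(normalization_index, overall_survival)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```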
Dovitinib, a multi-kinase inhibitor targeting FGFR, VEGFR, PDGFRβ, and c-Kit (stem cell factor receptor), has been investigated as a potential treatment for GBM, especially in recurrent cases . Its ability to cross the blood–brain barrier and target multiple pathways crucial to GBM development makes it a theoretically attractive option. However, clinical trials have yielded mixed results. A phase II clinical trial investigated the efficacy of dovitinib in patients with recurrent GBM . The trial comprised two arms: Arm 1 included patients without prior anti-angiogenic therapy, and Arm 2 consisted of patients who had progressed after such therapy . The primary endpoint for Arm 1, 6-month progression-free survival, was a mere 12% ± 6%, indicating that only a small fraction of patients in this group remained progression-free six months post-treatment . Arm 2’s primary endpoint, time to progression, was similar to Arm 1 at a median of 1.8 months, suggesting limited treatment efficacy in both groups . Overall, the study concluded that dovitinib failed to significantly prolong progression-free survival in recurrent GBM patients, regardless of prior anti-angiogenic treatment experience . The majority of patients (70%) experienced disease progression, and 94% had died by the final follow-up, with a median overall survival of 5.6 months . Toxicity was substantial, with 15% of patients experiencing severe (grade 4) toxicities and 67% experiencing moderate (grade 3) toxicities, including lipid abnormalities, elevated liver enzymes, thrombocytopenia, fatigue, and diarrhea . While the study explored biomarker changes, no significant inter-arm differences were observed . However, elevated baseline levels of certain biomarkers, such as BMP 9 (Bone Morphogenetic Protein 9), CD73 (cluster of differentiation 73), and VEGF D, correlated with poorer progression-free survival . Overall, the study indicated limited efficacy for recurrent GBM. A phase I trial of dovitinib in 12 patients with recurrent GBM (post-radiotherapy and temozolomide) showed that the drug was generally safe and tolerable at 300 mg . Common adverse events were fatigue, elevated liver enzymes, diarrhea, and discomfort. . Severe toxicity (grade 3 or higher) occurred in a minority of patients (16.7%), mainly involving liver enzyme increases and hematotoxicity . However, efficacy was limited; no complete or partial responses were observed . Median progression-free survival was 1.8 months, and median overall survival was 9.5 months . Biomarker analysis suggested a possible reason for the limited efficacy: all tested patients had FGFR3 wild-type but lacked the FGFR3-TACC fusion protein . The trial concluded that dovitinib at 300 mg is safe but not particularly effective in unselected recurrent GBM patients, and future research should explore a personalized approach based on tumor tissue expression of the drug’s target proteins. Dovitinib’s mechanism of action involves downregulating the stem cell protein Lin28 and its target HMGA2 (High-Mobility Group Protein A2), affecting the STAT3/LIN28/Let-7/HMGA2 regulatory axis in GBM cells . This downregulation reduces tumor sphere formation and enhances temozolomide’s efficacy by impairing DNA (Deoxyribonucleic Acid) repair mechanisms, suggesting a potential combination therapy strategy . However, dovitinib’s clinical use is hampered by significant toxicities, including hepatotoxicity and hematotoxicity . 
Combining dovitinib with temozolomide has shown increased GBM cell apoptosis and reduced viability in preclinical studies, but this approach also carries increased toxicity risks . While dovitinib holds promise, its clinical efficacy in GBM remains limited, and further research is needed to optimize dosing, patient selection, and combination strategies to improve outcomes and manage toxicity.
Pazopanib, an anti-angiogenic tyrosine kinase inhibitor of VEGFR, PDGFR, and c-KIT, is also being investigated as a potential GBM treatment. North American Brain Tumor Consortium Study 06-02, phase II trial, evaluated pazopanib’s efficacy and safety in 35 recurrent GBM patients (median age 53) . Patients received 800 mg/day pazopanib until disease progression or unacceptable toxicity . The primary endpoint, PFS6 (progression free survival at 6 months), was achieved in only one patient (3%) . The median PFS was 12 weeks, and the median OS was 35 weeks . Two patients experienced partial responses, while nine showed some tumor reduction but not enough for partial response classification . Pazopanib was generally well tolerated, with common adverse events including hypertension, fatigue, and elevated liver enzymes . Four patients discontinued treatment due to severe side effects . The study concluded that pazopanib did not significantly prolong PFS, though some biological activity was observed . Compared with other treatments, like bevacizumab, pazopanib’s results were less favorable, suggesting it may not be effective for recurrent GBM at the tested dose. In the last couple of years, pazopanib has been studied in combination with other drugs. Its effect was associated with primarily grade 1–2 adverse events, including hypertension, increased ALT (alanine transaminase), asthenia, nausea, diarrhea, thrombocytopenia, neutropenia, and anemia . While pazopanib shows potential, its efficacy remains limited, and challenges such as drug delivery and adverse effects need to be addressed. The ongoing PAZOGLIO trial (phase I/II study evaluating pazopanib combined with temozolomide in newly diagnosed GBM patients following the Stupp protocol) and exploration of other combinations, such as with PARP inhibitors, may offer further insights and alternative strategies for GBM treatment [ , , ].
Ramucirumab, a fully humanized monoclonal antibody, specifically targets VEGFR-2, a key receptor in angiogenesis . By binding to VEGFR-2, ramucirumab blocks the binding of VEGF ligands, thereby inhibiting the activation of downstream signaling pathways responsible for endothelial cell proliferation and migration, essential processes for angiogenesis and tumor growth . This mechanism aims to restrict tumor blood supply, potentially limiting growth and metastasis . While promising in other cancers, ramucirumab’s efficacy in GBM is still under investigation. The blood–brain barrier poses a significant challenge for drug delivery . Imaging biomarkers VEGF and PDGF receptors were explored to assess ramucirumab’s early biological effects and identify potential therapeutic targets in recurrent GBM . VEGFR-2 and PDGFR were identified as significant targets due to their overexpression in GBM . However, further validation is needed. A non-randomized phase II clinical trial, NCT00895180, investigated the efficacy of ramucirumab in recurrent GBM patients. In this study, ramucirumab was compared to a monoclonal antibody targeting PDGFR . Results showed that ramucirumab offered marginally improved progression-free survival and overall survival compared to the PDGFR inhibitor . The adverse event profiles of the two treatments were similar . While these findings suggest a potential benefit for ramucirumab in recurrent GBM, the non-randomized nature of the trial limits the strength of the conclusions. Ramucirumab is generally well-tolerated, but potential adverse events like fatigue and neutropenia warrant consideration . Challenges for ramucirumab’s application in GBM include effective delivery across the blood–brain barrier, potentially marginal benefits compared with existing therapies, and cost-effectiveness . Future research should focus on optimizing delivery methods and identifying specific patient subsets who may derive the most benefit from this therapy.
Combination therapy in GBM angiogenesis utilizes multiple therapeutic agents to target the complex interplay of mechanisms driving tumor growth and vascularization. This approach seeks to improve treatment efficacy by addressing the shortcomings of single-agent therapies, such as drug resistance and incomplete suppression of angiogenic pathways. Various studies have investigated different drug combinations to inhibit angiogenesis in GBM, concentrating on key signaling pathways and molecular mechanisms crucial for tumor progression ( ).
Resistance to anti-angiogenic therapy in GBM presents a significant hurdle in treatment ( ). GBM employs several mechanisms to circumvent therapies designed to inhibit blood vessel growth: activation of redundant angiogenic pathways, hypoxia, heightened tumor cell invasion and metastasis, vascular mimicry, glioma stem cells, and immune microenvironment modulation.
GBM exhibits a remarkable ability to adapt to anti-angiogenic therapies by activating alternative pathways for blood vessel formation . When common therapies like bevacizumab inhibit vascular endothelial growth factor, GBM can upregulate other pro-angiogenic factors . Specifically, angiopoietin-2 (Ang-2), placental growth factor (PlGF), and ephrin A2 (EFNA2) have been observed at increased levels in GBM tissues resistant to bevacizumab . This activation of alternative angiogenic pathways allows GBM to continue developing new blood vessels, even when VEGF signaling is suppressed, contributing to treatment resistance and continued tumor growth .
Hypoxia, a characteristic of the tumor microenvironment exacerbated by anti-angiogenic therapies, plays a significant role in GBM resistance [ , , ]. Under low oxygen conditions, tumor cells activate survival pathways, notably the PI3K/Akt pathway, enabling continued growth and division . Hypoxia-inducible factors are stabilized and activated, promoting angiogenesis by upregulating pro-angiogenic factors like VEGF, counteracting the effects of anti-angiogenic therapies . Additionally, hypoxia induces metabolic reprogramming, allowing the tumor to utilize alternative energy sources and maintain growth despite a reduced blood supply . Furthermore, hypoxia can increase tumor aggressiveness, enhancing invasive and metastatic potential, thereby complicating treatment strategies.
When anti-angiogenic therapies restrict the tumor’s blood supply, GBM cells become more invasive, seeking out new vessels and spreading aggressively within the brain . While GBM typically doesn’t metastasize outside the central nervous system, this heightened invasiveness leads to more widespread infiltration, complicating treatment . Tumor cells adapt by altering cell adhesion properties and expressing enzymes that degrade the extracellular matrix, facilitating movement through tissue barriers . Furthermore, the tumor microenvironment changes in response to therapy, creating conditions that favor invasion and metastasis, including alterations in the extracellular matrix and interactions with other cell types .
Vascular mimicry (VM) is a process where tumor cells form vessel-like channels independent of traditional angiogenesis, providing an alternative blood supply . In GBM, these channels, formed by tumor cells rather than endothelial cells, allow the tumor to receive nutrients and oxygen, especially when anti-angiogenic therapies are employed . This process represents a key resistance mechanism, enabling tumor growth and survival despite therapies targeting traditional blood vessel formation . Glioma stem-like cells contribute to vascular mimicry in GBM by transdifferentiating into endothelial-like cells, forming vessel-like channels that bypass traditional angiogenesis . This process is facilitated by pathways such as ATM serine/threonine kinase, which is also implicated in chemoradiotherapy resistance . Furthermore, the transcription factor FOXC2 promotes VM by driving the expression of endothelial genes in tumor cells, a process amplified by hypoxia within the tumor microenvironment . This hypoxia-induced VM formation further enhances resistance to anti-angiogenic therapies .
Glioma stem cells play a critical role in GBM’s resistance to anti-angiogenic therapies . GSCs can differentiate into pericytes, supporting the survival of endothelial cells crucial for maintaining tumor blood vessels . Furthermore, GSCs promote an autocrine VEGF-A signaling pathway, which helps sustain the tumor’s blood supply, even when targeted by therapies . GSCs are also involved in vessel co-option, allowing the tumor to hijack existing blood vessels and bypass the need for new vessel formation . Under hypoxic conditions, GSCs utilize autophagy to survive, adapting to the stress induced by treatment . These combined mechanisms demonstrate how GSCs contribute significantly to GBM’s resistance to therapies aimed at disrupting its blood supply, making them a key target for future treatment strategies.
GBM tumor microenvironment is a complex interplay of cellular and non-cellular components, including glioma cells, glioma stem cells, immune cells (macrophages, microglia), and the extracellular matrix . These components dynamically interact, influencing tumor growth, progression, and therapeutic resistance [ , , ]. Tumor-associated macrophages and myeloid-derived suppressor cells contribute to the immunosuppressive TME (tumor microenvironment) by secreting pro-angiogenic factors like VEGFA, promoting angiogenesis and tumor growth, thereby hindering anti-angiogenic therapies . Hypoxia within the TME further exacerbates immunosuppression by upregulating hypoxia-inducible factors, which polarize myeloid cells towards a suppressive phenotype, inhibiting effective immune responses . GBM develops resistance to anti-angiogenic therapies through redundant pro-angiogenic pathways, increased tumor cell invasion, and the formation of vasculogenic mimicry channels, which bypass traditional blood vessels . The “cold” immune environment, characterized by a high pro-tumor to anti-tumor immune cell infiltrate ratio, further contributes to resistance, particularly against immunotherapy . Granulocyte-rich environments exacerbate this immunosuppression by promoting a suppressive phenotype in microglia and macrophages, hindering therapies targeting VEGF . Overcoming resistance to anti-angiogenic therapy in GBM is a significant challenge due to the tumor’s complex biology and adaptive mechanisms. While initially promising, therapies targeting VEGF often encounter resistance, limiting long-term efficacy. Several strategies aim to address this, including combination therapies (e.g., bevacizumab with SB431542), targeting alternative pathways (e.g., MNK-eIF4E axis with tomivosertib), personalized medicine based on individual tumor profiles, and nanoparticle drug delivery for improved precision ( ) [ , , ]. Emerging perspectives emphasize the importance of multi-target approaches, including natural products and novel drug combinations, as well as a deeper understanding of the tumor microenvironment, particularly immune modulation and macrophage activity, to develop more effective and sustainable strategies to overcome resistance in GBM .
Anti-angiogenic therapy initially held promise for GBM due to the tumor’s highly vascular nature. However, resistance mechanisms limit long-term efficacy and overall survival benefits. GBM tumors frequently activate alternative pro-angiogenic pathways, bypassing the effects of agents like bevacizumab. Hypoxia induced by anti-angiogenic therapy paradoxically promotes tumor invasion and metastasis. The tumor microenvironment’s immune components are also modulated, potentially reducing treatment effectiveness. While anti-angiogenic therapy improves progression-free survival, it does not significantly impact overall survival. This suggests that while disease progression may be delayed, the ultimate trajectory remains unchanged. Furthermore, combining anti-angiogenic therapy with chemoradiotherapy increases adverse events, particularly hematologic ones. Current research focuses on combination therapies with immunotherapy or personalized medicine, alternative dosing regimens, and next-generation anti-angiogenic agents like small interfering RNAs and nanoparticles to overcome resistance and improve outcomes.
Cataract Surgery Practices in the Republic of Korea: A Survey of the Korean Society of Cataract and Refractive Surgery 2018 | 3b6d5838-4083-46f5-8d02-a338897fd7c5 | 6791949 | Ophthalmology[mh] | In July 2018, 32 multiple choice or open-ended questionnaires were sent via e-mail to 801 members of the KSCRS. The questionnaire used in this study was based on a previous KSCRS survey with some modifications to identify the latest and changing surgical trends, such as newly developed types of multifocal IOLs and biometry equipment or femtosecond laser-assisted cataract surgery (FLACS) . The questionnaire consisted of three major categories: surgeon demographics, cataract surgery-general, cataract surgery-skill, and related complications. Return questionnaires were not marked or labeled to maintain the confidentiality of the respondent, and no financial reward was offered. One hundred and two (12.7%) members completed the questionnaire. IBM SPSS Statistics ver. 20.0 (IBM Corp., Armonk, NY, USA) was used for statistical analysis.
Results are reported for surgeon demographics, cataract surgery in general, biometry for axial length measurements, anesthesia, surgical technique, IOL use, postoperative management, and complications.
Regarding complications, ninety-four percent of the respondents (from 80% in 2012) observed posterior capsular rupture during surgery in less than 5% of cases, and 71% observed posterior capsular rupture during surgery in less than 1% of cases. More than half (54%) of the respondents did not observe patients with postoperative endophthalmitis, 23% observed postoperative endophthalmitis once, and 23% observed postoperative endophthalmitis in less than five cases. Fifty-eight percent of those who observed postoperative endophthalmitis estimated that the occurrence was less than 0.1%. More than half (52%) of the respondents observed severe postoperative uveitis, and 63% of these respondents estimated that the occurrence was less than 0.1% of cases. The occurrences of other complications such as toxic anterior segment syndrome with severe endothelial cell damage, intraocular pressure spikes requiring medication, clinically significant pseudophakic cystoid macular edema, retinal detachment, dysphotopsia, IOL sulcus fixation, and unplanned aphakia are shown in . More than half (53%) of the respondents did not observe IOL subluxation or dislocation in uncomplicated cases over the past 10 years, 16% observed this complication once, 31% observed this complication in less than five cases, and 1% observed this complication in 10 cases or more. Among secondary IOL implantation techniques, conventional scleral fixation with suture material was used by 76% of the respondents; the Hoffmann technique was used by 10%; the iris-fixated IOL technique was used by 10%; and the glued intrascleral haptic fixation was used by 2%.
Most respondents were 30 to 49 years of age (88%), male (78%), and had more than 6 years of surgical experience (75%). Most respondents also worked for university hospitals (48%), followed by private clinics (31.4%), and eye hospitals (15%). The respondent demographics are shown in .
The average monthly volume of cataract surgeries performed by KSCRS members was 31 cases. Seventy-two percent performed 1 to 5 post refractive surgeries per month, 5% performed 6 to 10 surgeries, 2% performed >10 surgeries, and 21% did not perform post refractive surgery. These surgeons preferred phacoemulsification (95%) and 5% used a femtosecond laser. None of the members performed planned extracapsular cataract extraction. Surgeons were assisted during the procedures by residents (41%), nurses (16%), and nursing assistants (15%). The surgical instruments were managed and sterilized by nurses (83%) and nursing assistants (15%). Ninety-four percent of the respondents selected the IOL power, and 4% of the respondents had the residents perform this task.
Optical biometry was used by 92% of the respondents for IOL calculations, and ultrasound biometry was used by 73% of the respondents. Seventy-eight percent of respondents used IOLmaster (Carl Zeiss Meditech, Jena, Germany), 73% used ultrasound A-scan biometry, 14% used Lenstar (Haag-Streit, Koeniz, Switzerland), and 8% used ALscan (Nidek, Gamamori, Japan). Thirty-four percent of the respondents used single equipment, 23% used an IOLmaster, and 8% used an ultrasound A-scan. Sixty-six percent used more than two instruments; the most common combination was ultrasound A-scan biometry and the IOLmaster (44%).
Topical anesthesia was used by 80% of the respondents (from 69% in 2012). Subtenon anesthesia was used by 14% of the respondents (from 17% in 2012), and retrobulbar block was used by 5% (from 10% in 2012) ( ) .
Temporal clear corneal incisions were used by 50% of the respondents, followed by a steep axis incision (15%). The temporal site was the preferred incision meridian (57%), with little variation since 2006 ( ) . A cataract incision size of 2.8 mm was used by 64% of the respondents (from 61% in 2012), and a size of 2.2 mm was used by 31% (from 24% in 2012) . Among ophthalmic viscoelastic devices, HEALON (Johnson & Johnson, New Brunswick, NJ, USA) was used by 65% of the respondents; HEALON GV (Johnson & Johnson) was used by 10%; Viscoat (Alcon Laboratories, Fort Worth, TX, USA) was used by 7%; and HEALON 5 (Johnson & Johnson) was used by 3%. Forceps were used by 68% of the respondents for continuous curvilinear capsulorhexis, a bent needle was used by 24%, and a femtosecond laser was used by 2%. Capsular polishing was performed by 95% of the respondents during surgery; 5% of the respondents did not use this technique. Forty-eight percent of the respondents used both anterior and posterior capsules, while 44% and 3% of the respondents used only an anterior capsule or posterior capsule, respectively.
In 2001, an acrylic IOL was the preferred optic material, exceeding half of the total number of IOLs used ( ) . In 2018, acrylic was preferred by 98% of the respondents and silicone by 2%; poly (methyl methacrylate) was not used by any respondent (0%). Among the total number of cataract surgeries, 64% of the respondents used a toric IOL in less than 5% of cases, and 84% used multifocal IOLs in less than 10% of cases ( ). Twenty-four percent of the respondents did not use multifocal IOLs; 66% of the respondents treated less than 10 cases per month, while 8% treated 10–50 cases per month. Eighty-seven percent of the respondents also used multifocal IOLs for patients who had previously undergone corneal refractive surgery, and these constituted less than 30% of the total cataract surgery cases. Eighty-two percent of the respondents using multifocal IOLs had been using bifocal IOLs, 48% had been using trifocal IOLs, and 30% had been using both types of IOLs. The types of preferred multifocal IOLs are shown in .
Forty-seven percent of the respondents did not use oral antibiotics postoperatively, and 31% used oral antibiotics postoperatively for less than 3 days. Forty-one percent of the respondents (from 77% in 2012) prescribed antibiotics for 3 to 7 days. Topical nonsteroidal anti-inflammatory drugs (NSAIDs) were used together with topical steroids by 60% of the respondents, 10% used topical NSAIDs alone, and 30% did not use topical NSAIDs. More than half (53%) of the respondents used topical antibiotics and steroids for 1 month, and 26% of the respondents continued to use topical antibiotics and steroids longer than 1 month. In a similar manner, 52% of the respondents used topical antibiotics and NSAIDs for 1 month postoperatively. Twenty percent of the respondents finished follow-up visits at 1 month postoperatively, 30% finished visits at 3 months, 15% finished visits at 6 months, and 26% continued to visit every year. Most respondents (72%) prescribed reading glasses at 1 month postoperatively.
This survey provides a summary of current practices of KSCRS members performing cataract surgery and describes the changing trends in clinical practice. Most of the respondents (75%) had been in practice for 6 or more years and had performed an average of 31 cataract surgeries per month. There was a moderate increase in the number of respondents treating 16 to 50 cases per month from 2012 (55%) to 2018 (62%), and over 51 cases from 2012 (7%) to 2018 (11%). There was a decreased number of respondents treating 6–15 cases from 2012 (33%) to 2018 (18%). This survey involved the latest practice of KSCRS members in cataract surgery; some questions in the survey were modified from their original version used in previous surveys. Some questionnaire items such as FLACS, cataract surgery after refractive surgery, and NSAID eye drops were added, and the categories of biometry equipment, incisions, continuous curvilinear capsulorhexis, multifocal IOLs, toric IOLs, and postoperative management were refined . Because no surgeon performed planned extracapsular cataract extraction and the majority (95%) of respondents used phacoemulsification, there was a marked increase in the number of respondents who used topical anesthesia, from 69% (2012) to 80% (2018), and a decreased use of retrobulbar block, from 10% to 5% . These changes also seemed to affect the preference for smaller incision sizes. The first report of FLACS was included in the current survey. In the 2012 survey, 32% of the respondents expressed an interest in FLACS; however, it was not used in practice. Only 5% of the respondents in this survey performed FLACS, which showed that FLACS still had a limited role in the Republic of Korea. Possible explanations include the high cost of purchase and operation, as well as the lack of clear advantages of FLACS over traditional phacoemulsification . According to the 2018 American Society of Cataract and Refractive Surgery clinical survey, only an average of 8% of cataract patients were treated using FLACS. USA respondents performed FLACS more often, with 10% of cataract patients in the USA treated with FLACS vs. 6% of non-USA patients . Another survey from the Canadian Ophthalmological Society (COS) reported that the use of FLACS had increased from 2014 (8%) to 2015 (18.9%), then started to decline in 2016 (17.4%) and in 2017 (11.8%) . However, we cannot predict changes in the use of FLACS from the results of this survey alone, and additional studies are needed. There was a marked increase in the number of respondents who were using optical biometry. Optical biometry was used by 92% of the respondents and ultrasound biometry by 73% of the respondents. For example, the use of IOLmaster increased from 56% in 2012 to 78% in 2018. As new optical biometry equipment has been introduced with a newer generation of IOL calculation formulas, optical biometry has partially replaced conventional ultrasound A-scan biometry. Optical biometry is convenient to use, reproducible, and has an installed calculation formula. Several favorable results in cases implanted with multifocal or toric IOLs seem to be related to its increasing popularity. There was a marked increase in the number of respondents who were using multifocal IOLs, from 44% (2012) to 76% (2018). This was higher than the COS report of approximately 50% in 2017; two-thirds was the proportion reported by the American Society of Cataract and Refractive Surgery clinical survey in 2018 . The current survey was also the first to assess the use of NSAID eye drops.
These were used by 70% of the respondents postoperatively, and the majority prescribed eye drops for 4 weeks (59%). This was similar to the COS report, which showed that 75.9% of the respondents prescribed NSAIDS and 52.4% prescribed these drops for 4 weeks . This survey has several limitations. Response bias could have affected the results although there were few open-ended questions. The questionnaire primarily involved multiple choice questions without the option of responses that were not listed. Because the response represented only 12.7% of the KSCRS members (20.6% in 2012), the results may not represent the opinions and practices of the majority of ophthalmologists in the Republic of Korea. In summary, this study provides a comprehensive update of the present cataract surgery practices of KSCRS members. The results emphasize the growing role of premium IOLs, optical biometry, and the use of topical anesthesia. A follow-up survey concerning the use of FLACS, post refractive cataract surgery, the use of premium IOLs, and changes in postoperative medications is required.
Variability in morphology and immunohistochemistry of Crohn’s disease-associated small bowel neoplasms: implications of Claudin 18 and Cadherin 17 expression for tumor-targeted immunotherapies | f52ed15b-81d6-42f9-bb18-02062891be23 | 11950054 | Biochemistry[mh] | Crohn’s disease (CD) and ulcerative colitis (UC) are the two most common forms of inflammatory bowel disease (IBD). Patients with IBD have an increased lifetime risk of developing colorectal adenocarcinoma (CRC). The molecular pathogenesis of colitis-associated carcinoma (CAC) is different than that of sporadic CRC, suggested that genomic changes linked to the effects of continuous inflammation and repeated mucosal injury in the setting of IBD . Although recent studies have shown that the risk of IBD patients developing CRC has decreased, probably as a result of better treatment and endoscopic surveillance [ – ], small bowel carcinomas in CD are still more likely to be found at an advanced stage since endoscopic surveillance is not standard for small bowel and it is often clinically difficult to distinguish strictures caused by inflammation from carcinoma. Two studies reported the histology of CD-associated small bowel adenocarcinomas ; however, detailed histological analysis based on the current classification of IBD-associated dysplasia or adenocarcinoma have not been performed. Cadherin-17 (aka liver-intestine cadherin) (CDH17) is a member of the cadherin superfamily and is a Ca2 + -dependent cell–cell adhesion molecule that is selectively expressed on enterocytes and goblet cells in the small and large bowel in human and mouse . Several studies described CDH17 expression in adenocarcinoma of the digestive system , and it is considered a useful biomarker of adenocarcinomas with intestinal phenotype . Recently, Feng et al. reported that CDH17 is an ideal target for chimeric antigen receptor T-cells (CAR-T) therapy for gastrointestinal carcinoma . Claudins constitute a multigene transmembrane protein family of tight junctions that regulate paracellular transport and lateral diffusion of membrane lipids and proteins . Claudin 18 (CLDN18) is a member of the CLDN family of cell surface proteins and CLDN18 isoform 2 (CLDN18.2) is normally expressed only in the stomach; Sahin et al. reported that CLDN18.2 is activated in a wide range of human malignancies, especially gastric, esophageal, and pancreatic adenocarcinoma . Based on this, zolbetuximab, a targeted monoclonal antibody, was developed for patients with CLDN18.2-positive gastroesophageal adenocarcinoma [ – ] and CLDN18.2-specific CAR-T therapy has been recently developed . We have previously identified a higher rate of expression of CLDN18.2 in colitis-associated colorectal carcinomas with loss of intestinal markers such as SATB2, and the findings suggested colitis-associated colorectal carcinomas are promising candidates for CLDN18.2 targeted therapy . In this study, we evaluate the histological characteristics of CD-associated small bowel dysplasia/adenocarcinoma and investigate the therapeutic potential of CDH17 and CLDN18 for tumor-targeted immunotherapies, and also whether expression of both CDH17 and CLDN18 are related to gastric differentiation using gastric MUC immunostains, MUC5AC, and MUC6.
The methods comprised case selection, immunohistochemistry, and statistical analysis. For statistics, chi-squared or Fisher's exact tests were used to characterize the relationship between categorical variables; differences were considered significant at P < 0.05.
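As a minimal illustrative sketch of the statistical comparison described above (chi-squared or Fisher's exact test on categorical variables, significance at P < 0.05), the snippet below uses a hypothetical 2 × 2 contingency table; the counts and group labels are placeholders, not data from this study.

```python
# Hypothetical 2x2 contingency table: marker-positive/negative vs. two lesion groups.
from scipy.stats import chi2_contingency, fisher_exact

table = [[10, 4],   # marker-positive: group A, group B (placeholder counts)
         [3, 8]]    # marker-negative: group A, group B (placeholder counts)

odds_ratio, p_fisher = fisher_exact(table)              # preferred for small counts
chi2, p_chi2, dof, expected = chi2_contingency(table)   # chi-squared alternative

alpha = 0.05
print(f"Fisher exact: p = {p_fisher:.3f} (significant: {p_fisher < alpha})")
print(f"Chi-squared:  p = {p_chi2:.3f} (significant: {p_chi2 < alpha})")
```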
Study approval was obtained from the research ethics board at Shinshu University (5359, 22 November 2021) and Tokyo Yamate Medical Center (J-155, 7 September 2022). Twenty-five consecutive lesions of surgically resected CD-SBN from 15 patients between 2012 and 2021 were retrieved from the surgical pathology archives at Tokyo Yamate Medical Center. Hematoxylin and Eosin (H&E) sections were reviewed by three gastrointestinal pathologists (MI, HO, and RR). Neoplastic lesions were classified as dysplasia or adenocarcinoma; in one case, dysplastic glands invaded only into the muscularis mucosae without obvious submucosal invasion, and this case was classified as intramucosal carcinoma (pTis). A total of 14 adenocarcinomas and 11 dysplasias were evaluated in this study.
At least one representative paraffin block of tumor was selected in each case for immunohistochemistry; if there was significant morphologic heterogeneity in a given case, multiple tumor blocks were selected as needed to adequately represent the entire tumor. Immunohistochemical staining was performed using commercially available antibodies with the immuno-enzyme polymer method (Histofine Simple Stain MAX PO Multi (Nichirei Biosciences, Tokyo, Japan) for MUC2, MUC5AC, MUC6, and SATB2, or Novolink Polymer Detection Systems (Leica, Wetzlar, Germany) for CDH17, CLDN18, and beta-catenin) with 3,3′-diaminobenzidine as the chromogen, or an automated slide preparation system (p53: VENTANA BenchMark ULTRA, Roche, Basel, Switzerland). The following primary antibodies were used in accordance with the manufacturers' instructions: CDH17 (clone: SP183; Cell Marque, Rocklin, CA, USA), CLDN18 (clone: EPR19203; Abcam, Cambridge, UK), MUC2 (clone CCP58, Agilent, Santa Clara, CA, USA), MUC5AC (clone: CLH2; Agilent), MUC6 (clone: CLH5; Novus Biologicals, Centennial, CO, USA), SATB2 (clone: EPNCIR130A; Abcam), beta-catenin (clone: EP35, Cell Marque) and p53 (clone: DO7, Agilent). Microsatellite-instability testing by immunohistochemistry for mismatch repair proteins (MMRs) (MLH1 (clone: M1, Roche), MSH2 (clone: G219-1129, Roche), MSH6 (clone: SP93, Roche), and PMS2 (clone: A16-4, Roche)) was conducted on an automated slide preparation system (VENTANA BenchMark ULTRA, Roche). The extent of staining for CDH17, CLDN18, MUC2, MUC5AC, and MUC6 was scored semiquantitatively (no staining; < 10%; 10–25%; 26–50%; 51–75%; and 76–100%), and the maximum intensity was graded as negative, weak, moderate, or strong. For binary analyses, cases with 10% or more tumor cells showing moderate or strong intensity were considered positive (Supplemental Fig. ). For CDH17 and CLDN18, cases with ≥ 40% or ≥ 75% of tumor cells showing moderate or strong intensity were also noted, as these thresholds met the participation criteria of the CLDN18 clinical trials . To evaluate any possible correlation of wnt pathway mutations with CDH17 or CLDN18 changes, beta-catenin staining was classified as membranous expression or nuclear expression. Expression of p53 was classified as wild type (variable weak to moderate staining) or mutant type (either diffuse strong staining or complete absence of staining). To assess any relationship with the mismatch repair proteins MLH1, MSH2, MSH6, and PMS2, retained expression was defined as nuclear staining of any intensity within tumor cells, using infiltrating lymphocytes as a positive internal control. Deficient mismatch repair protein expression was defined as complete loss of expression of at least one of the 4 mismatch repair proteins. Two of the authors (MI and HO) reviewed the immunohistochemical stains and reached a consensus score.
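For illustration only, the binary scoring rule described above can be encoded as follows; the function and variable names are ours, and the thresholds simply restate the criteria in the text (≥ 10% of tumor cells with moderate or strong staining = positive, with additional ≥ 40% and ≥ 75% extent flags).

```python
# Illustrative encoding of the immunohistochemical scoring rule described above.
# 'extent' = percentage of tumor cells showing moderate or strong staining.

def score_marker(extent: float) -> dict:
    """Return binary positivity and the additional extent flags used for CDH17/CLDN18."""
    return {
        "positive": extent >= 10,      # >= 10% moderate/strong intensity
        "extent_ge_40": extent >= 40,  # meets the >= 40% extent criterion
        "extent_ge_75": extent >= 75,  # meets the >= 75% (trial-style) criterion
    }

# Example: 55% of tumor cells with moderate/strong membranous staining
print(score_marker(55))  # {'positive': True, 'extent_ge_40': True, 'extent_ge_75': False}
```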
Results are presented for the group characteristics of the patients, the pathologic features and immunohistochemistry, the histology of adenocarcinoma, the histology of dysplasia, and the immunohistochemical findings.
Regarding group characteristics, of the fifteen patients, 12 (80%) had one carcinoma and 1 (7%) had two carcinomas, for a total of 14 adenocarcinomas. Eleven of 14 (79%) lesions were found in the ileum and 1 (7%) lesion was found in the jejunum, while in 2 (14%) lesions the precise location within the small bowel was not stated. Eleven of 14 (79%) lesions showed at least muscularis propria invasion, and 5 (45%) of them were classified as pT4. Lymph node dissection was performed in 3 cases and no metastasis was identified. Eight patients with adenocarcinoma had synchronous dysplasia. One patient had 2 foci of dysplasia and 7 patients had 1 focus; 5 foci were adjacent to adenocarcinoma. Two of 15 (13%) patients had 1 focus of dysplasia only; in total, 11 dysplasias were identified. The median age of the included patients was 50 years (range: 29–71 years); 11 patients were male and 4 were female.
None of the cases showed conventional colorectal carcinoma morphology, which is characterized by cribriform glands composed of epithelium with stratified, long oval nuclei and occasional intraluminal necrosis. Thirteen of 14 (93%) cases showed similar morphology; invasive glands were composed of epithelium with cuboidal nuclei and abundant dense eosinophilic cytoplasm, and nuclear stratification was not frequently seen. The features were similar to tubular adenocarcinoma of the stomach (Fig. ); thus, we decided to subclassify tumor morphology according to the WHO classification of gastric cancer . Two of these cases combined different morphologies, such as signet ring cell or poorly cohesive cellular histological components, and were therefore subclassified as mixed adenocarcinoma (Fig. ). The remaining case had mucinous and poorly cohesive cellular histological components and was subclassified as mixed adenocarcinoma.
Dysplasias showed morphology similar to that of IBD-associated dysplasia in the large bowel and were classifiable into conventional (adenomatous) and non-conventional morphology (24, 25). Four of 11 (36%) dysplasias showed conventional morphology, and three of them were adjacent to carcinoma. Seven (64%) lesions were subclassified as non-conventional dysplasia: one lesion showed serrated morphology, and the others showed terminally differentiated morphology, with overlapping mucin-depleted morphology occasionally observed (Fig. ). A summary of the clinicopathological features of the cohort is shown in Table .
CDH17 expression was retained in thirteen of 14 (93%) CD-associated adenocarcinomas; ≥ 40% extent expression was seen in 12 cases (86%), and 9 of them showed ≥ 75% extent expression. Eight of 14 (57%) CD-associated adenocarcinomas were positive for CLDN18; 5 (36%) lesions showed ≥ 40% extent expression and 2 (14%) of them showed ≥ 75% extent expression. Seven of 13 (54%) CDH17-positive CD-associated adenocarcinomas were positive for CLDN18 (Fig. ). In dysplasias, ten of 11 (91%) cases were positive for CDH17 and 6 of 11 (55%) were positive for CLDN18 (Table ). CDH17 was positive in 23 of 25 (93%) CD-SBNs. Of the CDH17-positive CD-SBNs, 61% were positive for MUC5AC, 57% were positive for MUC6, and 57% were positive for CLDN18. Fourteen of 25 (56%) CD-SBNs were positive for CLDN18; CLDN18-positive CD-SBNs showed significantly more MUC5AC and MUC6 expression than CLDN18-negative CD-SBNs ( P = 0.005 and P < 0.001, respectively). Two cases of superficially invasive carcinoma showed surface-predominant MUC5AC expression and deep-layer MUC6 expression, similar to the expression pattern seen in normal gastric mucosa (Supplemental Fig. ). There was no significant difference in CDH17, MUC2, or beta-catenin immunoprofile, or in the p53 mutant ratio, between CLDN18-positive and CLDN18-negative CD-SBNs. Three of 25 (12%) CD-SBNs (1 conventional type invasive carcinoma and 2 conventional type dysplasias; 1 dysplasia was adjacent to a SATB2-positive invasive carcinoma) were SATB2 positive; all SATB2-positive CD-SBNs showed CDH17 expression, whereas no CLDN18 expression was identified. All lesions were MMR proficient (Table ).
Here, we evaluated the tumor morphology and immunohistochemical expression of CDH17 and CLDN18 in 25 CD-SBNs and their potential relationships with gastric differentiation as indicated by MUC5AC and MUC6 immunohistochemistry, wnt pathway mutations, and mismatch repair gene protein immunohistochemistry. Our results showed that, regardless of adenocarcinoma or dysplasia, CDH17 expression was retained in most CD-SBNs, and CLDN18 expression was seen in 56% of CD-SBNs with a strong association with the expression of gastric mucins. No association was identified between p53, beta-catenin, or MMR status and CDH17 or CLDN18 expression. CD-associated small bowel adenocarcinomas were morphologically more similar to gastric carcinomas than to colorectal carcinomas; however, dysplasia morphology was similar to that seen in the colitic mucosa. We also found that some CD-SBNs showed expression of SATB2, which is normally expressed in the large intestinal epithelium and is considered a relatively specific marker for colorectal adenocarcinomas . CDH17 is a member of the cadherin superfamily which is selectively expressed in the epithelial cells of the small and large bowel and is considered a useful biomarker of adenocarcinomas with intestinal phenotype [ – ]. Su et al. reported that diffuse CDH17 expression was seen in 96% of colorectal carcinomas and 56% of gastric adenocarcinomas; however, they noted most gastric cases showed focal or scattered staining patterns . Interestingly, we found that 92% of CD-SBNs were CDH17 positive and 91% of them showed ≥ 40% extent with moderate or strong membranous expression, and approximately half of them co-expressed gastric-type mucins, such as MUC5AC and MUC6, frequently together with CLDN18. The findings suggest that CD-SBNs retain an intestinal phenotype with which gastric differentiation can co-exist. We also found that 57% of CD-associated adenocarcinomas showed CLDN18 expression, a ratio higher than that of colitis-associated colorectal carcinomas in our previous study . In that study, we used the same immunohistochemical antibody and cutoff criteria and demonstrated that CLDN18 expression was seen in 27% of colitis-associated colorectal carcinomas, with an association with MUC5AC expression but no significant association with MUC6 status . The positive ratio of CLDN18 was also higher than that in the study on small bowel adenocarcinomas recently reported by Arpa et al. They noted that 28% of small bowel adenocarcinomas were immunoreactive for CLDN18, with a positive correlation with MUC5AC expression, using cutoff values of ≥ 1% at any intensity. Those findings might have been affected by the background disease, since their cohort included not only CD-associated adenocarcinomas but also sporadic and celiac disease-associated cases. In our study, 3 CD-SBNs (12%) showed SATB2 expression. Similarly infrequent expression of SATB2 in CD-associated small bowel adenocarcinomas was reported by Neri et al. . They found 20% of small bowel adenocarcinomas showed SATB2 expression; however, the positive ratio was lower in CD-associated adenocarcinomas (12%) than in sporadic or celiac disease-associated adenocarcinomas, suggesting CD-associated small bowel adenocarcinomas are less likely to have large bowel differentiation. Whitcomb et al. reported that CD-associated small bowel adenocarcinomas were more likely to show MUC5AC and MUC6 expression than sporadic small bowel adenocarcinomas . In clinical practice, pyloric gland metaplasia and gastric foveolar metaplasia are frequently seen in the small bowel of CD patients .
Previous studies showed frequent MUC5AC and CK7 expression in the small bowel mucosa of CD patients and frequent MUC5AC expression in the colonic mucosa of IBD patients . Although the precise molecular mechanism remains undefined, aberrant expression of CLDN18 and gastric-type mucins in the setting of CD might be linked to the effects of ongoing inflammation and repeated mucosal injury. Initially, we attempted to classify adenocarcinomas into one of five morphological subtypes (conventional, mucinous, serrated, low-grade tubuloglandular (LGTG), and others) which we previously used in IBD-associated colorectal carcinomas [ , , ]; however, small bowel CD-associated adenocarcinomas were morphologically different from IBD-associated colorectal carcinomas, and none of our cases showed conventional colorectal morphology. Most of our cases showed morphology similar to tubular adenocarcinoma of the stomach, as in the previous study . Considering the gastric-type immunoprofile and morphology, adenocarcinoma of the small bowel is more likely to show pronounced gastric differentiation than adenocarcinoma of the large bowel in IBD. We also evaluated the immunophenotype and morphology of CD-associated small bowel dysplasias and found that CD-associated small bowel dysplasia showed frequent CLDN18 and gastric-type mucin expression, with rare SATB2 expression, and both conventional (adenomatous) and non-conventional morphology, as has been described in IBD-associated dysplasia in the large bowel . In this cohort, 64% of cases were subclassified as non-conventional dysplasia, and we found serrated, terminally differentiated and mucin-depleted morphology. This finding is in accordance with the previous study by Simpson et al., who reported dysplasias in the CD-associated small bowel and described their histology as follows: "adenomatous," "saw-tooth or serrated pattern," and "dysplastic Paneth's cells and basal cell change," and noted that the features were similar to dysplasias seen in UC patients . These findings indicated that CD-associated small bowel dysplasia shows an immunoprofile similar to that of adenocarcinomas: frequent CLDN18 and gastric-type mucin expression, with rare SATB2 expression. However, dysplasias shared a similar histology with colitic dysplasia, whereas adenocarcinomas were histologically similar to gastric adenocarcinomas. Due to the small number of cases examined in our study, further studies in a larger cohort are needed for detailed histological evaluation of CD-associated small bowel dysplasias. By finding retained CDH17 expression and frequent CLDN18 expression, our study indicates the possibility of widened treatment options for CD-associated small bowel carcinoma patients. Feng et al. recently reported that they developed a llama-derived nanobody, VHH1-driven CAR-Ts targeting CDH17, and demonstrated that the VHH1-CAR-T cells (CDH17CAR-Ts) eradicated CDH17-expressing neuroendocrine tumors and gastrointestinal cancers such as gastric, pancreatic, and colorectal cancers in tumor xenograft or autochthonous mouse models. They noted CDH17CAR-T did not cause histological damage in normal intestinal cells . Although further investigation is warranted for clinical implementation, most CD-associated small bowel adenocarcinomas may be candidates for CDH17 CAR-T therapy.
Furthermore, we previously reported frequent immunohistochemical CLDN18 (clone: EPR19203) expression in colitis-associated colorectal carcinomas and confirmed that CLDN18-positive colorectal carcinomas only expressed CLDN18.2 by RT-PCR . The finding of frequent expression of CLDN18 in CD-associated small bowel carcinomas has implications for targeted anti-claudin 18.2 antibody therapy such as zolbetuximab. Recently, a phase 3 trial of zolbetuximab in patients with CLDN18.2-positive (defined as ≥ 75% of tumor cells showing moderate or strong membranous CLDN18 staining), HER2-negative, locally advanced unresectable or metastatic gastric or gastroesophageal junction adenocarcinoma (ClinicalTrials.gov Identifier: SPOTLIGHT; NCT03504397) resulted in significantly prolonged progression-free survival and overall survival . We found that 14% of CD-associated small bowel adenocarcinomas matched the participation criteria of the trial, and these findings indicate that zolbetuximab therapy might be effective for a subset of CD-associated small bowel carcinomas, which are still more likely to be found at an advanced stage. In conclusion, we demonstrated that CDH17 expression was frequently retained even though approximately half of CDH17-positive CD-SBNs showed gastric mucin expression, and that CLDN18 was frequently co-expressed. CLDN18 expression had a positive correlation with the expression of gastric mucins. CD-associated small bowel adenocarcinoma histology was different from that of colitis-associated colorectal carcinomas; however, a subset of small bowel dysplasias was morphologically similar to that of the large bowel. These results suggest that CD-associated small bowel adenocarcinomas may be candidates for CDH17- and CLDN18-targeted immunotherapies.
Below is the link to the electronic supplementary material. Supplementary file1 Figure 1. Immunohistochemical scoring of Cadherin 17 and Claudin 18 expression. Examples of tumors scored as having absent (a), weak (b), moderate (c), or strong (d) membranous expression are shown (Claudin 18 immunostain) (TIF 15298 KB) Supplementary file2 Figure 2. Histology of superficially invasive adenocarcinoma (a), high power view (b). MUC5AC expression is seen mainly in the superficial to middle layers of the lamina propria (c), and MUC6 expression is seen mainly in the middle to deep layers of the lamina propria (d). Cadherin 17 expression pattern is similar to that of MUC5AC (e), and Claudin 18 expression is similar to that of MUC6 (f) (TIF 21874 KB)
Prevention and management of radiotherapy-related toxicities in gynecological malignancies. Position paper on behalf of AIRO (Italian Association of Radiotherapy and Clinical Oncology) | 46448a64-9642-43cd-9768-55a11e26095b | 11379782 | Internal Medicine[mh] | Today’s multi-modal therapies for gynecological cancers management including surgery, chemotherapy (CHT), external beam radiotherapy (EBRT) and interventional radiotherapy (IR), also called brachytherapy, may determine a wide range of underestimated side effects , the development of which depends on therapy-related factors such as radiation therapy (RT) modality and dose, and patient characteristics and comorbidities. Pelvic RT, in the curative or adjuvant setting, is linked with acute and late toxicity due to irradiation of organs at risk (OARs), such as the small and large bowel, rectum, bladder, and femoral heads, and can cause detrimental effects on health and long-term quality of life (QoL) . More recently further toxicities emerged, as hematological, due to the widespread use of concomitant chemoradiation (CRT), and pelvic bone and vaginal side effects . All adverse side effects are scored on specific international scales according to severity of symptoms or clinical evidence, which may vary from minimal to very serious, and even compromise the patient’s survival. The “radiation therapy oncology group” (RTOG) scale and the “common terminology criteria for adverse events” (CTCAE) system were designed to assess acute and late side effects. The subjective, objective, management, analytic/late effects normal tissue task force (SOMA/LENT) scale evaluates only late side effects. QoL questionnaires are often used to subjectively assess patients’ symptoms in relation to their daily life . Successful toxicity management varies with its severity, Radiation Centre practice and the experience and skills of the radiation oncologists which may be limited by a lack of physician education .The present position paper was designed by the Italian Association of Radiation and Clinical Oncology Gynecology Study Group ( AIRO Gyn) to provide radiation oncologists with evidence-based strategies to prevent and manage acute and chronic toxicities and follow-up recommendations for patients with gynecological cancers who underwent RT. With AIRO Steering Committee endorsement, 6 workgroups of radiation oncologists, each including physicians with over 5 years of experience in gynecologic cancer, were setup to investigate early and late RT-related toxicities in the bowel (AB, AP, EG, JDM, AF), rectum (SC, EM, CM, ADA, PF), bladder (FT, RL, GC, AS), bone (EP, CA, MPP, VE), blood (FT, RL, GC, AS), and vagina (MC, VDS, FT, CL) after adjuvant or curative EBRT, with or without BT and/or CHT. The choice of taking part to each group was based on the preference and interest of the single specialists in the specific field of investigation; each group was established during the preparatory meeting. For each topic, PubMed database was searched for relevant English language papers published from January 2005 to December 2023. Search strategy included the following keywords: “cervical cancer*” OR “cervical neoplasm*” OR “cervix cancer*” OR “cervix neoplasm*” OR “uterine cancer*” OR “uterine neoplasm*” OR “vaginal cancer*” OR “vaginal neoplasm*” OR “vulva* cancer*” OR “vulva* neoplasm*” OR “endometrial cancer*” OR “endometrial neoplasm*” OR “ovarian cancer*” OR “ovarian neoplasm*” OR “Genital Neoplasms, Female” [Mesh]. 
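Purely as an illustration of how such a boolean search string can be assembled programmatically (the term list below is abbreviated from the one reported above, and applying the January 2005–December 2023 and English-language limits is left to PubMed's own filters rather than spelled out in query syntax):

```python
# Illustrative assembly of the boolean PubMed search string described above
# (abbreviated term list; date and language limits would be applied as PubMed filters).
terms = [
    '"cervical cancer*"', '"cervix neoplasm*"', '"uterine cancer*"',
    '"endometrial cancer*"', '"vaginal cancer*"', '"vulva* cancer*"',
    '"ovarian cancer*"', '"Genital Neoplasms, Female"[Mesh]',
]
query = " OR ".join(terms)
print(query)
```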
An example of the search strategy, referring to bone toxicity, is shown in Table . Titles and abstracts of literature search results were checked to verify suitability for the document. Reference lists of selected studies and review papers were manually searched for additional pertinent publications. Editorials, abstracts from international meetings, and case reports/series were excluded. Results were grouped according to the topic investigated. Data on incidence, etiopathogenesis, prevention, treatment and follow-up of acute and late side effects for each OAR are presented and discussed. The following sections address bowel, rectal, urinary, bone, hematological, and vaginal toxicity, each in terms of incidence and etiopathogenesis, prevention, treatment, and follow-up.
For vaginal toxicity, regarding follow-up: during follow-up visits, attention should be reserved for vaginal and sexual symptoms reported by the patients, and active interventions by a multi-specialist team should be undertaken, if possible. A summary of the evidence is shown in Table .
For bowel toxicity, regarding incidence and etiopathogenesis: overall, small bowel toxicity develops in up to 55% of women during RT or within 3 months of it and in 15% after more than 3 months , limiting dose delivery and negatively impacting QoL . Although the etiopathogenesis of enteritis after abdominal RT is still unknown, changes in fecal microbiota have recently been hypothesized to be involved . RT induces cellular damage, cell death, and generation of reactive oxygen species, thus triggering secondary reactive inflammatory processes and immune responses. Moreover, stem cell depletion and microvascular alterations induce progressive tissue fibrosis, ischemia, and mucosal atrophy . Occurrence of enterocolitis and diarrhea was reported at the end of treatment in 51.9% of endometrial and cervical cancer patients treated with 3D conformal RT (3D-CRT) and 33.7% of patients treated with IMRT . No certain data are available on the real incidence of bowel toxicity in vulvar and vaginal cancers due to their rarity. Bowel toxicity was not reported in a large multi-institutional series of vulvar cancer patients who had received adjuvant RT with or without CHT . A few cases of acute and late toxicity, not exceeding G3, were observed in other series of adjuvant, preoperative or definitive RT in vulvar cancer patients. Only G4 skin toxicity was found . Usually occurring after 2 weeks of RT, diarrhea was related to dose per fraction and irradiated volume. Although it may be underestimated, chronic RT-related enteritis was reported in up to 20% of patients , generally from 18 months to 6 years after treatment. Most symptoms were due to alterations of the bowel vascular compartment leading to the most serious side effects, i.e., ischemia, progressive intestinal fibrosis, stenosis and/or fistulas. Etiopathogenesis of bowel toxicity is shown in Fig. .
Pharmacological and RT techniques may prevent small bowel toxicity. Several studies demonstrated that probiotics during treatment significantly reduced acute toxicity . A double-blinded study of 54 patients who underwent pelvic RT assessed probiotics against placebo . During EBRT and in the three weeks afterward, episodes of diarrhea and abdominal pain were evaluated through interviews and questionnaires and scored on the CTCAE scale . Compared with placebo, probiotics significantly reduced not only the incidence of diarrhea (53.8 vs 82.1%, p < 0.05), but also its severity ( p < 0.05) and the need for loperamide administration ( p < 0.01) .
Furthermore, probiotics were associated with a significant difference ( p < 0.001) in grade 2 abdominal pain and in the number of daily episodes of abdominal pain . Other studies reported similar results, linking probiotics with a significant difference in use of loperamide (32% vs 9%) . Nutritional supplements based on Zinc, Prebiotics, Probiotics and Vitamins , amifostine and the oral CXCR4 Inhibitor X4-136 were also reported to be useful in patients treated with pelvic RT not only for cervical and endometrial cancer but also for anal and rectal cancer . Small bowel toxicity is reduced by modern RT techniques such as IMRT, volumetric modulated arc therapy (VMAT), tomotherapy and proton beam RT . On the other hand, changes in setup positions (supine vs prone) yielded discordant results . Hoover et al. found that visceral adipose-corrected bowel bag dosimetric constraints correlated better with acute bowel toxicity than the current standard practice of considering V45 cc and V40% . Using image-guided radiotherapy (IGRT), Xin et al. evaluated setup errors and their effects on acute bowel toxicity and treatment efficacy in 170 cervical cancer patients who underwent IMRT ± IGRT. Response rates were similar in both groups, but IGRT significantly corrected and reduced setup errors during treatment and enhanced the accuracy of dosage distribution within OARs (such as targeted regions), thus reducing RT-related toxicity . Park et al. found that bladder filling, associated with the use of personalized immobilization devices and the adoption of the prone position in 3D-CRT, displaced the small bowel continuously away from the irradiated field in cervical cancer patients. Adopting these precautions decreases the amount of intestine exposed to radiation and consequently can decrease the frequency and severity of onset of side effects . Small bowel toxicity may have an impact on treatment compliance, requiring symptomatic therapy when necessary. Treatment of acute small bowel toxicity can require probiotics to restore intestinal microbiota, loperamide and dietary counseling, bearing in mind that nutrient malabsorption may occur as a late side effect . During follow-up, all patients should be evaluated to assess late toxicity for early intervention by a specialist multidisciplinary team (e.g., gastroenterologist, nutritionist, surgeon). Patients recovering from initial complications remain at risk of late and persistent adverse events . A summary of the evidence is shown in Table .
For rectal toxicity, regarding incidence and etiopathogenesis: RT-related proctitis, a common complication of pelvic RT, is due to the rectal proximity to pelvic organs and its fixed position . Although the incidence is not clear, due to a lack of consensus on its definition and reporting methodologies, large irradiated volume, RT dose (< 45 Gy or above 70 Gy) and older RT technique (3D-CRT vs IMRT) are generally agreed to be risk factors . Acute RT-related proctitis occurs almost immediately after starting RT and lasts for up to 3 months. It is an inflammatory process affecting the superficial mucosa and its symptoms usually include diarrhea, cramps, tenesmus, urgency, mucus discharge, and minor bleeding, which typically resolve spontaneously following completion of treatment . Even though chronic RT-related proctitis may begin during the acute phase of radiation proctitis, symptoms may not become apparent until a median of 8–12 months after completing RT .
It is histologically characterized by arteriole endarteritis, submucosal connective tissue fibrosis and neoangiogenesis followed by telangiectasias . Bleeding is the most common symptom; strictures, perforation, fistula and rectal obstruction may also occur . In some cases, loss of distensibility, due to rectal wall fibrosis, results in tenesmus or defecation difficulties. Etiopathogenesis of rectal toxicity is shown in Fig. . Rectal toxicity should be prevented because it may interrupt treatment, limit the delivered RT dose with a consequent reduction in treatment efficacy and worsen the patient’s QoL . Prevention should begin by assessing the individual patient’s risk profile bearing in mind that comorbidities, such as diabetes mellitus, vascular disease, arterial hypertension, atherosclerosis, inflammatory bowel disease, collagen disease, and HIV infection, are associated with increased risk of toxicity . RT-related rectal toxicity is reduced by decreasing the dose delivered to the rectum and by adopting strategies that modulate cellular and tissue responses to RT, thus reducing radiosensitivity . Several trials demonstrated that IMRT was associated with less rectal toxicity than 3D-CRT . A prospective, phase III trial was conducted on 234 patients with cervical or endometrial cancer who were randomized to post-operative RT with IMRT or 3D-CRT. IMRT was associated with significantly fewer episodes of diarrhea and fecal incontinence . The Post-operative Adjuvant Radiation in Cervical Cancer (PARCER) phase III randomized trial, which compared late toxicity in women with cervical cancer undergoing post-operative RT with IGRT-IMRT or 3D-CRT, demonstrated that IGRT-IMRT significantly reduced late toxicity with no difference in disease outcomes . Although clinical target volume-planning target volume (CTV-PTV) margin shrinkage might reduce RT-related toxicity, too narrow margins could increase the risk of geographic miss, especially with IMRT/VMAT techniques with highly conformal doses to the target volume . IGRT reduces the risks of target miss and/or OARs overdose during RT delivery . The role of cone-beam computed tomography (CT) was evaluated in 170 patients with cervical cancer to check setup errors and their effects on acute toxicity and RT efficacy. The results showed it corrected and reduced setup errors, improved dose distribution accuracy in the target area and OARs, significantly reduced toxicity and improved efficacy . Even though prone and supine positions were not associated with any differences in dosimetry and rectal toxicity with IMRT, the supine position is preferred because of fewer setup uncertainties and greater patient stability during treatment . Several drugs have been used to prevent RT-related toxicity by modulating the radiosensitivity of normal tissues . Administered intravenously, subcutaneously or intrarectally (the most effective route) , amifostine exerts radioprotective efficacy through diverse complex and not fully understood molecular and cellular processes, which are hypothesized to include free-radical scavenging, DNA protection, DNA repair acceleration, and induction of cellular hypoxia . It may up-regulate the expression of proteins that repair DNA and inhibit apoptosis through Bcl-2 and hypoxia-inducible factor-1α . Several small, single-center controlled trials suggested that amifostine may reduce acute gastrointestinal toxicity during pelvic RT, while there does not appear to be any meaning reduction in late morbidity. 
Thus, despite many studies, which a recent review judged to be at high risk of bias due to methodological limitations and very uncertain evidence, amifostine has not been associated with a sufficient reduction in side effects to satisfy FDA regulatory requirements . The present position concurs with the MASCC panel's recommendation that cytoprotective agents like sucralfate, non-steroidal anti-inflammatory agents like balsalazide and mesalazine, and prostaglandin analogs like misoprostol should not be the treatment of choice to prevent radiation-induced proctitis, due to conflicting evidence on their efficacy . Grade 1/2 proctitis responds to topical anti-inflammatory products, such as sulfasalazine or mesalazine alone or combined with steroids . Hyperbaric oxygen, which induces neo-vascularization, tissue re-oxygenation, collagen neo-deposition and fibroblast proliferation, elicited responses in the majority of patients with soft tissue necrosis or chronic proctitis . A review found that hyperbaric oxygen therapy may improve outcomes, but further studies are necessary to establish correct patient selection . Potassium titanyl phosphate, Argon and YAG lasers are used to treat superficial injuries . Repeated applications of argon plasma coagulation resolved 80–90% of cases with chronic proctitis and bleeding . Anal or rectal pain occurred in 20% of cases and resolved spontaneously, while severe complications like hemorrhage, necrosis and perforation occurred in 10% of cases . Two or three sessions of radio-frequency ablation provided hemostasis without severe complications . Cryoablation yielded excellent results but is not in widespread use . Refractory proctitis requires surgery leading to colostomy or exenteration. Sigmoidoscopy is recommended for investigating patient-reported bleeding or evidence of occult fecal blood . Summary of evidence is shown in Table .

Urinary toxicity
In addition to the clinical examination, an accurate anamnesis guides the specialist in any further investigation with instrumental tests for urinary tract dysfunction. Bladder cystitis and bleeding may reach a peak prevalence rate at about 30 months, after which prevalence rates fall to baseline, indicating healing . Summary of evidence is shown in Table . After pelvic RT for gynecologic malignancies, about 50% of women experience acute urinary symptoms, including dysuria, urinary frequency, nocturia, and hesitancy, which are linked to RT-induced cystitis. Urinary disturbances occur after a dose of 20 Gy to the bladder and subside 2–3 weeks after the end of treatment . The bladder and urethra frequently show signs of late radiation damage, leading to urinary sequelae like infection, discomfort, and hematuria. Reduced bladder capacity leading to frequent urination is due to damage to the bladder vasculature and smooth muscle fibers, resulting in edema, cell death and fibrosis . Bladder dysfunction, occurring many years after RT, affects the patient's QoL and includes urgency, frequency and incontinence due to high-dose bladder neck irradiation (26%), ureteral stricture or fibrosis (1–3%), hemorrhagic cystitis (5–9%), and rarely vesicovaginal and ureterovaginal fistulas . Chronic symptoms appear to be the result of vascular endothelial cell damage that develops with a latency period of 1 to 25 years. The risk of late genitourinary toxicity increased with a history of abdominal surgery, pelvic inflammatory disease, hypertension, diabetes mellitus and smoking .
Older age significantly impacted incontinence, because shorter vaginal lengths can result in higher bladder neck doses. Obesity and overweight were risk factors for incontinence and frequency . Most RT-related ureteral strictures caused by RT affect the distal portion of the ureter, and it was demonstrated that delaying the clearance of ureteral blockage increases the risk of serious long-term morbidity, including infections, kidney damage, and arterial hypertension. The risk of ureteral stricture in patients with locally advanced cervical cancer and hydronephrosis at diagnosis was 11.5% at 5 years compared 4.8% without hydronephrosis . A higher incidence of ureteral stricture was seen in patients who underwent hysterectomy or other pelvic surgeries followed by RT. In the EMBRACE investigations, however, despite 26.7% of patients having received laparoscopic staging , a link between surgery and ureteral stricture was not observed, after EBRT with or without node boost and Image-Guided Adaptive IR. Diverse urinary morbidity endpoints exhibit different temporal trends, as shown by the EMBRACE research . This suggests that a wide range of intricate physiological mechanisms develop during radiation. The exposure of various organ sub-volumes to RT, the differences in dose–effect relationships for various symptoms, the potential reversibility of some late effects, and the effective management of late effects are additional factors that influence the development of treatment-related morbidity. Etiopathogenesis of urinary toxicity is shown in Fig. . Different IMRT modalities may reduce the rate of acute and late high-grade toxicity . On the other hand, Dröge et al. reported that patients treated with VMAT experienced acute < grade 3 urinary toxicity more frequently compared with 3D-CRT, probably due to the larger amount of irradiated bladder wall . In patients with locally advanced cervical cancer, who were treated with EBRT, CHT and IR the investigators of EMBRACE Collaborative Group found ICRU bladder point (ICRU-BP) dose > 75 Gy was a stronger predictor of incontinence than bladder D2 cm 3 since it is located near the trigone, bladder neck and urethra . A ureteral dose of D0.1 cc of 23.1 Gy EQD2 is connected to a 10% chance of G3 or greater urinary toxicity . To reduce the incidence of severe urinary complications to at least 15%, a D2cm 3 ≤ 80 Gy EQD2 should be used. Dose to the bladder trigone was also predictive of severe late urinary toxicity . Guidelines for managing urinary toxicity are lacking. For acute symptoms, the workup should include urine analysis and urine culture. Low-grade urinary symptoms are managed with non-steroidal anti-inflammatory drugs, anticholinergic agents such as oxybutynin, or analgesics such as phenazopyridine. Botulinum toxin A injection into the detrusor muscle may be used when drug therapy is ineffective . Symptoms are generally self-limited, and drugs can be discontinued as symptoms improve. Treatment for hemorrhagic cystitis includes hydration, hyperbaric oxygen, clot evacuation, endoscopic fulguration and bladder irrigation with a variety of substances . Surgery should be evaluated in case of refractory disease. Infection and primary bladder malignancy must also be evaluated. Ureteral strictures, if not due to recurrent disease, are repaired with endoscopy or open surgery including percutaneous nephrostomy or ureteral stent or ileal ureteral substitution which can be challenging due to the poor vascularity and wound healing following radiation. 
Vesicovaginal fistulae, not related to disease, may require fulguration and drainage or surgery .

Bone toxicity
Surgery with ovary removal, CHT and RT may have detrimental effects on bone mineral density (BMD), leading to osteoporosis and fractures which impact on quality of life and life expectancy . The incidence of bone toxicity after RT or CRT is largely underestimated because attention has only recently focused on long-term cancer survivors . RT is hypothesized to be linked to osteoblast death and reduced activity as well as increased osteoclast activity and inflammatory cytokine release. Consequences include bone marrow adiposity, trabecular bone loss , reduced BMD, osteoporosis, and pelvic insufficiency fractures (PIF) . The incidence of PIF after RT ranges from 10 to 14% , but other studies reported incidences ranging from 3% to 37.4% , with a higher incidence in patients over 50 years old . Median time to PIF occurrence ranges from 7 to 39 months ; actuarial rates increase from 3.6% at 1 year to 15.7% at 3 years . PIF is diagnosed on evidence from X-rays, bone scans, CT scans, or magnetic resonance imaging (MRI), with MRI being the most reliable tool . The sacrum, sacroiliac joint and pubis are the most frequently affected sites ; more than one PIF can occur . About 50–70% of patients with PIF report pain . Risk factors for PIF development are age over 50 , post-menopause , low BMD at baseline and after RT , low body weight/low body mass index , osteoporosis , and high alkaline phosphatase level at baseline . RT-related parameters include treatment modality (IMRT vs 3D-CRT) and intent (curative or adjuvant), which correlate with the delivered doses . Etiopathogenesis of bone toxicity is shown in Fig. . Before RT, primary prevention of PIF is based on accurate evaluations of BMD and risk factors, particularly in postmenopausal women and in patients over 50 years old , as lower pre-treatment CT bone density was found in patients developing PIF and a global reduction of BMD was reported after RT or CRT, even though there is no consensus on whether adding CHT to RT increases the risk of PIF . When necessary, therapy should be prescribed, e.g., vitamin D, calcium, bisphosphonates and, in selected cases, hormone replacement therapy . RT-related bone toxicity should be minimized, even though to date modalities and doses have not yet been clearly defined and no dosimetric constraints are available for the pelvic bone dose to reduce the incidence of bone toxicity and PIF . In cervical cancer patients treated with curative intent, IMRT plus IR was associated with less PIF than 3D-CRT plus IR . This difference did not emerge in the adjuvant setting , due to the lower doses administered in the post-operative treatment. Controversial results were achieved when a simultaneous integrated boost (SIB) was administered by IMRT . Bazire et al. found maximum doses were significantly higher at fracture sites than in pelvic bones without PIF, while Mir et al. reported that a 60 Gy SIB did not impact fracture sites. Ramlov et al. found sacrum D50% was a significant risk factor for sacral fracture in patients over 50 years old who underwent curative RT for locally advanced cervical cancer, indicating that high doses to the total bone, and not just to a small part, can cause PIF. Indeed, reducing sacrum D50% from 40 to 35 Gy lowered the risk of sacral PIF from 45 to 22%.
Finally, to prevent PIF the recommended EBRT dose should be reduced to 45 Gy and tighter margins should be applied when contouring. Bazire et al. used an internal margin of 3 mm for pelvic bone, called bone − 3 mm, to ensure that the PTV did not extend beyond it, and reported a PIF incidence of 3% and 4% for cervical and endometrial cancers, respectively, using IMRT . A nomogram was proposed to predict the risk of sacral PIF based on age and V40 (in EQD2 with α/β = 3), which were found to be predictive factors for PIF in patients receiving adjuvant or radical RT . Management of bone toxicities and PIF requires a multidisciplinary approach. Preventive therapy for low BMD and osteoporosis should continue throughout treatment and follow-up . PIF is generally treated with non-steroidal anti-inflammatory drugs, analgesics or opioids, if necessary; treatment can take many months . Bed rest is indicated to avoid loading, followed by slow, full mobilization . Hospitalization is required in about 10% of cases, and femoral head fractures require surgery . Specific bone therapies improve PIF repair, and physiotherapy may be required . Follow-up examinations should include regular BMD assessment and drug therapy for patients at risk . Attention should be paid to patient-reported musculoskeletal symptoms, which are often overlooked as specific QoL questionnaires do not investigate RT-related bone toxicity . Imaging studies, particularly MRI, should be prescribed for symptomatic patients, taking care to differentiate PIF from metastases . Summary of evidence is shown in Table .

Hematological toxicity
Slow immune recovery and abnormal white blood cell counts at three months post-treatment and/or at the last follow-up underline the need to lower the incidence of hematological toxicity . Low lymphocyte counts persisting for one year after RT might be associated with a higher risk of decreased survival. Patients with hematological toxicity should be evaluated by a multidisciplinary team, including a hematologist. Routine analysis should include blood and biochemistry tests in addition to CT scan, abdominal ultrasound (USG), ECG, and chest X-ray. Summary of evidence is shown in Table . Due to the heterogeneity of gynecological cancers and the range of treatments (EBRT alone, IR alone, or combined, with or without CHT), no studies have defined the impact of each factor on the incidence of hematological toxicity. Several studies reported that bone marrow (BM) acts as a parallel organ and emphasized the need for sparing a threshold of its volume. Predictors contributing to hematological toxicity were: baseline white blood cells, absolute neutrophil count, hemoglobin and platelets; use of para-aortic irradiation; and body mass index. No associations were found between hematological toxicity and race, age, comorbidity, performance status, smoking history, stage, BM volume, or pre-treatment transfusions . Hematological toxicity might depend on the association of RT with a myelosuppressive CHT regimen . In the setting of CRT for various pelvic cancers, including cervical cancer , myelosuppressive CHT was identified as the primary cause of anemia, leukopenia, and neutropenia, which, together with thrombocytopenia, are common and, at times, life-threatening side effects of oncologic treatments for pelvic malignancies . Huang et al.
showed hematological toxicity grade 2 or higher in 69.5% of cervical cancer patients undergoing CRT with standard RT; while, hematological toxicity grade 2 or higher was 50% lower in patients undergoing BM sparing with IMRT . Hematological toxicity is also caused by incidental BM irradiation during pelvic nodal RT due to radiosensitivity of BM stem cells , with leukopenia, and in particular lymphopenia, being major consequences . BM composition (particularly the fat fraction) was reported to change during RT , with the decline and regeneration of active, red BM (aBM) being RT dose-dependent . Patients with a low pre-treatment aBM volume, identified by 18F-FDG-PET-CT and the technetium-99 m (Tc-99 m) sulfur colloid SPET, were more likely to develop hematological toxicity grade 3 than patients with a larger aBM volume before irradiation . aBM, half of which is located within pelvic bones and lumbar vertebrae , is highly radiosensitive as just 4 Gy reduces its volume by 50% within 1 or 2 weeks . Indeed, a dose threshold of 4 Gy, with no benefit from fractionation, was reported for BM suppression in pelvic cancer patients undergoing CRT with IMRT . Continuous lymphoid hematopoiesis within aBM , is especially vulnerable to RT . The lethal radiation dose that reduces the surviving lymphocyte fraction by 50% (LD50) is just 1.5 Gy, and the LD90 is just 3 Gy . Even though avoiding BM during RT appears to be a factor in preserving aBM and decreasing hematological toxicity , BM tolerance remains poorly understood . Moreover, BM was excluded from normal tissue dose constraint guidelines such as “the Emami table” or Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) . Furthermore, the Lyman–Kutcher–Burman model, the most widely used normal tissue complication probability (NTCP) model, does not consider BM toxicity . Etiopathogenesis of hematological toxicity is shown in Fig. . Currently, the development of effective pelvic BM sparing RT techniques is limited due to a lack of knowledge on the spatial location of BM to be saved and the required degree of sparing that is essential . In the future proton therapy may be beneficial to enable BM sparing due to its physical characteristics and ability to achieve satisfactory target dose distribution . A systematic review investigating the clinical benefit of aBM sparing in cervical cancer patients receiving CRT evidenced decreasing incidence of hematological toxicity . Since functional imaging to identify aBM by 18F-FDG-PET-CT and the technetium-99 m (Tc-99 m) sulfur colloid SPET is expensive and not commonly available, earlier studies proposed an atlas-based method for delineating the aBM in patients with cervical cancer for BM sparing IMRT . Different methods were proposed for delineating pelvic bones: delineating the external contour of all bones within the pelvis or utilizing specified CT window settings or anatomical landmarks . Several studies recommended the following dosimetric parameters for pelvic bones to reduce hematological toxicity: V10 < 75–95% , V20 < 65–80% , and V40 < 28–37% . Grade ≥ 2 hematological toxicity was linked to increased BM volume receiving low doses, as V10 ≥ or < 90% . A significant relationship emerged between the dose received by pelvic bone and nadirs of blood cells, including white blood cells, absolute neutrophil count, hemoglobin, and platelets . Only V10 and V20 were significantly correlated with hemoglobin nadirs, while no dosimetric parameters were associated with platelets nadirs . 
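As a purely illustrative aid to these dose–volume metrics, the short Python sketch below checks a pelvic-bone dose–volume histogram against the thresholds quoted above (V10 < 75–95%, V20 < 65–80%, V40 < 28–37%) and converts a fractionated prescription to EQD2 with α/β = 3 Gy, the value used for late bone effects in the text. The input values, function names and the choice of the most conservative ends of the published ranges are assumptions for illustration, not part of the cited studies.

```python
# Illustrative sketch only: checks pelvic-bone dose-volume metrics against the
# ranges quoted in the text and converts a fractionated dose to EQD2 (alpha/beta = 3 Gy).
# Input values and the conservative (lowest) bounds are assumptions for illustration.

def eqd2(total_dose_gy: float, dose_per_fraction_gy: float, alpha_beta_gy: float = 3.0) -> float:
    """Equivalent dose in 2-Gy fractions: EQD2 = D * (d + a/b) / (2 + a/b)."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

# Most conservative ends of the published ranges (assumed choice).
PELVIC_BONE_LIMITS = {"V10": 75.0, "V20": 65.0, "V40": 28.0}  # % of pelvic bone volume

def check_pelvic_bone(dvh_percent: dict) -> dict:
    """dvh_percent: e.g. {'V10': 88.0, 'V20': 70.0, 'V40': 25.0} taken from the planning system."""
    return {metric: dvh_percent[metric] < limit
            for metric, limit in PELVIC_BONE_LIMITS.items() if metric in dvh_percent}

if __name__ == "__main__":
    # Hypothetical plan: 45 Gy delivered in 25 fractions of 1.8 Gy to the pelvis.
    print(f"EQD2 (alpha/beta = 3): {eqd2(45.0, 1.8):.1f} Gy")   # -> 43.2 Gy
    print(check_pelvic_bone({"V10": 88.0, "V20": 70.0, "V40": 25.0}))
```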
In cervical cancer patients who were treated with CRT, Elicin et al. found that the volumes of BM and aBM exposed to low-dose RT were associated with a decrease in white blood cells. In particular, aBM V30 correlated with reduced aBM SUV and impacted the white blood cell count three months after treatment and during late follow-up . In patients with cervical cancer who had no lymph node metastasis detected during surgery or by preoperative imaging, and who met the criteria, reduced-volume pelvic RT, rather than whole-pelvis RT, relieved acute and late radiation damage, especially myelosuppression. With a decreased CTV and significantly lower V10 and V20, reduced-volume pelvic RT did not affect long-term survival. Compared with whole-pelvis RT, the incidence of decreased hemoglobin associated with ≥ grade 3 thrombocytopenia toxicity was significantly reduced ( p < 0.05) . During CRT, routine blood and biochemistry investigations are indicated. Myelosuppression, which can increase infection and hospitalization rates, may require transfusions and administration of growth factors. It is also linked with treatment interruptions that significantly worsen outcomes .

Vaginal toxicity
Little attention is paid to vaginal toxicity and the ensuing sexual complications that women may experience after RT. In cervical cancer patients, a systematic review reported more sexual dysfunction and vaginal toxicity after RT . Modifications in sexuality were due not only to physical and treatment-linked factors, but also to physiological and social causes . Vaginal atrophy in up to 50–60% of women affects sexuality and sexual functioning with a notable impact on QoL . RT-related vaginal morbidity is mainly due to vaginal mucosa inflammation linked to microcirculatory alterations, leading to atrophy, telangiectasia, reduced lubrication and finally adhesions, fibrosis, vaginal stenosis and shortening. A 29% probability of grade 2 or more vaginal morbidity through the first two years after treatment was reported, with a 22% actuarial probability of vaginal stenosis at 2 years . Very few studies described vaginal toxicity as a Patient Reported Outcome (PRO). As assessed by PRO questionnaires, a 3-year rate of 29% vaginal dryness was reported in women treated with pelvic RT . Etiopathogenesis of vaginal toxicity is shown in Fig. . Two dosimetric studies showed that improving RT techniques could prevent vaginal toxicity. Vaginal dose de-escalation at EBRT with IMRT, as well as at IR, is expected to reduce vaginal morbidity and thus help prevent sexual dysfunction . According to data on the dose-response relationship , de-escalating the dose to the ICRU rectovaginal point from 75 to 65 Gy reduced grade 2 or more vaginal stenosis by 7%. Targeting multiple vaginal points gives an overview of the dose to the different parts of the vagina and appeared to be a valid strategy for reducing the dose to the vagina and correlating it to clinical outcomes . In particular, doses < 50 Gy to the posterior inferior border of the pubic symphysis with EBRT + BT were associated with a lower risk of vaginal stenosis (44% incidence of grade ≥ 2 vaginal stenosis at five years for doses ≥ 50 Gy vs 26% and 12% for patients receiving 15–50 Gy and < 15 Gy, respectively). Using 3D IR volumetric planning rather than non-volumetric point-based planning, grade 2 vaginal toxicity was significantly reduced (0% vs 27%) .
With a vaginal mucosa dose of under 140% of the fractional IR dose (corresponding to a total EQD2 of 85 Gy), the dose to the ICRU rectovaginal point was reduced from 69 to 64 Gy ( p < 0.001) and the dose to the vaginal surface dropped from 266 to 137 Gy; the D90 HR-CTV dose was not significantly different. Overall, these changes significantly reduced vaginal toxicity more than the non-vaginal dose de-escalated plan . The gonadal function might be preserved in selected cases. Ovarian preservation with IMRT is technically challenging, due to poor ovary visualization at CT planning and high oocyte radiosensitivity. Indeed, sterilization is predicted in 5 and 50% of women whose ovaries receive 2–3 Gy and 6–12 Gy, respectively . Ovarian transposition and ovarian tissue preservation, as cryopreservation and transplantation, are not widely used techniques , but may prevent the onset of menopause, particularly in selected young cervical cancer patients. Still under evaluation are graft size, duration of the restored function according to the site of transplantation and the therapeutic modalities to reduce the risk of tumor recurrence. There is consistent evidence that heterotopic transplantation of ovarian tissue restored ovarian function for 4–5 years . A recent review reported that 98% of participants had restoration of ovarian function with a first ovarian transplantation. Topical application of hyaluronic acid, along with vitamin E and A prevented acute and late vaginal toxicities thanks to their role in cellular differentiation, keratinocyte proliferation, antioxidative properties and support to the extracellular matrix of the vaginal epithelium . They reduced dyspareunia, vaginal mucosal inflammation, vaginal dryness, bleeding, fibrosis and cellular atypia. Regular use of vaginal moisturizers to hydrate the vaginal mucosa and lubricants to minimize dryness and pain during sexual practice is indicated. Further studies are needed to confirm whether local application of mitomycin C prevents vaginal vault narrowing after treatment, as fewer vaginal adhesions and vaginal vault fibrotic changes were reported than in a control group . Toxicity, deriving from hypoestrogenism, includes the genitourinary menopause syndrome, i.e., the set of vulvovaginal signs and symptoms, involving changes in the major/minor lips, clitoris, vestibule, vagina, urethra and bladder . Hormone replacement therapy (HRT), as administered in diverse formulations, effectively treats genitourinary menopause syndrome and is useful in managing post-RT menopausal symptoms . Despite the few studies, systemic or local estrogen therapy is a valid option for acute RT-related changes and preventing the development of later vaginal complications, thanks to its direct effect on epithelial regeneration and anti-inflammatory properties. Vaginal estrogens reduce superficial dyspareunia and relieve urogenital symptoms related to vaginal atrophy and are safe in cervical cancer patients because of minimal systemic absorption through the atrophic mucosa . Although estrogen and progesterone receptors are expressed in 39% and 33% of cervical adenocarcinomas, HRT was not shown to significantly influence disease-free and overall survival . In post-treatment menopausal cervical cancer patients, low compliance rates with HRT were reported partly due to a lack of awareness of its benefits by patients and physicians and partly because clinicians rarely prescribed HRT appropriately, fearing second malignancies such as breast and endometrial carcinoma . 
However, estrogen-only HRT is not advised in this population, due to the risk of secondary endometrial cancer, as residual endometrial function persisting after high-dose RT has been reported ; meanwhile, some evidence suggested that in women undergoing a premature menopause, HRT was not associated with increased breast cancer risk as long as its use continued until the age of the natural menopause . No relationship emerged between HRT usage and the risk of endometrial cancer recurrence . Pelvic floor muscle exercises help relieve vaginal pain and enhance clitoral blood flow, thus promoting better sexual function. Pelvic floor muscle training, alone or in combination with other treatments, seemed effective, even though more studies are needed . Laser therapy was described as promising in the management of vaginal atrophy after RT, as intravaginal CO 2 laser was associated with a gradual increase in vaginal length . There is no consensus on the use of vaginal dilators. Even though some authors suggest they prevent the onset and worsening of vaginal stenosis , a systematic review concluded that evidence was insufficient to recommend them, and that dilation was associated with rectovaginal fistulae and psychological consequences. Despite these findings, vaginal dilators are commonly accepted as a strategy for preventing vaginal stenosis . Furthermore, their long-term use is indicated to reduce G2 late vaginal stenosis in 3D-vaginal cuff IR, but poor compliance might underlie the minimal improvement in vaginal symptoms .

Discussion
Treatment of gynecological cancers may have an important impact on women's overall health and QoL. Beyond the psychological burden linked to the diagnosis of cancer, patients may experience a wide range of side effects due to the multi-modal therapeutic approach which includes surgery, CHT, RT and IR. RT alone or combined with CHT as adjuvant or definitive treatment plays a crucial role in the treatment of gynecological cancers and achieves better outcomes and long-term survival of patients. However, the occurrence of acute and late side effects related to pelvic RT can negatively impact overall outcomes and patients' QoL . This position paper, conceived in the AIRO Gyn Group, aimed at providing radiation oncologists with a succinct but comprehensive view of RT-related toxicities in gynecological cancers. The aims were not only to describe the incidence and pathogenesis of specific toxicities but also, above all, to disseminate evidence for the prevention and treatment of such treatment-related side effects . The ultimate goal was to provide radiation oncologists involved in gynecological cancer treatment with a practical guide to preventing, recognizing and managing specific side effects and their complications, as is required in a global approach to the patients. Since there are no standard guidelines for narrative reviews, we decided to search PubMed, one of the largest free-access biomedical databases. We started our analysis with the year 2005, when IMRT for gynecologic tumors became standard in routine clinical practice in most Radiation Oncology Centers . In our opinion, prevention of toxicity should aim at improving the therapeutic index of RT treatment, possibly by adopting IMRT/VMAT and tomotherapy along with IGRT, which reduce the occurrence and severity of toxicity . Treatment planning should be done with great care, following guidelines, indications and dose constraints for OARs, even though, unfortunately, dose constraints are not standardized for each specific OAR.
Furthermore, to prevent the onset of toxicity and/or reduce its severity before, during and after RT, knowledge of patient and disease features aids radiation oncologists in prescribing drugs and non-pharmacological interventions. Moreover, patients should be carefully informed and trained if a particular preparation is required during RT treatment to avoid side effects, e.g., bladder filling or dietary recommendations, if indicated. During RT treatment, patients should be followed with routine visits for early assessment of the occurrence and grade of toxicities, reported and graded with specific scales . At present it is unknown whether one specific scale is better than others in assessing RT-related adverse events . The administration of questionnaires as PROs might be useful to recognize and prevent acute toxicity, as suggested by Chan et al. . If needed, pharmacological therapy should be prescribed, along with replanning where necessary. Long-term follow-up is needed to investigate not only the clinical outcome of the disease, but also the occurrence of late RT-induced toxicity. Management of late toxicity can require a multidisciplinary approach, and interventions should be based on shared decisions. New evidence suggests other fields of research and intervention. Recent studies focused on the role of the gut microbiome in determining gastrointestinal side effects and possibly treatment outcomes, indicating the need for attention to this aspect during RT. Bone health in menopausal women should not be overlooked, as bone toxicity negatively affects patients' QoL. Lastly, sexual problems in women undergoing treatment for gynecological cancer have been investigated more recently and their real occurrence is underestimated, as PROs revealed that patients did not respond to these specific questions . Patients needing RT should be fully informed about sexual dysfunctions linked to treatment and about approaches for reducing discomfort . Therefore, advances in RT techniques, respect for OAR constraints, and knowledge of the causes and treatment options for RT side effects, along with patient care, can guide radiation oncologists to offer the best RT modalities and to support women during treatment and follow-up. Finally, well-designed, specific investigations are needed to address the still unsolved problems, in order to improve the quality of treatment delivered to patients who will receive radiation therapy for gynecological cancers.
Follow-Up of a Cohort of Patients with Post-Acute COVID-19 Syndrome in a Belgian Family Practice | f6daeef5-979d-4a2b-af60-7201f67dac11 | 9505954 | Family Medicine[mh] | COVID-19 is no longer just an acute syndrome . In nearly 20% to 35% of post acute COVID-19 patients it can develop into a disabling health problem, sometimes lasting several months, called PACS by WHO, first called long COVID by the patients themselves and appeared under this name in the literature in the summer of 2020 . PACS remains also very difficult to define . A Delphi consensus, led by the WHO, was necessary to arrive at a still imprecise definition: PACS occurs in individuals with a history of probable or confirmed SARS-CoV-2 infection, usually 3 months from the onset of COVID-19 with symptoms and that last for at least 2 months and cannot be explained by an alternative diagnosis. Common symptoms include fatigue, shortness of breath, cognitive dysfunction but also others, and generally have an impact on everyday functioning. Symptoms may be new onset following initial recovery from an acute COVID-19 episode, or persist from the initial illness. Symptoms may also fluctuate or relapse over time . It is the first disease to be named by patients themselves through exchanges on social networks (see some internet website addresses in ) by patient advocacy groups whose members identify themselves as long COVID cases. An estimated 1.8 million people living in private households in the UK (2.4% of the population) were experiencing self-reported PACS in August 2022 . These numerous testimonies and the considerable number of reported cases show that this debilitating disease is becoming a serious public health problem. Among COVID-19 survivors more than 50% had one or more long COVID symptoms recorded during the 6-month period post infection . Diagnosis of PACS remains difficult, as the syndrome encompasses distinct groups of heterogeneous symptoms . These have poor diagnostic properties as they may overlap, evolve over time and are sometimes difficult to link to COVID-19. Patients with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) are known to have (highly) similar health complaints . Symptoms affect multiple systems, and often occur after a relatively mild acute illness or an influenza-like illness. They can be multiple and of varying intensity depending on the areas of the body affected. Unbearable fatigue, brain fog and myalgia are the most common symptoms. Cognitive disorders, memory and attention deficits with very impaired quality of life , anomia, dysarthria, frontal behavioural disorders, autonomic dysregulation, headaches, dyspnoea, anosmia, dysgeusia, skin or digestive disorders, psychosocial distress, loneliness, anxiety, depression and sleep disorders have been associated with PACS . In addition, post-COVID patients have an increased risk of psychotic disorders, dementia and epilepsy or seizures . The absence of specific markers means that the diagnosis is based on the patient’s word, which is not without medico-legal consequences . To date, we haven’t reached a full understanding of what PACS really is, nor its natural history . A major UK biobank longitudinal study shows that changes in brain structure were found in some patients with SARS-Cov-2 after several months and in particular a greater reduction in grey matter thickness and a reduction in global brain size . Its pathophysiology is still unclear and there is no specific treatment . 
Moreover, although some research seems encouraging, it is not clear that vaccination protects against PACS . The long-term consequences of COVID-19 can be very devastating, while the socio-economic impact of long COVID on work and social life could be very important . The family doctor can certainly provide information and support to these distressed and sometimes stigmatized patients . Since May 2021, several long-standing patients (known by the doctor, sometimes for years) have come to the family practice unit with a set of similar unexplained symptoms. Neither the biological examinations nor the usual imaging techniques explained the condition of these patients. However, the patients presented a serious alteration of their physical and mental state. The family doctor is well positioned to explore a problem which is then turned into a research question. It is about understanding how COVID-19 is responsible for unexpected symptoms which overwhelm the patients, their families and their employers, who may not understand the situation. It is not unusual for family doctors to be confronted with a case of MUS . The certainty of the diagnosis itself is low and is primarily clinical. A careful history will be essential to support the patient and to identify and name this tiring PACS syndrome, which seems endless and without remedy. In an attempt to answer these questions, the data and testimonies of a cohort of patients seen in daily practice were collected, analyzed and discussed on the basis of the literature as well as exchanges and discussions with colleagues. The aim of this study is thus to provide a qualitative and quantitative description of the health status of fifty-five patients (full data available for 52 of them) with PACS requiring care from May 2021 to July 2022 in a general practice in Charleroi, Belgium. This study aims to contribute to the description of PACS, the understanding of the suffering of the patients, and the physio-pathological phenomena of long duration triggered by SARS-CoV-2.
Human subjects research, i.e., "research involving the collection, storage, or use of private data or biological samples from living individuals", is typical of family medicine. Such research cannot always be planned in advance. The research topic emerges and imposes itself, even though the EMR collection of data has already started. This particular questioning leads to a review of the literature, which modifies the research protocol and the attitude towards the patients who enter the cohort. It is action research in which central concerns, improvement in practice, and increased knowledge and understanding are linked together. The methods are presented in six subsections: clinical data collection; data collection from narrative medicine and qualitative approach; laboratory; nuclear imaging; statistical analysis; and ethics.

In May 2021, in a fee-for-service group practice of two family doctors serving about 2500 patients in a deprived area of the city of Charleroi, Belgium, and taking into account an increasing number of patients with unexplained symptoms after being sick with COVID-19, data collection was initiated on thirty-two patients of the practice. After the publication of a first research report and its dissemination on social networks, twenty-four patients from other treating physicians joined the cohort. In June 2022, a standardized follow-up form was designed for patients with clinical signs of PACS met in the daily busy practice of the two practitioners (MJ & AZ). In addition to the biographical characteristics, this form lists the symptoms of acute COVID, the dates of PCR if available, the severity indicators, the examinations carried out, the evolution of the clinical state over time and the paraclinical examinations requested. Comorbidity is recorded with the ICPC-2 . These records are de-identified and shared with immunologists and geneticists for further studies (see ). Clinical certainty is considered here, i.e., the conviction by the practitioner that, as stated by Llewelyn, "the anamnestic and paraclinical elements at his/her disposal allow him/her to reject with a high probability that the case presented is due to another condition" . Only cases deemed clinically demonstrative of PACS are included. Biological certainty refers here to the availability of a positive PCR test. Severity is assessed by the doctor using the DUSOI/WONCA . The index is based on four parameters: symptom status, complications, prognosis over the next six months without treatment, and treatability, i.e., the expected response to treatment. The score ranges from 0 (no severity) to 5 (extremely severe). The DUSOI/WONCA form with instructions for use is available online . The functional status is assessed by the patients themselves using the COOP/WONCA charts . General condition, ability to perform daily activities, physical condition, emotional management, ability to have a social life and changes in health status are indicated on the 6 corresponding charts by patients rating themselves from 1 (good performance) to 5 (cannot do it). The overall index thus ranges from 6 (excellent form) to 30 (totally impaired functional status). A manual in several languages and instructions for use is available online .
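As a purely illustrative aid, the scoring arithmetic described above can be summarized in a few lines of code. Only the score ranges come from the instruments themselves (DUSOI/WONCA 0–5; COOP/WONCA: six charts rated 1–5, total 6–30, higher meaning worse functional status); the function names, the cut-off variable and the example chart values are hypothetical.

```python
# Minimal sketch of the two scores used in this cohort (illustrative only).

def coop_wonca_total(charts: list[int]) -> int:
    """Sum of the six COOP/WONCA charts, each self-rated 1 (good) to 5 (cannot do it).
    Total ranges from 6 (excellent functional status) to 30 (totally impaired)."""
    if len(charts) != 6 or not all(1 <= c <= 5 for c in charts):
        raise ValueError("Expect six chart ratings between 1 and 5")
    return sum(charts)

def is_severely_impaired(total: int, cutoff: int = 20) -> bool:
    """Cut-off of more than 20 points used in this practice to flag a very impaired status."""
    return total > cutoff

# DUSOI/WONCA severity is a single physician-rated value from 0 (none) to 5 (extremely severe);
# in this cohort, ratings of 3 (severe) or 4 (very severe) were the most frequent at the first visit.
example_charts = [4, 4, 3, 4, 3, 4]            # hypothetical patient
total = coop_wonca_total(example_charts)       # -> 22
print(total, is_severely_impaired(total))      # -> 22 True
```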
A detailed clinical medical case report was written by the doctor for each patient with highly significant signs of PACS. The first six patients were visited by GK, a medical student who was not involved in their care. According to the principles of narrative medicine , GK conducted the interviews on the basis of standardized semi-open questions. These recorded interviews were transcribed in full and will also be used for further qualitative research. GK then reviewed and, if necessary, corrected the clinical case reports with each patient.
None of the laboratory tests usually ordered in primary care proved to be contributory, and consequently they are not mentioned here. The results of PCR tests and COVID humoral serology are indicated with the dates of their performance, when available. COVID-19 serology is not routinely requested in Belgium, as it comes at a charge. All but four patients agreed to provide a blood sample for genetic and immunological analysis by members of the network COVID Human Genetic Effort ( Covidhge.com ) (accessed on 1 September 2022), an international consortium aiming to discover the human genetic and immunological bases of the various clinical forms of SARS-CoV-2 infection and the particular characteristics of PACS patients . These immunological tests should also remove diagnostic uncertainty for patients without a PCR test. Indeed, the uncertainty of the biological diagnosis can have psychological, medico-legal and clinical consequences. The results of these analyses are not yet available and will be published in the near future.
A Positron emission tomography with 18 fluoro-D-glucose integrated with computed tomography (18FDG PET-CT) can provide precise information such as the hypo-metabolism that affects certain brain areas in certain PACS with a strong neurological component , and is considered the gold standard of nuclear imaging. Single Photon Emission Tomography-Computed Tomography (SPECT-CT), on the other hand, is more accessible and may reveal a metabolic brain disorder similar to that found in Alzheimer’s disease or stroke . SPECT-CT 99mTc-ethyl cysteinate dimer (ECD Tc-99m), an old technique coupling tomoscintigraphic and CT images acquired successively during the same examination, allows the assessment of brain perfusion using complex technetium-99m (Tc-99m) labeled molecules that are able to cross the blood-brain barrier . The binding of these tracers is dependent on cerebral blood flow. Tc-99m is used for brain perfusion studies because of its high first-pass extraction fraction and high affinity for the brain . With due precaution, Tc-99m can be disposed of as normal waste. This examination, easily available in primary care in Belgium, was requested only in patients judged by the physician to be strongly or very strongly affected or with a severely impaired functional status. Brain CT scans and Nuclear Magnetic Resonance (NMR) scans, if any, were also collected.
Binary comparisons were performed by aggregating patient groups 1 and 2 (cured) and comparing them with group 3 (still ill) (see ). Data were analyzed using SPSS Statistics for Windows, version 26.0 (IBM Corp., Armonk, NY, USA). Descriptive data were presented as n and percentage or mean and standard deviation (SD). Categorical variables were compared using the chi-square or Fisher's exact tests, while the independent-samples t-test or the Mann-Whitney U test was used to check for differences between the cured versus still ill patients regarding age, COOP total scores, and days since COVID symptoms. p-values below 0.05 were considered statistically significant (see ).
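For transparency, an equivalent open-source sketch of these comparisons (chi-square or Fisher's exact test for categorical variables; independent-samples t-test or Mann-Whitney U test for continuous variables) is shown below using Python and SciPy. The analyses themselves were run in SPSS; the variable names and toy values here are assumptions for illustration, not the study data.

```python
# Equivalent of the SPSS analyses described in the text, sketched with SciPy
# (toy data only; the real analysis used the cohort dataset in SPSS v26).
from scipy import stats

# Binary grouping: grades 1-2 aggregated as "cured", grade 3 as "still ill".
cured_age = [35, 42, 51, 28, 47, 39]        # hypothetical ages
ill_age = [44, 56, 38, 61, 49]

# Normally distributed continuous variable -> independent-samples t-test;
# otherwise -> Mann-Whitney U test.
t_stat, p_t = stats.ttest_ind(cured_age, ill_age, equal_var=False)
u_stat, p_u = stats.mannwhitneyu(cured_age, ill_age, alternative="two-sided")

# Categorical variable (e.g., sex) -> chi-square, or Fisher's exact test
# for a 2x2 table with small expected counts.
table = [[21, 10],   # cured: women, men   (hypothetical counts)
         [17, 4]]    # still ill: women, men
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)

print(f"t-test p={p_t:.3f}, Mann-Whitney p={p_u:.3f}, "
      f"chi-square p={p_chi:.3f}, Fisher p={p_fisher:.3f}")  # alpha = 0.05
```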
Patients whose medical records are managed by doctors in the patient-doctor Belgian contractual framework have expressly given their written consent to the use and publication of their personal data in an anonymous manner. The ethics committee of the University Hospital of Liege, Belgium, gave its full approval to this study under the number 2022/23.
The results are organized into five subsections: clinical data; clinical PACS evolution over two years; PACS clinical picture; nuclear imaging interest; and narrative and SPECT-CT images of some exemplary cases (patients MGA010, MGA017, MGA005 and MGA058). In the latter, some of the characteristic features are underlined and highlighted by relevant SPECT-CT images; other cases are described in the clinical research report .

A total of fifty-five patients presenting with unusual or medically unexplained symptoms (MUS) were followed up in a family medicine practice between May 2021 and July 2022. The fifty-five patients, aged 12–79 years (mean 42.9 years at the start of the study), 40 women (72.7%) and 15 men (27.3%), were clinically suspected of having PACS. PCR was positive in thirty-three (58%) of the fifty-five patients. Twenty-three PCR were either negative or not available (see ). Among the 55 cases reported, as assessed by the doctors using the DUSOI/WONCA score of gravity, twenty-three patients showed a severe (rated 3) and twenty-six a very severe condition (rated 4) at their first visit. Functional impairment was rated at more than 20 points (very impaired) on the six COOP/WONCA charts by thirty-seven patients. Comorbidity prior to PACS, i.e., the distribution of the major health problems (diagnostic index) present before the acute episode, is detailed in for the first 34 patients, coded with the ICPC-2, the main classification used in primary care. This kind of distribution is usual in a family practice. It is not surprising to find a high prevalence of locomotor (L) problems in this working population, in which social (Z) and psychological (P) problems are also frequent. Among nutritional and metabolic problems (T), there are seven obese patients, five of whom have had bariatric surgery, and one patient is also diabetic. The circulatory (K) chapter includes a dissecting carotid aneurysm and a transient ischemic attack, both at the moment of the acute COVID-19. Each of these last two patients recovered completely from these vascular damages, which suggests that they were direct consequences of the acute COVID-19. From a locomotor point of view (L), two patients have an autoimmune disease: rheumatoid arthritis and psoriatic arthropathy. One patient had a miscarriage (W for pregnancy) during the acute COVID-19.
From a clinical point of view, the evolution of the disease takes many months. After one year of follow-up, from a pragmatic point of view and for the management by the family practice, three types of clinical evolution can be distinguished since the onset of acute COVID-19 (see ) (data on the first 52 of the 55 patients):

Grade 1, mild PACS; 16 patients (9 female, 7 male): the duration is 3 to 8 months, the impairment is mainly respiratory with fatigue, sternal pain and exhaustion on effort, and there are no cognitive or mnesic disorders. Other aspecific symptoms (skin redness, paresthesia, anosmia, dysgeusia, vertigo) may be present but disappear with time. These patients generally had a low DUSOI severity index and a low functional impairment score at the beginning of care, and they return to normal life after several months, sometimes with sequelae such as recurrent chest pain or taste or smell disorders.

Grade 2, severe PACS; 15 patients (12 female, 3 male): the duration is 6 to 18 months, with extreme fatigue, effort exhaustion, anomia, cognitive disorder and mnesic disorder. Other symptoms of the digestive, cardiac or autonomic system could be present. Nevertheless, all the symptoms diminish after 12 to 18 months and the resumption of activities is possible, sometimes with sequelae (procedural memory disorder, fatigue on exertion, sudden deep breathing). A relapse is possible. These patients most often had a high severity index (DUSOI 3 or 4) and poor functional status (COOP > 20).

Grade 3, very severe PACS; 21 patients (17 female, 4 male): after 12 to 27 months (the maximum at the time of writing), patients are not able to resume their activity, or only part-time at the most. Exhaustion is constant, efforts are impossible, cognitive rehabilitation is useless, and there are hypersomnia, weight gain due to inactivity, persistent severe memory disorders and, of course, considerable anxiety about the future and the feeling of having contracted an unknown and incurable disease. The repercussions on family life are very severe. The oldest patient (F, 79) is hospitalized with a hypothetical diagnosis of Alzheimer's disease. At the beginning of care, these patients could not be distinguished from the others, and no predictive elements were found that would allow us to make a prognosis of their evolution.

Concerning vaccines
Seven patients contracted COVID-19 at the very beginning of the pandemic in early 2020, before PCR tests were available. During acute COVID-19, fifty patients remained at home while two patients were hospitalized: a 79-year-old patient was hospitalized for several months, whereas another, 36 years old, was hospitalized for two days. Eight patients had acute COVID-19 several times. The relationship between PACS and vaccination is presently unclear. Seven patients were not vaccinated, and all vaccinated patients were vaccinated after contracting COVID-19. Thirty-six presented a reaction to the vaccine: six a local reaction (pain, redness) and thirty a systemic reaction, sometimes very severe (fever for several days, fatigue, cognitive problems). Four patients temporarily improved after the vaccine while two worsened. One patient improved for 6 months before relapsing. Three patients believed that the vaccination triggered their long COVID.
The symptoms experienced during PACS are numerous and form a recurrent picture with huge variations between patients. A previously unknown disabling state of exhaustion, inability to exert oneself with dyspnea on exertion, and brain fog with memory impairment and word retrieval deficit (anomia) are the most common features. Five patients mainly had a respiratory form of PACS. Paresthesias in unexpected dermatomes, hematomas and skin spots, chest pain, and involuntary movements of the limbs or fingers are also described. A short verbatim excerpt from one patient is transcribed in . To give a global picture, the symptoms of each patient are aggregated and represented in as a word cloud generated using wordclouds.com (accessed on 1 September 2022). It is not possible to determine a date of onset for PACS. It is only possible to know when a doctor mentioned the diagnosis. Indeed, the evolution of acute COVID-19 to PACS is insidious and the patient does not always make the link between the symptoms and this new disease, which is in any case not recognised by the many doctors consulted. The identification of PACS took between 2 and 20 months after the acute COVID-19, resulting in a high degree of medical wandering and a feeling of being unrecognized or abandoned among many patients. The patients who came for a consultation had previously been given a wide range of assessments or diagnoses. A young girl, an elite athlete suffering from exertional exhaustion, was called a lazy teenager by her teacher. Such diagnoses as angina pectoris, pulmonary embolism, multiple sclerosis, depression, fibromyalgia, burnout, Alzheimer's disease, generalized anxiety disorder or post-traumatic shock were recorded in the emergency department reports or by the consulting specialists.
SPECT-CT is useful for following the cerebral evolution. Fifteen patients (of the first 52) underwent a follow-up SPECT-CT after 4 to 9 months. Eight patients have an improved image showing regression of the previously described anomalies, well correlated with the general improvement of their condition. Seven patients who are still very ill and severely disabled, with more than 25 points on the COOP/WONCA scale, have worse images, with protocols such as: "appearance of a right thalamic hypofixation and parieto-occipitotemporal hypofixation currently more marked on the left than on the right" (not displayed), or "with appearance of a right posterior parietal involvement compared to the previous examination" (not displayed). Out of the 55 patients of this series, SPECT-CT was requested in thirty-two patients, for whom all three of the following clinical criteria were met (a minimal formalization of this rule is sketched at the end of this subsection):
(1) clinical symptoms suggesting a brain disorder in the context of the COVID-19 pandemic;
(2) a degree of severity of 3 or 4 on the DUSOI/WONCA;
(3) a functional status of more than 20 points on the COOP/WONCA.
Having a positive PCR is not a determining condition to affirm a PACS. In this series, one of the patients with a negative PCR did not even remember having had COVID-19. PCR "proof" was not available in 23 of 55 patients, either because it was negative or because the procedure did not exist at the beginning of 2020. Of the fifteen SPECT-CT requested, two of which are documented in and and three represented in , and , all but two showed altered cerebral perfusion, sometimes extensive. In two cases of PACS without memory impairment, there was no impairment of thalamic or subthalamic perfusion. In one reported case, SPECT-CT was not requested due to the patient's desire to become pregnant. NMR scans were generally non-contributory, apart from minimal lesions. A follow-up NMR showed normalization of an aneurysmal lesion present in the patient during acute COVID-19.
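The three cumulative criteria listed above can be written as a simple decision rule, formalized in the sketch below. This is only an illustration of the rule stated in the text, with hypothetical field names; in practice the decision to request a SPECT-CT remained a clinical judgment.

```python
# Formalization of the rule used in this practice to request a brain SPECT-CT:
# all three criteria had to be met (a positive PCR was NOT required).
from dataclasses import dataclass

@dataclass
class PacsAssessment:
    brain_symptoms: bool   # clinical symptoms suggesting a brain disorder (pandemic context)
    dusoi_severity: int    # physician-rated DUSOI/WONCA, 0-5
    coop_total: int        # patient-rated COOP/WONCA total, 6-30

def spect_ct_indicated(a: PacsAssessment) -> bool:
    return a.brain_symptoms and a.dusoi_severity in (3, 4) and a.coop_total > 20

# Hypothetical example
print(spect_ct_indicated(PacsAssessment(True, 4, 26)))   # -> True
```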
This patient is forty-six years old, asthmatic, suffers from Meniere’s syndrome with severe vertigo and from common disabling migraine. The patient had never before complained about or been treated for a mental health problem. In November 2020, the patient developed acute COVID-19 with pneumonia (ground glass areas on the X-ray), and was treated at home while having the following symptoms: fatigue, dyspnoea at the slightest effort, chest pain and anosmia. After one and a half months, the patient writes this text message: “ For the past few weeks, I have been having obsessive and fixed ideas, I have experienced paranoia about my colleagues, violent anxiety and sometimes morbid ideas with a consequent state of despair, crying spells that cause chest pain, recurrent nightmares, and I sometimes have the feeling of being in a waking sleep where I am aware of having very strong chest pains, painful bones and joints …In the morning, I have no strength, no motivation, I feel reclusive and even persecuted …There is something wrong ”. Subsequently, the chest pain persists and dizziness worsens, to the point that she has twelve dizzy spells in just two weeks. She finds it difficult to concentrate, can’t stand noise, loses her words and her memory. She is exhausted and develops sleep disorders. She was unable to work. In May 2021, the decrease in cerebral vascular flow is evident on the CT-SPECT images (See ). The same patient, whose condition has gradually improved since, had a second scan (not displayed here) three months later. The protocol of this second scan was very reassuring: a discreetly heterogeneous tracer fixation is observed, with clearer left frontal, left parietal and right parietal hypofixations, and the presence of periventricular hypocaptation. Compared to the previous workup, there is an improvement in cerebral fixation with a decrease in fixation heterogeneity and periventricular hypocaptation . In February 2022, 15 months after having contracted COVID-19, the patient appears to improve.
This patient is a warehouse clerk, aged 51, who was being treated for rheumatoid arthritis and brachialgia with cervical foraminal stenosis. In February 2020 he fell ill with a flu-like illness, which turned out to be acute COVID-19; at that time, a PCR was not performed. Several months later, he presented with a characteristic PACS with anxious depression, disabling headaches, exertional exhaustion, chest and muscle pain, paresthesia, visual disturbances, nervousness, eye burning, gastrointestinal disturbances, malaise and, above all, a major worry regarding his future. The patient no longer felt like himself. The SARS-CoV-2 serology was negative; consequently, there was no evidence of an occupational disease. In June 2021, the brain SPECT-CT showed the characteristics of vascular encephalopathy (see figure). In the spring of 2022, he was still off work and his functional state was very impaired. A claim for recognition of an accident at work was made impossible by the absence of a positive PCR test. Two years after the initial infection, in February 2022, he had not recovered, and the follow-up SPECT-CT showed a worsening of his condition (see figure).
A 59-year-old mother and cleaning lady, currently on disability for osteoarthritis, is not sure what happened to her. She had had COVID-19 with headaches, asthenia, muscle and joint pain as well as anosmia and ageusia, the latter two symptoms having persisted for 3 months. Now she sometimes has difficulty pronouncing certain words, so much so that she has stopped speaking. There are also problems with her writing abilities: she has always been somewhat dysgraphic, but now she has lost confidence in herself and her knowledge, and she is afraid of making mistakes. She has problems with balance and concentration, occasional coughing fits, sometimes tightness of breathing and dyspnoea on exertion. Her state of exhaustion persists and prevents her from doing her work; she feels weak and anxious. She also suffers from memory loss, forgetting where she puts her keys and sometimes suffering from a word-retrieval deficit. The SPECT-CT shows small but significant brain damage (see figure). By the end of the fourth month she had almost recovered (see figure).
Each patient has a unique life and health journey, but this one is particularly illustrative. He is 49 years old and holds executive managerial functions. He had already experienced a burnout and sleep disorders after a divorce, and has had bariatric surgery for obesity. He was a great sportsman, bordering on sports addiction. He has had acute COVID-19 three times: the first in March 2020, contracted at work, when there was no PCR available, and he stayed in bed for two weeks; the second in October 2020, with a positive PCR, fatigue and dizziness; the third in November 2021, with anosmia and dysgeusia. Interestingly, the SARS-CoV-2 serology was negative in September 2021. Since the beginning of 2020 he has been consulting for severe cognitive and memory problems. The psychiatrist suggested burnout; the neurologist suggested Alzheimer’s disease. A first SPECT-CT in April 2021 showed an alteration of cerebral perfusion. 18FDG PET-CT and lumbar puncture ruled out Alzheimer’s disease. The patient was referred to the psychiatrist, who prescribed Rilatine. In June 2022, 19 months after the first COVID-19 episode, a second SPECT-CT confirmed the first one and the diagnosis of PACS was made. The neurological and behavioural disorders were severe. The cognitive state is very altered and fluctuates: sometimes he is as he was before; sometimes he feels lost and does not know where he is. He can no longer stay in a restaurant with friends because he is overwhelmed with information, he may completely forget why he is in a store, and he may totally forget what he did during the previous hour when he has just gone for a walk with a friend. His cognitive and memory impairment has a strong impact on his social life; friends drift away and no longer understand him. He has incomprehensible physical alterations: his hands open by themselves, so that while he is carrying a shopping package, his hands open and the package falls; there is a loss of control and the object is dropped. He has paresthesia of the fingertips and lateral hand tremors, sometimes very sharp, unknown before. He tried to resume sport in September 2020 but is quickly exhausted and has to mow his lawn in several sessions. Since 2020 he has repeatedly damaged his car, indicating a change in spatial perception. The imaging requested is displayed in the figures.
4.1. Clinical Approach
4.2. Global Indicators of Severity and Functional Status
4.3. Limitations
4.4. Hypo-Perfusion and Hypercoagulation Look Central to PACS Pathophysiology
4.5. Impact of Imaging Diagnosis on Patients’ Experiences
4.6. Severe Reactions to Vaccines in Many Patients
4.7. An Empathetic Therapeutic Approach
4.8. Further Studies

It is currently not possible to distinguish between vaccine immunity and natural immunity in these patients. Moreover, there is no biological proof that these cases can all, without exception, be tagged as PACS. In an attempt to disentangle those questions, a collaboration was initiated with the Department of Microbiology of the Catholic University of Leuven (Rega Institute; https://rega.kuleuven.be/ , accessed on 1 September 2022) and the Karolinska Institute (Petter Brodin; https://ki.se , accessed on 1 September 2022). In the framework of the European Consortium for Genetic and Immunological Studies on COVID-19 (COVID-HGE consortium; https://www.Covidhge.com , accessed on 1 September 2022), the patients in this study will receive extensive blood tests free of charge. This joint genetic and immunological study tries to understand the occurrence of PACS in some COVID-19 patients. To date, forty-eight patients have agreed to donate their blood for these in-depth analyses. The full results will only be available in a few months and will be published in a separate article.
Between May 2021 and July 2022, data from fifty-five patients were collected and are presented here; thirty-two of them, of high concern, were imaged by SPECT-CT. Their management in family medicine is described, highlighting the importance of medically unexplained symptoms and attention to the person, as well as the use of nuclear imaging in the assessment of severely affected patients. The general context is that of the COVID-19 pandemic. These cases have been progressively identified in general practice consultations since May 2021 and, little by little, the notion of PACS has emerged as a coherent explanation for severely altered functional status in known patients. The initial questioning arose during contacts with an abnormally tired patient with impaired memory who suddenly improved after two Comirnaty vaccines; this was the first case (see case MGA001). Since then, more and more patients have been and are still being identified as carrying varying degrees of PACS stigmata.

In view of the polymorphous clinical picture of this syndrome, it is not surprising that emergency room colleagues or specialists have put forward diagnoses that are as diverse as they are multiple. Angina pectoris, Alzheimer’s disease, pulmonary embolism, hyperventilation, fibromyalgia, traumatic shock, burnout, anxiety attacks and post-traumatic stress syndrome have all been evoked as peremptory diagnoses that are destabilizing for the patient. Whether in the emergency room or with specialists, the cross-sectional doctor’s vision of the patient is limited to the present moment. For their part, GPs have a long-term, repetitive, longitudinal view of the patient’s life and well-being, and can assess changes in health status over time. The initial diagnosis is essentially made with a clinical narrative approach, based on carefully listening to the patients on multiple occasions. The relationship with most patients had been established for many years, and it was very clear that these patients were undergoing a profound change in their condition. This study has a mixed-method approach, with both a quantitative and a qualitative component.

The symptoms presented by all patients evoke the same clinical picture described in many studies. Most patients were not aware that their condition was related to COVID-19. The triad of exhaustion, exertional dyspnoea and memory impairment seems to be recurrent. Cerebral disorders dominate, either by thalamic and subthalamic damage (memory loss), cortical damage (brain fog, anomia, hallucination, paresthesias, abnormal movements), or by bulbar damage (anosmia, dysgeusia, dizziness, orthostatic disorder), although in anosmia both central and peripheral alterations could be demonstrated. No patient had contributory NMR images, although NMR can reveal cerebral microvascular lesions in severe COVID-19. SPECT-CT was ordered in thirty-two patients judged severely affected and reporting significant functional impairment. Cerebral perfusion changes were visible in twenty-nine of the thirty-two patients. The lesions found are consistent with the severity of the problem experienced by these patients. Brain SPECT-CT, as a diagnostic tool for the demonstration of brain damage, measures brain perfusion and, indirectly, brain oxygenation. SPECT-CT was useful in 90% of suspected PACS patients, showing a brain disorder. This kind of result is highly unusual in a family practice setting.
In daily practice, selection bias is the rule, as the doctor is actively searching for cases; a 29/32 (90%) yield is nevertheless very surprising. 18FDG PET-CT allows brain metabolism to be assessed directly. In a recent study by Verger et al., 47% of the scans of 143 patients with suspected neurological PACS were visually interpreted as abnormal. 18FDG PET-CT has superior sensitivity over SPECT-CT and provides better contrast and spatial resolution. Studies suggest superiority of 18FDG PET-CT over SPECT-CT, but the evidence base for this is actually quite limited. The spectacular but very expensive 18FDG PET-CT images reinforce the case for SPECT-CT as a much more cost-effective option, from both an environmental and an economic point of view, for the diagnosis of PACS brain perfusion disorders in primary care. Nevertheless, the question of which patients should benefit from this procedure has to be addressed cautiously. Fifteen affected patients each had a follow-up SPECT-CT after three to nine months. The SPECT-CT showed a clear improvement in eight cases and an aggravation in seven, corresponding with the clinical improvement or aggravation of the condition. An improvement in metabolism in PACS at six months has already been shown using 18FDG PET-CT. In the patients discussed in this study, the SPECT-CT examination appears to be useful for the follow-up of the most severe cases.
Our indicators of severity (DUSOI/WONCA) and functional status (COOP/WONCA) are specific to general practice and are WONCA instruments. The DUSOI/WONCA is an indicator of severity as estimated by the doctor, while the COOP/WONCA charts are a reliable indicator of the patient’s opinion of his or her own condition. Many publications deal with the impact of PACS on the health status of patients, sometimes with similar indicators, sometimes with more detailed ones or, for example, some specific to fatigue. Our indicators aim to estimate the overall condition of a patient and not the precise impact on a single function. However, patients have other intercurrent health problems, and the indicators may be influenced by these new elements. It is important to note that the clinical elements, i.e., the clinical examination, the severity indicator (DUSOI) and the functional status indicator (COOP), which determine whether to request a SPECT-CT, have significant relationships with the severity of the outcome, both from the physician’s and the patient’s perspective.
Clinical medicine tries to integrate the experience of the clinician, the values of the patient and the best scientific information available. These three pillars are also the pillars of EBM. In this case, however, there is as yet no, or only very limited, evidence; this evidence is under construction as patients and doctors work together to develop knowledge. It is a type of action research by which, as J. Pols points out, it is up to us to articulate the knowledge that patients develop.

This study is not without its limitations. This work was carried out in a busy family medicine practice in pandemic time and does not benefit from all the rigor that high-level research requires. The present study is observational, as is usual in general practice: examinations are carried out if they are beneficial to the patient, and a practitioner would not ask for a SPECT-CT without an expected benefit for the patient. In his seminal paper on problem-solving in general practice, Ian McWhinney wrote that “the disease presented to family doctors is often in an unorganised state”. The practitioner then confronts the information provided by the patient with his or her knowledge and constructs a frame of reference acceptable to both doctor and patient. Clinical decisions and methods of patient care should be based on controlled experiments and not on intuition; however, the knowledge of practitioners can also contribute to this and be studied, shared and challenged. It is also necessary to emphasize that the field of research on PACS and the number of publications on this subject are growing exponentially. The initial observation of the present study dates back to May 2021 and, since then, knowledge on PACS has advanced considerably. There was no pre-organised framework, since the condition is new, undescribed and not part of the stock of knowledge accumulated to date. It is the doctor’s ability to listen to the patient, to be surprised and to think about a new situation that is central to the process of uncovering the new condition. At the same time, the doctor’s experience is profoundly transformed by the research, since he is an actor in it. Consequently, the quality of the observations changed between the first and the fifty-fifth patient in the presented study. The reproducibility of the SPECT-CT technique in PACS deserves to be studied, as does its comparison with 18FDG PET-CT.

SPECT-CT is not an expensive examination, invoiced at 222 € (∼250 $) to the Belgian national insurance provider. The environmental impact of labelled technetium (Tc-99m) must also be taken into account, although Tc-99m used in medical diagnostics has a short half-life of six hours and does not remain in the body. Its main advantage is that its economic and environmental cost is much lower than that of the isotopes used for 18FDG PET-CT, despite the production of large quantities of highly radioactive waste during its manufacture. The practitioner requesting such radio-imaging should be aware of the environmental impact of health-related activity.
Whenever the severity of the case and the cognitive disorders are in the foreground, the SPECT-CT shows perfusion abnormalities (29 abnormal scans out of 32 ordered). These twenty-nine patients have an encephalopathy by hypo-perfusion, and the clinical pictures are so similar that one can hardly evoke another hypothesis to explain the symptoms. Mejia et al. suggest a deleterious effect of SARS-CoV-2 infection on systemic vascular endothelial function. Hohberger et al. showed an impaired capillary microcirculation in the macula and peripapillary region. Fogarty et al. showed that persistent endothelial cell activation may be important in modulating the ongoing pro-coagulant effects in convalescent COVID-19 patients and thus contributes to the pathogenesis underlying the COVID-19 syndrome. Hypercoagulation, vascular complications and microclot formation in COVID-19 have already been identified. PACS is accompanied by increased levels of antiplasmin, and pericytes, the multipotent parietal cells of capillaries, may play an important role in microvascular PACS alterations. Aggregated platelets, microhemorrhages and ischemia appear to play a central role in neuronal injury by reducing blood flow, with a concomitant reduction in oxygen and glucose, in SARS-CoV-2-infected non-human primates. The altered mental status may be due to encephalopathy caused by a systemic disease or to encephalitis directly caused by the SARS-CoV-2 virus itself, without forgetting the dramatic consequences of cognitive loss on the mental health of patients.

These lesions can be compared to the skin lesions that accompany COVID-19 and PACS, especially in young subjects, with a duration of 7 to 150 days and up to 22 months. According to Mehta et al., these lesions clinically resemble vasculopathy, with microvascular abnormalities observed on nail capillaroscopy. This is an argument for drawing an analogy between PACS and an autoimmune damage or disorder affecting vascular endothelial surfaces. The question arises as to whether the cutaneous vascular lesions in COVID-19 are similar to the cerebral endothelial lesions inducing the perfusion disorder in PACS. Nirenberg et al. have shown the presence of fibrin thrombi occluding capillaries, and endothelial swelling without vasculitis, in the toe of a COVID-19 patient. Furthermore, as the skin lesions heal without consequences, one can assume that the cerebrovascular lesions found will evolve in the same way.

Chest pain on exertion could also be attributable to microvascular lesions or myocardial inflammation. Singh et al. show that the impaired systemic oxygen extraction observed in exhausted PACS patients is attributed primarily to reduced oxygen diffusion in the peripheral microcirculation. According to an observational study by Camazon et al., coronary microvascular ischemia is the underlying mechanism of persistent chest pain. The perfusion abnormalities and the cognitive and memory impairment could be explained by the expression of ACE2 in the brain stem, and of other receptors in the cortex, by the vascular wall, which makes them vulnerable to the virus. Several studies highlight hypo-perfusion and subsequent hypo-metabolism as one of the pathophysiological explanations of the clinical symptoms. Hyper-inflammation caused by COVID-19 may be mediated by MCA, which has also been hypothesized to cause PACS symptoms.
The brainstem regulates respiratory, cardiovascular, gastrointestinal and neurological processes and has a relatively high expression of ACE2 receptors compared to other brain regions. A perfusion disorder there could then explain hyperventilation, abdominal pain or central anosmia, for example. Central perfusion abnormalities, as seen in the present study, and the subsequent hypometabolism could explain the cognitive and memory disorders. One can then understand why two patients in this study, who did not present with thalamic and subthalamic alterations although their cortical perfusion was very disturbed, had intense fatigue, brain fog and exertional exhaustion but, surprisingly, no memory disorder. However, the mechanism of action of the SARS-CoV-2 virus on the brain is subject to many hypotheses, including viral persistence. Galan et al. have shown that individuals with PACS had significantly increased levels of functional memory cells with high antiviral cytotoxic activity, implying that SARS-CoV-2 infection is still ongoing in PACS patients. This viral persistence could explain the recurrence of symptoms described by the patients.
In most cases of PACS, patients know they are sick, but no one can put a name to their illness, and they go from doctor to doctor, identified as medical wanderers with a nameless disease. Knowing with certainty that their experiences are not imagined but correspond to clear and visible lesions was in each case a shock for the twenty-nine patients whose SPECT-CT showed altered cerebral perfusion. Despite the anxiety that comes with the diagnosis, the patients are reassured to know that they “are not crazy”, that they “knew that there was something”, and that “their family, their employer will finally believe them”. The clinician’s gradually increasing knowledge and understanding of the clinical picture also reassures the patients who have not had a SPECT-CT. For example, a young girl, a passionate gymnast who was called a lazy teenager by her teacher, can finally explain why she could no longer make any effort and why it took her more than 6 months to be back at 80% of her capacity. A truck driver, who had to stop driving to sleep during his working hours, was reassured by the letter of explanation sent to the occupational physician. Another patient now knows from the SPECT-CT images that the previously unknown condition she experienced for more than six months is neither a mental disorder nor Alzheimer’s disease.
All but seven patients were vaccinated, most of them after contracting COVID-19. Thirty patients had significant systemic side effects lasting from one day to two weeks. None of them is willing to be vaccinated again, as they are afraid of late effects. Some patients believe that the vaccine, and not COVID-19, is responsible for the onset, aggravation or reactivation of their disease. Three had an improvement of their symptoms, ranging from fleeting to lasting up to six months, before a recurrence. Although SARS-CoV-2 vaccination is not associated with a decrease in quality of life or worsening of symptoms, and there is no strong evidence to suggest that vaccination improves the symptoms of PACS, this remains a sensitive issue because individual experiences are more important to patients than anything that science can demonstrate.
The announcement of the diagnosis is in itself a therapeutic act and must be carefully considered. There are as yet no definitive, evidence-based recommendations for the management of PACS; patients should be managed pragmatically and symptomatically. Psychotherapeutic care, neurocognitive rehabilitation and physiotherapy seem to provide some comfort, although the efficacy of these treatments is difficult to assess. The concept of neuroplasticity, known from other cerebral pathologies, can be used here to encourage the patient to retrain his or her memory. In the Netherlands, a two-arm multicentre randomised controlled trial (RCT) showed that a comprehensive rehabilitation programme called “Fit after COVID” significantly reduces fatigue. Whether through cognitive exercises or through the use of cognitive applications for smartphones, patients must be encouraged to slowly and gradually regain lost ground. One could of course wish for neurocognitive rehabilitation by specialists in the field, but rehabilitation centres are lacking in Belgium. Getting the body back into action, despite breathing difficulties and pain, is absolutely essential, while respecting the limits of each patient; physiotherapy also has an important role to play. Recently, the Belgian social security system has facilitated access for patients identified as having PACS to several health professions such as psychologists, speech therapists, dieticians, occupational therapists and physiotherapists. To date, no drug has been shown to be of proven value in the treatment of PACS, although many substances are proposed as symptomatic treatments. Considering the known coagulation disorder in COVID-19 and the low vascular perfusion seen in PACS, one could cautiously propose low-dose aspirin to limit the risk of micro-thrombi in the damaged vessels, as is done at higher doses in acute COVID-19.
PACS has a high prevalence in primary care for those willing to look for it. Clinical skills and narrative medicine are essential to identify and understand patients’ experiences; this requires time, open-mindedness and empathy. Cerebral hypo-perfusion demonstrated by SPECT-CT seems to correlate with the clinical symptoms in this cohort of PACS patients, and this needs further study. Uncertainty about the primary acute infection remains a problem. The participation of 48 patients in the European Consortium for Genetic and Immunological Studies on COVID-19 will probably provide some answers and further questions. The impact of PACS is substantial, with many social and economic implications.
A force sensor improves trainee technique for rigid endoscopy
A high accuracy, thin-film, force sensitive resistor (model RP-S40-ST; costing £11.50, capable of detecting weights of 20–10 000 g) was connected to a multimeter (Neoteck Pocket Multimeter; costing £14.99) ( ). The force sensitive resistor pad increases its resistance (in ohms) when force is applied. This is measured by the multimeter, which was set to a baseline resistance of 2 kΩ to allow for measurable recordings. As the relationship between resistance and force is logarithmic, different weights were tested on the sensor, and measurements of the resistance were used to form a simple calibration curve. In order to provide a relatable metric for trainees, we provided feedback in weight (grams) from the calibration curve; however, different metrics can be used if the model is replicated. The study protocol was granted ethical approval at the department's clinical governance meeting and no ethical conflicts were identified. All procedures contributing to this work complied with local clinical governance guidelines. The force sensitive resistor sensing pad was attached to a gum guard inserted on the upper teeth of a manikin, with part of it resting on the palate ( ). Rigid endoscopy was performed using a standard 30 cm adult oesophagoscope and light lead. The operator was able to visualise the reading from the multimeter as the oesophagoscope moved along the upper teeth. Endoscopy was performed on the manikin by consultants, registrars and junior trainees. The juniors had never performed the procedure and had graduated from medical school within the previous three years. Without coaching, the junior trainees were given five attempts at intubating the oesophagus. The ‘live’ readings from the force sensitive resistor and multimeter enabled them to see whether the weight they exerted on the upper teeth could be improved, with direct numerical feedback ( ). They were told the readings of the senior participants. A paired samples t -test was used to determine if there was a statistically significant improvement in the weight applied to the manikin's oral cavity by trainees after five attempts. Data are expressed as mean ± standard deviation, unless otherwise stated. The assumption of normality was not violated, as assessed by the Shapiro–Wilk test ( p = 0.135). Spearman's rank-order correlation was used to assess the relationship between doctors’ seniority and weight recorded at the first attempt.
The training exercise was performed by 19 operators, divided into three categories: 10 junior trainees (foundation year trainees or core surgical trainees), 5 registrars (specialist otolaryngology trainees) and 4 consultants (head and neck specialists). There was a statistically significant, strong negative correlation between operator seniority and the weight applied to the oral cavity on the first attempt (rs (17) = −0.824, p < 0.0001). Visual inspection of a scatterplot ( ) showed the relationship to be monotonic; the more experienced clinicians applied less weight on the manikin's oral cavity. All junior trainee operators applied less weight after five attempts ( ) (346 ± 90.95 g) compared to their first attempt (464 ± 85.79 g). This gave a statistically significant decrease of 118 g (standard deviation = 107.27) after five attempts ( t (9) = 3.479, p = 0.007, d = 1.1). The operators' technique was observed during the simulation, and without external coaching or verbal feedback. The junior operators made observable adaptations to their technique with repeated attempts. Observed behaviours included: extending the neck of the manikin; using the thumb of their non-dominant hand as a fulcrum for the scope; following the lateral corner of the mouth and lateral wall of the hypopharynx; and readjusting to the midline and moving to a sitting position.
Damage to the oral cavity during rigid endoscopy can lead to expensive dental work, bleeding and difficulty wearing dentures post-operatively. Moreover, it is likely that applying more force (or weight) at the proximal end of the scope is related to increased force distally, risking injury to the delicate oesophageal mucosa. This training model was easy to set up, and it familiarised the junior trainee with hypopharyngeal anatomy and the intended straight axis between the mouth, pharynx and oesophagus. The trainee needed to adjust their technique to exert less force on the oral cavity, and this increased the fidelity of the training manikin. Whilst previous work has shown that the pressure exerted in rigid oesophagoscopy is inversely correlated with experience, this study is the first to analyse the educational potential of real-time feedback in a pressure sensor model. The model uses manikins, which are available in most hospitals, making this a simple training exercise to recreate. We witnessed a statistically significant reduction in the weight applied to the manikin's upper teeth and oral cavity after five attempts. The improved readings became closer to the readings of the senior technician groups (registrars and consultants). It is possible that eliminating trainer feedback whilst in a safe environment reduces cognitive load, so that the trainee can better access theoretical knowledge. This study was limited by only having a sensor in the upper oral cavity, and in providing a manikin with a ‘standard’ oesophagus, which may not reflect the strictures, pouches, osteophytes and other anatomical variations that can be found in real life. In addition, although force sensitive resistors are good for obtaining a rough measurement of force or weight, trainees must create a calibration curve by plotting different weights against the multimeter reading (e.g. 0 kg on force sensitive resistor = reading of 1) if they want a relatable reference, although this is not necessary for plotting improvements.
Simulation without external coaching creates a low-stress environment, allowing the trainee to access theoretical knowledge. Low-cost force-sensing equipment can enhance the fidelity of manikins in rigid endoscopy. Real-time quantitative feedback can be a sufficient starting point in teaching trainees to adjust their operative technique, and may be considered in other procedures such as bronchoscopy or laryngoscopy.
|
Medicina de Familia, una especialidad amenazada (Family Medicine, a threatened specialty)
Y tampoco el RD 29/2020 se atiene a los requisitos exigidos en nuestro país para el ejercicio de la medicina de familia, tal y como viene expresado en el artículo 4 del REAL DECRETO 1753/1998, sobre acceso excepcional al título de Médico Especialista en Medicina Familiar y Comunitaria y sobre el ejercicio de la Medicina de Familia en el Sistema Nacional de Salud. Dice en el punto 2: “Para desempeñar las plazas de Medicina de Familia en centros o servicios, propios, integrados o concertados, del Sistema Nacional de Salud será requisito imprescindible poseer el título de Médico Especialista en Medicina Familiar y Comunitaria o la certificación prevista en el artículo 3 del Real Decreto 853/1993, de 4 de junio, indistintamente, sin que en ningún caso puedan establecerse preferencias derivadas del cumplimiento de uno u otro requisito”. Quizás el RD 29/2020 sea legal, aunque como vemos contraviene diferentes normas, pero desde luego es ilegítimo puesto que no tiene justificación, es injusto porque equipara a los no titulados con los titulados y contraviene el contrato social entre el estado y los ciudadanos, al no garantizar o incluso poner en riesgo, el bienestar de sus ciudadanos.
Porque no resuelve ni alivia la situación de sobrecarga y saturación de los equipos de atención primaria. La pandemia ha resaltado la grave situación por la que pasa la atención primaria en nuestro país, pero muy especialmente la medicina de familia. Desde la Sociedad Española de Medicina de Familia y Comunitaria (semFYC) hemos solicitado reiteradamente el incremento y la mejora de los recursos humanos de la Atención Primaria por medio de la elaboración y asignación de presupuestos finalistas y también -más aún en este momento- de la necesidad de introducir y desarrollar reformas en el sistema nacional de salud tanto en atención primaria como hospitalaria. Reformas que pasan además de -por dimensionar adecuadamente las plantillas- reducir la precariedad laboral, introducir incentivos al desempeño, eliminar o minimizar todas aquellas actividades que no aportan valor, maximizar las competencias de todos los estamentos profesionales que trabajan en los equipos, desarrollar los sistemas de información, … y un largo etcétera Que sepamos ninguna de estas iniciativas, a día de hoy se han puesto en marcha, prácticamente en ninguna CCAA. E Indebida Porque las contrataciones de profesionales sin especialidad conducen a un claro deterioro de la calidad asistencial y conllevan un riesgo para la seguridad clínica de los pacientes y de los profesionales en ejercicio. Entendemos que, en muchos de estos casos, se puede tratar de jóvenes con sus estudios recién terminados que, debido a una exposición del calibre que está suponiendo la epidemia de COVID19 pueden cometer errores que pesarán legalmente en su futuro desarrollo profesional. Se han dado ya situaciones esperpénticas como aquellas donde residentes de tercer o cuarto año acuden a rotar al servicio de urgencias o al centro de salud y deben ser tutorizados por alguien sin la formación postgrado oportuna y por tanto con menor capacitación. Durante la pandemia que estamos viviendo, la semFYC se ha caracterizado por el trabajo riguroso en pro de la mejora científica impulsando desde nuestros grupos de trabajo numerosísimas iniciativas informativas y formativas basadas en la evidencia, elaboradas tanto por especialistas de medicina de familia como con otros colectivos profesionales y organizaciones no gubernamentales. Y a pesar de este generoso y continuado esfuerzo y compromiso con el sistema sanitario público y los ciudadanos, recibimos esta normativa que interpretamos como un ataque contra la especialidad y una vulneración de los principios constitutivos de la Atención Primaria y la Atención Urgente. No obstante, y una vez más -al igual que lo hemos hecho siempre- semFYC pone a disposición del Ministerio y de la sociedad en general, todo nuestro conocimiento y experiencia para ayudar en estos momentos tan difíciles y de tanta gravedad sanitaria y social. Creemos que este RD debe revocarse porque entendemos que la sociedad demanda profesionales cada vez más capacitados en la resolución de sus problemas de salud y porque las instituciones sanitarias deben de ser conscientes del compromiso adquirido con la sociedad de ofrecer los mejores y más seguros servicios de atención médica. Somos conocedores de que esta situación no es nueva. Sabemos y desde semFYC hemos denunciado la situación cuando la hemos conocido, que algunas CCAA priorizan la cobertura de una plaza por alguien sin la capacitación adecuada en detrimento de la seguridad de los pacientes. 
Estamos al tanto de estas prácticas ilegales de contratación a lo largo de los últimos años, pero lo que realmente ahora lo agrava, es que este RD les otorga cobertura legal. Por todo ello desde la semFYC entendemos que NO SE DEBE contratar a médicos recién licenciados sin acceso a la Formación Sanitaria Especializada, especialistas sin título homologado o estudiantes, para realizar la labor de un especialista en Medicina Familiar y Comunitaria ni en Atención Primaria ni en Urgencias. Son profesionales no cualificados que van a desempeñar una tarea asistencial, sin formación y sin la capacitación suficiente con el peligro que eso supone para la seguridad del paciente. Es, no hay ninguna duda, una amenaza grave para la Medicina Familiar y Comunitaria, también para las otras especialidades, porque condiciona el futuro de nuestros residentes, desvaloriza el valor de la formación, deteriora la calidad asistencial, coloca en situación de riesgo a los pacientes y además, no resuelve nada. Por todo ello, desde la Sociedad Española de Medicina Familiar y Comunitaria EXIGIMOS que el RD 29/2020 se derogue o se modifique en lo relativo a la contratación de personas sin titulación. Pondremos todos los medios a nuestro alcance (recursos jurídicos, información a la población...) para que el mismo no se aplique. Tenemos que decir que NO TODO VALE, ni siquiera en una situación tan grave como la que estamos viviendo.
, , , , .
|
Implementation of flipped classroom combined with case-based learning | c2645f0d-1d4f-4437-94ff-1dd6994f39de | 8812661 | Pathology[mh] | Introduction As a bridge subject connecting basic medicine and clinical medicine, pathology is a compulsory course for worldwide medical students. The main goal of undergraduate pathology teaching is to provide students with an understanding of the functional and structural changes of disease, so that they can understand and interpret clinical signs and symptoms. Therefore, pathology teaching is central to the understanding of disease and is important to the medical education of physicians. Pathology teaching itself is a hard, complicated and challenging task, often with frustration. For years, pathology has developed from a macroscopy and autopsy based discipline to a finessed histological and molecular field with great advances. The growing advances in pathology but relatively lagging teaching models of pathology bring great challenges to pathology educators. Presently, the main teaching model in medicine curricula, including pathology in China, is still the traditional approach characterized by lecture-based classroom (LBC) and students’ in-class listening. In LBC, instructors deliver knowledge and concepts in a teacher-centered manner, and students collectively listen, take notes and passively study without understanding. Although this approach aimed at knowledge infusion helps to the memorization of basic knowledge in a limited period, many shortcomings have been found in the cultivation of the abilities of problem-solving, critical thinking, teamwork and self-active learning. Based on our experiences and previous reports, many medical students complained that the LBC-based pathology course was boring, and it was arduous to effectively learn this course. Given that medicine, including pathology, is a practical science, the LBC model cannot fulfill the requirements of the present medical education system and has been proven to be poorly effective in high-order learning abilities. Thus, innovative and modern teaching methods should be applied in pathology education to promote students’ abilities to solve real clinical problems. The flipped classroom (FC) is a brand-new pedagogical approach that inverts teacher-centered and lecture-based traditional education into student-centered active learning education. In FC progress, students study preprepared course materials pre-class without the restrictions of time and place, and participate in face-to-face interactive learning and problem solving in class, often with collaborative small group activities under the instructor’ guidance. FC leads to a shift from passive learning to active learning, facilitates higher order learning of the materials and promotes the development of various cardinal skills, thus overcoming the shortcomings of traditional LBC with desired results. The popularity of FC modality is growing in education, especially in various fields of medical education, such as anatomy, pharmacology, physiology, dermatology, and radiology. Meanwhile, case-based learning (CBL) is an in-class activity that can be applied within the FC model, where medical students work in groups to deal with questions related to disease diagnosis and clinical decision-making. However, the implementation of FC in pathology teaching has not been well explored. 
In this study, we administered FC combined with CBL in undergraduate pathology teaching to investigate the efficacy and potential advantages of this teaching model compared with the traditional LBC model, so as to provide evidence for the reform of pedagogical approaches in pathology education. Materials and methods 2.12.22.32.4 Statistical analysis Statistical analysis was performed by Statistical Product and Service Solutions (SPSS) 16.0 software (SPSS Company, Chicago, IL). Normal distribution and homogeneity of variance of the data were evaluated. The scores of 5-point Likert scale in the survey were compared between the 2 groups by nonparametric Mann–Whitney Test. The mid-term examination scores and post-quiz scores were analyzed by independent samples t test. The χ 2 test was used to analyze the sex and nationality match. The statistical data were presented as mean ± standard error of the mean (SEM). P < .05 was considered statistically significant. Participants This study was conducted at Anhui Medical University (Hefei, Anhui Province, China) in November 2019 (2019/2020 academic year), and a total of 117 third-year students majoring in clinical medicine were enrolled here. In their first 2 years of study, the students had completed Human Anatomy, Histology-Embryology, Human Physiology, Biochemistry, Molecular Biology, Medical Immunology, Medical Microbiology, Cell Biology and Human Parasitology courses, and mastered the basic knowledge of the subjects of fundamental medicine. Here, the pathology curriculum consists of 2 parts. The first part interprets the principles of general pathology, including cell injury, cell death and adaptations, tissue repair, hemodynamic disorders, inflammation, and neoplasia. The second part proceeds to specific disease processes as they affect particular organs or systems. The students who had not previously participated in inverted classroom had learned the first part of the pathology curriculum, and entered the mid-term examination of pathology to assess their previous performance on pathology learning. Participants were assigned to 2 groups: the FC group, wherein students received FC combined with CBL approach (n = 59); and the LBC group, wherein students received the traditional LBC approach (n = 58). This study was approved by the Institutional Review Board and Ethics Committee of Anhui Medical University (20190197), and all participants submitted their informed consent before this study. Study design The ninth edition of Pathology published by People's Medical Publishing House was used for pathology teaching. Two sections in the textbook (cardiovascular and respiratory system diseases) were chosen to implement FC combined with CBL teaching modality in this study, with a total of 12 class hours. The FC process was administered according to the guidelines described by Yang with minor modifications. The study was performed complying with the flowchart as stated in Figure . In the pre-class section of FC, the instructor briefed students on the FC model and provided study materials on the course website one week before the class. The course materials included learning purposes and requirements, tutor-generated annotated PowerPoints (PPTs), web-based video lectures and typical case handouts. Participants were asked to study the learning materials on their own time and prepare their PPTs to explain the learning points. 
The in-class session started with a brief outline of lectures by the instructor, followed by a brief presentation and discussion on students’ own PPTs within pre-assigned groups (n = 7–8). Students in groups then collaborated to take turns interpreting and discussing the real clinical cases proposed by instructors. These cases of cardiovascular or respiratory system diseases were not disclosed to students until the class convened. The instructor provided guidance and feedback during the progress of students’ interpretations and indicated the feature and atypical findings for each case. Finally, the instructor summarized the concepts and went over the arduous questions raised by students. In the traditional LBC, students were encouraged to preview the textbook before the class and attended a didactic lecture carried out by the instructor. A traditional question-and-answer session was included in LBC class. The classes in FC or LBC were conducted by the same instructor to guarantee the consistency of the teaching content and objectives in the 2 teaching approaches. Data evaluation When completing each section of teaching content, students in both the FC and LBC groups were asked to enter a post-class quiz to evaluate their learning outcomes. All items in the post-class quiz were A2-type questions proposed to evaluate students’ mastery of basic theoretical knowledge and students’ clinical case analysis ability. Based on Bloom's taxonomy of cognitive learning objectives, the categories of “remember” and “understand” collapsed into “basic theoretical knowledge,” and items in other categories were regarded as “clinical case analysis”. Moreover, an online questionnaire using WeChat was applied to collect data on students’ feedback and perceptions of the 2 teaching models. The questionnaire was modified based on previous references with verified reliability and validity, and was composed of 11 Likert-type items covering both positive and negative aspects (Fig. ), with a 5-point scale (1 = strongly disagree, 2 = rather disagree, 3 = neutral, 4 = rather agree, 5 = strongly agree). Results 3.13.23.3 Students’ self-perceived competence and opinions in the questionnaires Students who participated in this study finished the online questionnaires on their self-perceived competence and opinions towards FC or LBC teaching modality. The response rates of students in both groups for the questionnaires were 100%. In Figure , compared with LBC, more students believed that the FC approach improved their learning motivation ( P = .019), with no difference in increasing self-active study ( P = .208). In addition, the FC model was believed to significantly enhance the students’ abilities of comprehension of knowledge ( P < .001), critical thinking ( P = .024), and patient management ( P < .001) compared with the LBC model, but failed to benefit the memorization of fundamental knowledge ( P = .183). These results were consistent with students’ performance in post-quizzes, where FC modality significantly raised students’ scores in higher levels of cognitive abilities (ie, the ability of clinical case analysis), rather than scores in basic theoretical knowledge. Moreover, this study revealed that FC significantly improved students’ teamwork compared with LBC ( P < .001). The positive responses led to a higher rate of satisfaction with FC than LBC ( P = .006), and students agreed that FC rather than LBC should be popularized in pathology and other subjects ( P < .001). 
For negative items, more students agreed that FC increased pre-class burden than LBC ( P < .001), whereas no significant difference was found in the students’ opinion on in-class pressure between the 2 groups ( P = 0.116). Baseline characteristic A total of 117 students were assigned to the FC group (n = 59) or LBC group (n = 58). In Table , there was no significant difference between the FC group and the LBC group in terms of age ( P = .753), sex ( P = .775), and nationality ( P = .662), suggesting a good demographic match between the 2 groups. The analysis of the mid-term examination scores before interventions was performed to evaluate whether the previous performance on pathology learning of the students from the 2 groups was comparable. In Figure A, no apparent difference in the mid-term examination scores of pathology between the FC and LBC groups ( P = .718) indicated that students’ learning levels and abilities in the 2 groups were nearly equal. Students’ performance in the post-class quizzes The efficacy of the 2 teaching modalities was assessed by a post-class quiz, which was conducted when finishing the each section of teaching content. The response rates of students in both groups in 2 post-class quizzes were 100%. In Figure B, the students in the FC group gained higher scores (the average of 2 quizzes) in the post-class quizzes than those in the LBC group (78.73 ± 1.53 vs 65.52 ± 1.48, P < .001). Furthermore, the scores related to basic theoretical knowledge in the FC and LBC groups were 42.46 ± 0.66 and 40.52 ± 0.88, respectively, with no statistically significant difference ( P = .089). However, higher scores regarding the questions of clinical case analysis were observed in the FC group than in the LBC group (36.27 ± 1.22 vs 25.01 ± 1.26, P < .001). Our findings revealed that both the FC and LBC models were suitable for passing on basic theoretical knowledge, whereas the FC model exhibited greater advantages than the LBC model in developing the higher level of cognitive abilities. Discussion With the rapid development of medical and science techniques, the traditional model of pathology education cannot fulfill the needs of current medical education systems. Recently, there is a shift in education methodology from traditional teacher-centered didactic lectures to student-centered active learning approaches, including FC and CBL, which are becoming increasingly popular in medical education. In this study, we implemented FC combined with CBL in the teaching of pathology, where students were required to study preprovided course materials pre-class, followed by clinical case-based interactive group discussion in-class. To date, this innovative modality has not been well examined in undergraduate pathology education. We compared students’ performance and perceptions of this format with those of the traditional LBC teaching model. We found that students preferred the CBL-based FC modality as a whole, which can realize all-round teaching aims and promote students’ various cardinal skills. We explored the effectiveness of FC combined with CBL in the pathology education of undergraduate medical students. Here, we revealed that the scores in post-class quizzes in the FC group were much higher than those in the LBC group, which was mainly attributed to the increase in scores on case-analysis type of questions, but not theoretical knowledge-related questions. 
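Purely as an illustration of how between-group comparisons of this kind could be computed, the sketch below uses Python/SciPy with made-up per-student data. It is not the authors' SPSS analysis; the group means, counts and variable names are assumptions chosen only to mirror the study design (Likert items compared with Mann-Whitney, quiz scores with an independent t test, and categorical variables with chi-square).

import numpy as np
from scipy import stats

# Hypothetical per-student data for the FC (n = 59) and LBC (n = 58) groups.
rng = np.random.default_rng(0)
fc_quiz = rng.normal(78.7, 11.8, 59)     # post-class quiz scores
lbc_quiz = rng.normal(65.5, 11.3, 58)
fc_likert = rng.integers(3, 6, 59)       # one 5-point Likert item (1-5)
lbc_likert = rng.integers(2, 6, 58)

# Independent-samples t test for quiz scores.
print(stats.ttest_ind(fc_quiz, lbc_quiz))

# Mann-Whitney U test for the ordinal Likert responses.
print(stats.mannwhitneyu(fc_likert, lbc_likert))

# Chi-square test for a categorical variable such as sex (illustrative counts).
sex_table = np.array([[30, 29],   # male: FC, LBC
                      [29, 29]])  # female: FC, LBC
print(stats.chi2_contingency(sex_table)[1])  # p value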
This suggests that although both FC and LBC may improve the acquisition of knowledge, the former makes students better understand and apply the new knowledge. Our findings were consistent with previous studies that the FC model applied in other medical subjects fostered students’ abilities in analyzing and solving clinical problems, thus improving the higher level of cognitive abilities. Multiple factors in the preparation and implementation of FC may contribute to this advantage of FC. In personalized pre-class studies, students in the FC group can arrange their self-paced study plans and learn poorly mastered knowledge multiple times. In in-class studies, students are encouraged to use what they learned pre-class to solve clinical problems during group discussions. Apart from simple knowledge mastery, this output process of FC emphasizes to foster the abilities of application, analysis and synthesis. Therefore, the FC model bridges the gap between pre-class knowledge learning and in-class cultivation of analyzing and solving abilities, and helps to connect theory to practice much better. We then compared students’ cognitions and opinions on the FC model with those on the LBC model. Herein, stronger learning motivation was considered by students from FC group than LBC group, but no apparent difference in students’ opinion on self-active study. To perform well in group discussion in-class, students often have a better motivation for studying the pre-provided course materials and searching for additional web materials in pre-class study. However, since it is hard to implement supervision and management in students’ pre-class study, the consensus of self-active study can be diverse from person to person, thus causing students’ different performance in group discussion. Our findings suggest that instructors should convey the intention, value and implementation of the FC model in detail to all students before class and truly inspire students’ enthusiasm for self-active study without strict supervision from teachers. Additionally, FC was believed to be helpful to improve students’ abilities of knowledge comprehension, critical thinking and patient management, as reflected in students’ performance in the post-class quizzes where students in the FC group gained more scores than students in the LBC group, especially the scores in case-analysis type of questions. These findings are similar to previous studies that the FC approach can produce greater learning gains than the LBC model in many medical subjects, such as pharmacology, radiology, and anatomy. Furthermore, an improvement of students’ teamwork ability was found with the application of FC. Compared to LBC where there is only teacher-student interaction, the in-class study in FC using not only teacher–student interaction but also student-student interaction may contribute to promoting the teamwork ability, which is an essential ability to patient management for medical students. Overall, more students from FC group felt satisfaction with the teaching model than those from LBC group, and it was agreed that FC model should be popularized in the entire teaching of pathology since FC improved their wide-spectrum cognitive abilities. However, although FC had the above advantages over LBC in teaching pathology, students in the FC group held the view that the pre-class study took up an amount of their spare time and gave negative feedback on the pre-class burden in FC. This negative feedback might be attributed to the following factors. 
First, compared to LBC, wherein students may spend more time after class to review and do homework, FC participants mainly perform their study in pre-class and in-class time. FC as a student-centered and active learning method requires additional time for self-study and preparation for in-class presentation and discussion. Second, since the traditional teacher-centered teaching method has been accustomed by students, the participants who had not previously participated in FC are unfamiliar with this novel teaching approach and do not know how to effectively learn in pre-class study. We speculate that this negative feedback on pre-class burden may be partially relieved with the adaptation to FC modality and prolonged time for pre-class study (eg, 2 weeks). It is well known that group discussion in-class as a vital element of FC modality often brings more in-class pressure, which might be a drawback of FC reported in previous studies. Intriguingly, students’ opinion on in-class pressure is not significantly changed with the application of FC combined with CBL. This phenomenon may be due to the usage of high-yield clinical cases, because the small group study via case-based discussions has been shown to provide a nonintimidating, interactive and supportive environment to foster students’ clinical reasoning and increase the overall enjoyment of learning. In this study, similar to previous findings, we believed that the students who took good advantage of preprovided course materials would participate in the case-based discussions with more confidence and less anxiety, thus causing no raised in-class pressure. Limitations Several limitations need to be considered. First, this study was conducted for a relatively small cohort of participants. Studies with more participants enrolled in may help to further verify the effectiveness and advantages of FC in pathology teaching. Second, the application of FC usually consumes generous human, material and financial resources. We spent more faculty time in preparation of learning resources, including pre-recorded video, annotated PPTs and typical clinical cases, as is required for FC teaching modality. For the above reason, we selected only 2 sections in the pathology textbook for the implementation of FC in this pilot study. Full preparation, overall consideration and careful planning are needed before applying this teaching model in the entire teaching of pathology. Third, we focused on pre-class and in-class activities, but did not extend the study to after-class activities, which are conducive to consolidating the prior learned knowledge by continuous practice. The after-class activities can be achieved in a structured manner through additional programs in future. Conclusions In conclusion, our findings suggest that FC combined with CBL as a promising and effective modality may be helpful to improve students’ performance and promote their multiple cardinal skills during undergraduate pathology education. Further optimizations in course design, course management and course evaluation can fulfill the application of this innovative approach in pathology and other medicine curricula in medical colleges. Conceptualization: Li Cai, Rong Li. Data curation: Li Cai, Rong Li. Funding acquisition: Li Cai. Investigation: Li Cai, Yan-li Li, Xiang-yang Hu. Writing – original draft: Yan-li Li. Writing – review & editing: Li Cai. |
Harnessing Trichoderma Mycoparasitism as a Tool in the Management of Soil Dwelling Plant Pathogens | 97683c38-b204-4f4c-9093-3b8817edb2f5 | 11663191 | Microbiology[mh] | Near 2050, there will be 9.1 billion people on the planet, meaning that overall food production must increase by about 70% for food security . Plant diseases have been a source of worry for humankind ever since the development of agriculture. These diseases played a significant part in the depletion of natural resources and were responsible for 16% of global crop yield losses . Soil-borne plant pathogens economically affect crop production in tropical, subtropical, and temperate regions . There are currently 2 million tons of pesticides used worldwide, with herbicides making up 47.5% of usage, insecticides making up 29.5%, fungicides making up 17.5%, and other pesticides making up the remaining 5.5% . The top 10 countries in the world that use pesticides are China, the USA, Argentina, Thailand, Brazil, Italy, France, Canada, Japan, and India. It was observed that pesticide consumption worldwide increased by 20% over the past ten years . Agricultural chemical pesticides are crucial for emerging nations to protect crops from insect pest attacks and increase crop yields. Pesticides can boost agricultural yield via direct control of phytopathogens. Still, their residual toxicity harms the environment, non-target organisms, biodiversity, food chain, human health, and food safety. Recent research on the environmental fate of chemical pesticides on soil, land, water, and living beings has been spurred by worry about the environment and human health . Therefore, an urgent need is to explore and exploit biological control agents to manage phytopathogenic infections as an alternate and effective strategy. Biocontrol is the term used to describe the function of naturally existing organisms in integrated pest management in reducing the number of plant pests . Trichoderma species are the basis for more than 60% of the biofungicides currently licensed worldwide, making them the most effective biofungicides used in contemporary agriculture . These fungi live in soil mostly confined to rhizospheric regions of plants. They colonize plant material like grains, leaves, and roots . Trichoderma is the market leader for fungal bio-control agents on a global scale . Due to its dual ability to prevent disease and act as soil compost, it has gained a unique place in agriculture as a potent biocontrol agent, plant growth stimulant, and soil fertility improver. Due to its rhizosphere competence, competitive saprophytic ability, capacity to manufacture or induce hormone production in plants, ability to release nutrients from the soil, and ability to increase root system architectural development, it acts as an effective plant growth promoting fungi (PGPF) . Trichoderma can produce a variety of fungal enzymes, such as chitinases, glucanases, cellulases, and hemicellulases, responsible for the toxic action of fungi against soil-borne plant pathogens . These enzymes are also used in the postharvest disease management of papaya, apple, tomato, pear, mango, banana, potato, and berries . This genus has nine species, initially described in 1969 by Rifai and Webster . The genus was further divided into five divisions based on conidiophore branching by Bissett (1991). Trichoderma harzianum and T. viride are most frequently used as bio-control agents. The present review attempts to analyze and evaluate the biological control potential of Trichoderma spp. 
in managing fungal plant pathogens.
Trichoderma: Classification, Characteristics, and Benefits as a Biocontrol Agent
Trichoderma represents the asexual stage (anamorph), whereas Hypocrea corresponds to the sexual stage (teleomorph), according to their taxonomic classification . In its asexual form, it falls under the division Deuteromycotina, while in the sexual stage, it is classified under Ascomycotina. Trichoderma species (spp.) are widely distributed across agricultural regions in all climatic zones, thriving at temperatures ranging from 25 to 30 °C, due to their saprophytic nature . For instance, Trichoderma viride and Trichoderma polysporum prefer mild temperatures, while Trichoderma harzianum prefers warm climatic conditions. These fungi are easily identifiable due to their unique smells, attributed to the volatile compound δ-lactone 6-pentyl-α-pyrone (6-PP) . With their competitive saprophytic ability (CSA), Trichoderma spp. are also found in the rhizosphere of plants, where they induce systemic resistance against diseases and enhance plant growth and development . Trichoderma spp. utilize a range of substances, including carbon and nitrogen sources, for their sporulation activity , and produce abundant powdery masses with green conidia . While the presence of Trichoderma citrinoviride has been reported in Southeast Asia, it is yet to be discovered in India . Most Trichoderma spp. prefer acidic environments; however, they can adapt to a wide pH range, from 2.0 to 13.0 . Typically, the conidial color morphology of Trichoderma spp. is green but can be grey, white, or yellow, depending on the species . The dominant saprophytic ability of these fungi allows them to compete with other soil organisms and colonize plant roots effectively. Additionally, these fungi also produce secondary metabolites and enzymes that promote plant growth as well as enhance disease resistance against pathogens .
Identification of Trichoderma Isolates
The genus Trichoderma includes a diverse range of fungi that exhibit a range of colony morphologies depending on the culture media used for their growth. For instance, when grown on potato dextrose agar (PDA) at 28 °C for seven days, Trichoderma cultures from soil samples displayed green pigmented colonies. Conversely, colonies from rhizospheric isolates grown at 25 °C and 30 °C appeared pale or yellowish, exhibiting rapid growth and conidia dispersion. The fungi can also be identified based on the arrangement of conidia and the phialides, which are projections of the conidiophores. Phialides of these fungi, in particular, were observed to be ellipsoidal, oblong, and bowling pin-shaped . Additionally, a study by Sekhar et al. (2017) reported that ten isolates from the rhizosphere of groundnuts exhibited various morphological and microscopic traits, including colony color, reverse color, and the shape and features of conidia, phialides, and conidiophores .
Culture Media for Trichoderma
Selective Media for Trichoderma (TSM)
Trichoderma selective medium (TSM) is recognized as the gold standard for the quantitative separation of Trichoderma spp. from the soil. To allow the fungus to grow quickly and sporulate, the medium contains low glucose and specific fungal inhibitors, including pentachloronitrobenzene, p-dimethyl amino benzene diazo sodium sulfonate, and rose bengal. At the same time, chloramphenicol is used to stop bacterial development .
Trichoderma Selective Media (TSM)
Trichoderma harzianum Selective Medium (THSM)
The ingredients and amounts required for T.
harzianum selective medium are the same as those mentioned for TSM, including the media preparation procedure. However, antimicrobial agents such as chloramphenicol, streptomycin, quintozene, and propamocarb are added to the medium to isolate a pure colony of Trichoderma sp. . For instance, the media after autoclaving are supplemented with 0.25 g of chloramphenicol, 9.0 ml of streptomycin, 1.2 ml of propamocarb, and 0.2 g of quintozene. The use of THSM makes the comparison of aggressive and non-aggressive Trichoderma groups possible. The ingredients and amounts required for Trichoderma selective medium are MgSO 4 ∙7H 2 O (0.2 g), K 2 HPO 4 (0.9 g), KCl (0.15 g), NH 4 NO 3 (1.0 g), glucose (3.0 g), rose bengal (0.15 g), agar (20 g), chloramphenicol (0.25 g), p-dimethyl amino benzene diazo sodium sulfonate (0.3 g), pentachloro nitrobenzene (0.2 g), and distilled water (1.0 L). For media preparation, the ingredients are mixed properly and then autoclaved for 15 min at 121 °C. The mixture is then supplemented with 0.25 g of chloramphenicol and 0.2 g of pentachloro nitrobenzene. To avoid solidification, the media should be maintained or stored at 45 °C. Trichoderma spp. employ several strategies to function as biocontrol agents . They can rapidly multiply or utilize available food sources more efficiently than soil-borne pathogens, outcompeting them and seizing control through efficient nutrient competition. They may also engage in mycoparasitism/hyperparasitism, feeding on a pathogenic species. Additionally, Trichoderma spp. also secrete secondary metabolites that inhibit or significantly delay the growth of infectious soil-borne pathogens in their vicinity, a process known as antibiosis . For example, various secondary metabolites with antimicrobial potential have been identified, such as gliotoxin from T. lignorum , gliovirin from T. virens , alamethicin F30, a peptaibol from T. viride , and harzianolide, an antifungal butanolide compound from T. harzianum . Many other such metabolites have been extensively reviewed in the report by Khan et al. (2020) . The secondary metabolites secreted by Trichoderma spp. exhibit antifungal and antimicrobial effects through various mechanisms of action. These include inducing cytotoxicity by producing toxins, inhibiting spore germination, hyphal elongation, and mycelial growth, as well as suppressing the formation of sexual structures. In bacteria, these metabolites interfere with cell division and cause cell wall degradation . Furthermore, Trichoderma spp. can induce plants to produce chemicals that confer localized or systemic resistance. Finally, their ability to grow endophytically supports the growth of plants (Fig. ). Trichoderma species either actively attack their hosts in their defense mechanisms or succeed by stopping the pathogen from proliferating in the host’s surroundings. They use lytic enzymes, proteolytic enzymes, ABC transporter membrane pumps, diffusible or volatile metabolites, etc. Ascomycete, basidiomycete, and oomycete fungi, as well as nematodes, are all controlled by Trichoderma species . Protease, chitinase, glucanase, tubulins, proteinase, xylanase, monooxygenase, galacturonase, cell adhesion proteins, and stress tolerance genes are a few significant categories of biocontrol genes that are readily isolated, cloned, and reported. These genes carry out particular tasks in a biocontrol mechanism, including cell wall disintegration, hyphal growth, stress tolerance, and parasite activity.
The structural proteins known as tubulins, composed of microtubules, are useful for analyzing the composition of pathogen cell walls. Chitinase facilitates the hydrolysis of glycosidic linkages, while glucose oxidase catalyzes the conversion of D-glucose into D-glucono-1, 5-lactone, and hydrogen peroxide, all of which have antifungal properties. Xylanase assists in the hydrolysis of hemicellulose, which is a major component of plant cell walls . Genes related to biocontrol and mycoparasitism are triggered by several signal transduction pathways, such as the cAMP pathway and mitogen-activated protein kinase (MAPK) cascades . Particularly important in the heterotrimeric G protein signaling pathway is the MAP-kinase TVK1, which was identified in T. virens and its orthologs in T. asperellum (TmkA) and T. atroviride (TMK1) . TGA1 is crucial in managing coiling around host hyphae and generating antifungal compounds. When TGA1 is missing, the growth of host fungi is significantly more hindering . TGA3, on the other hand, is essential for biocontrol since strains created after the homologous gene was deleted were not pathogenic. Recently, a significant function in the biocontrol of T. virens has been attributed to the homolog of the VELVET protein, which is currently mostly recognized as the light-dependent regulator protein . Secondary MetabolitesA Crucial Aspect of Successful Biocontrol: MycoparasitismSynthesis of Antibiotics and Additional Antifungal SubstancesPlant Resistance Induced by Trichoderma: the Battle for Space and Nutrients Amid Biological StressAntibiosis It is an antagonistic relationship between two bacteria, where the release of antibiotics or metabolites by one negatively impacts the other. According to chemical and analytical reports, 373 distinct secondary metabolites, including non-volatile and volatile terpenes, peptaibols, pyrones, and compounds containing nitrogen, were obtained from Trichoderma species and showed great potential for the production of antibiotic and secondary metabolites . Some of the examples that are effective against the target pathogen in situ are alkyl pyrones, trichodermin, diketopiperazines, viridin, polyketides, isonitriles, peptaibols, and sesquiterpenes isolated from Trichoderma spp., and 6-Pentyl-2H-pyran-2-one . The Trichoderma fungus is incredibly adept at occupying many ecological niches and its varied metabolism allows it to produce various secondary metabolites and catabolize many substrates. The secondary metabolites include about 370 distinct kinds of chemical compounds with antagonistic effects which play a crucial role in safeguarding plant health . The peptide antibiotic Paracelsin was one of the earliest secondary metabolites from Trichoderma spp. to be described. Trichoderma spp. produces secondary metabolites, including antifungal metabolites from some chemical com-pound classes, depending on the strain. Ghisalberti and Sivasithamparam divided them into three groups: Water-soluble substances, such as koningic or peptidic acid, volatile antibiotics, which include δ-lactone 6-pentyl-α-pyrone (6-PP) and most isocyanide derivatives . According to Vinale et al. (2008), peptaibols are linear oligopeptides with 12–22 amino acids that are rich in α-aminoisobutyric acid, N-acetylated at the N-terminus, and amino alcohol phenylalaninol (Pheol) or tryptophanol (Trpol) at the C-terminus . The most investigated secondary metabolites are peptaibols, polyketides, pyrones, terpenes, and molecules like diketopiperazine (Table ). 
Mycoparasitism is a condition in which an antagonistic fungus, known as a mycoparasite, parasitizes another fungus, referred to as the host . Necrotrophic mycoparasites are the general classification for the fungus of the Trichoderma genus . Around 75 Trichoderma spp. are known to have a strong propensity to become mycoparasitic . Trichoderma necrotrophs physically combat fungal diseases by vigorously branching and coiling around the host’s hyphae, as well as by chemotactically attaching to the host and sensing prey. This action is known as mycoparasitism. Trichoderma can also produce pathogen appressoria homologs or penetrate using structures resembling appressoria . It produces antifungal compounds and hydrolytic enzymes that chemically degrade and break down the cell wall of the pathogen, eventually leading to the death of host in the final stage of the mycoparasitic interaction . The necrotrophic mycoparasitic effect was noted in the study by Błaszczyk et al. when T. atroviride AN240 and T. viride AN255 strains interacted with F. graminearum and F. avenaceum , respectively. Similarly, it was found that the R. solani hyphae benefited from the mycoparasites T. virens and T. harzianum . Furthermore, the mycoparasitic T. cerinum Gur1 strain reduced chickpea wilt disease in vivo . Most Trichoderma strains primarily produce polyketides and peptidaibols, which are volatile organic chemicals . Approximately 80% of the entries in the “Peptaibiotics Database” are related to different species within the genus Trichoderma , making it one of the most abundant sources of peptaibols . Peptaibols are categorized as antimicrobial polypeptides with a molecular weight between 500 and 2200 Da. They are rich in non-proteinogenic amino acids, especially alpha-aminoisobutyric acid and isovaline. Peptidoglycan synthesis is carried out by non-ribosomal peptide synthetases . Three major non-ribosomal peptide synthetase genes, tex1, tex2, and tex3 have been recognized in the Trichoderma genomes . Depending on the strain, Trichoderma spp. produce secondary metabolites that include antifungal substances from several chemical class components. The polyketides produced by these fungal species are a structurally diverse class of physiologically active chemicals found in bacteria, plants, and fungi . These include pigments, mycotoxins, and antibiotics (such as macrolides and tetracyclines) . Numerous Trichoderma spp. produce secondary metabolites classified as pyrones, anthraquinones, terpenoids, and epipolythiodioxopiperazines . The terpenoids produced by the Trichoderma spp., include tetracyclic diterpenes (like harziandion), sesquiterpenes (like trichothecenes, like trichodermin and harzianum A), and triterpene viridian . Additionally, T. viride , T. harzianum , and T. koningii manufacture the volatile antibiotic 6-phenyl-α-pyrone, which is responsible for the biological barrier against F. oxysporum having the unique coconut odour . Pathogens can be deprived of space and nutrients when antagonistic fungi invade shared habitats such as rhizospheres, plant tissues, or phyllospheres . This depends on their traits, the degree to which the host plant has colonized them, and the degree to which they have adapted to their environment . Trichoderma should be common in a niche where there is a rivalry with other fungi and have effective plant colonization strategies to effectively compete with diseases for nutrients and habitats. In terms of glucose and sucrose, the Trichoderma fungus grows quite quickly . 
Compared to other microbes, the fungus from the genus Trichoderma is far more adept at mobilizing and absorbing nutrients from the soil . This process yields citric, gluconic, fumaric, and other organic acids, which decrease the soil’s pH and encourage the solubilization of phosphates and microelements like manganese, iron, and magnesium . Of particular importance in the competitive dynamics of the plant–microbe and Trichoderma tripartite interaction is the production of siderophores, which are formed during iron-deficiency stress. Siderophores are low molecular weight compounds with a high affinity for iron (Fe) that help the fungi compete for iron by binding to the insoluble form (Fe3+) . The insoluble Fe3+ is later converted into Fe2+, which is readily taken up by microbes and plants . Based on their chemical structure and iron coordination sites, microbial siderophores are typically categorized into three classes: hydroxamate, catecholate, and carboxylate . This process of iron bioavailability under stress due to siderophore production and iron solubilization ultimately triggers plant resistance by enhancing nutrient availability and strengthening the plant’s defenses against pathogens. Moreover, Trichoderma spp. can also indirectly influence dangerous microbes through plants by triggering their systemic or local defensive mechanisms . Different elicitors secreted by the cells of microbes and plant tissues cause the induction of plant resistance. There are two categories of elicitors: (1) race-specific elicitors, which only cause gene-for-gene type defence in specified host cultivars; and (2) generic elicitors delivered from pathogenic and non-pathogenic strains, which simultaneously induce nonrace-specific defence in both host and non-host plants. The recognition of conserved domains, such as the microbe- or pathogen-associated molecular patterns (MAMP/PAMP), is the major foundation for the plant defence response (Fig. ) . These domains activate two types of innate immunity in plants: PAMP-triggered immunity (PTI) and effector-triggered immunity (ETI) .
Trichoderma as a Bioremediation Agent
Trichoderma spp. thrive in plant roots and soil, where they combat fungal infections and demonstrate resilience against most agrochemicals. Furthermore, they exhibit high resistance to various environmental pollutants such as tannery effluents, organometallic compounds, heavy metals, and hazardous chemicals like cyanide. As a result, these fungal genera are well suited to investigation as a genetic resource for use in the phytoremediation of harmful contaminants . Trichoderma bioremediation techniques for inorganic contaminants, including heavy metals and others, can be grouped into four categories: biosorption, bioaccumulation, biovolatilization, and phytoremediation. Biosorption is the ability of biological materials to extract heavy metals from wastewater via physicochemical or metabolic absorption mechanisms. It entails attaching to free groups of negatively charged molecules in a variety of biopolymers that make up the microbial cell wall in a metabolism-independent manner. For example, using the batch approach, the dried biomass of Trichoderma sp. was tested for removing harmful heavy metal ions at concentrations ranging from 0.5 to 2.0 mg/L at different pH levels . The biomass of T. harzianum was found to significantly absorb Cr (VI) ions from the aqueous solution.
FTIR spectroscopy revealed that the amine groups in the chitin and chitosan of the fungal cell wall play a crucial role in metal binding . Bioaccumulation, in contrast, is an active metal-removal process that relies on the energy-dependent metal influx mechanisms of living cells. The ability to withstand and accumulate heavy metals like cadmium, zinc, copper, and arsenic in vitro has been demonstrated for various Trichoderma species. Trichoderma spp. have been shown to improve the solubility of soil micronutrients like Zn, Cu, Fe, and Mn. Cu (II) binding at the cell wall surface was demonstrated by T. viride as a metal tolerance mechanism. At pH 5.0 and 100 mg/L of Cu (II) at 30 °C, a maximum of 80% of the copper was eliminated in 72 h. Copper was rendered less dispersed and accessible in the media by binding to the cell surface, which decreased the metal’s toxicity . Biovolatilization involves the enzymatic conversion of organic and inorganic metalloid compounds into their volatile by-products, a process known as biomethylation. T. viride and T. asperellum have been shown to volatilize arsenic in liquid environments. The fungal strains Rhizopus sp., Neocosmospora sp., Trichoderma sp., and a sterile mycelial strain were found to be the most effective at removing arsenic from soil . Microbe-assisted phytoremediation, also known as phytobial remediation, uses plants and microbes together to remove toxins biologically. Trichoderma spp. aid in phytoextraction processes that promote the absorption of other ions, including nitrates, in the root area and the uptake of certain hazardous metals and metalloids. Pteris vittata , an arsenic-accumulating fern, grows more roots when T. harzianum strains are used because they can detoxify potassium cyanide (Table ). Biological control is a method of reducing crop pests by employing helpful microbial organisms. Among the many beneficial microbes, Trichoderma sp. is frequently utilized as a biocontrol agent against various plant diseases. Trichoderma spp. are active rhizosphere colonizers that also infiltrate cortex cells in roots and live as endophytes. Examples include Trichoderma harzianum , T. longibrachiatum , T. virens , and T. asperellum , etc. These species contribute significantly to plant growth and metabolism, promoting increased shoot and root lengths, enhanced overall plant growth, and improved seedling vigor and emergence in many crops such as beans, brinjals, cauliflower, chickpeas, cucumbers, lentils, pigeon peas, radishes, tomatoes, and rice. Figure provides a schematic representation illustrating the plant growth-promoting activities of Trichoderma species. Trichoderma antibiotics such as viridin and gliotoxin, enzymatic breakdown of cell walls, and physiologically active heat-stable metabolites like ethyl acetate are all engaged in preventing disease and promoting plant development. Examples of important categories of biocontrol genes include xylanase, chitinase, tubulins, protease, glucanase, proteinase, galacturonase, genes encoding cell adhesion proteins, monooxygenase, and stress tolerance genes, which are easily isolated, cloned, and described. These genes carry out particular tasks within the biocontrol mechanism. Tubulins are microtubule-derived structural proteins that facilitate the examination of the content of pathogen cell walls. Chitinase facilitates the hydrolysis of glycosidic bonds. D-glucose is converted by glucose oxidase into D-glucono-1,5-lactone and hydrogen peroxide, both of which have antifungal qualities.
One important component of plant cell walls, hemicellulose, is broken down with the help of xylanase .
Effectiveness of Trichoderma Species Against Fungi Found in the Soil
Future Prospects
Trichoderma spp. are used as plant growth enhancers and antagonistic fungal agents against various pests. Faster metabolic rates, antimicrobial metabolites, and physiological conformation are important elements that primarily lead to these fungi's antagonistic interactions. It is commonly recognized that Trichoderma fungi are antagonistic to several bacteria, invertebrates, and other soil phytopathogens . The efficiency of Trichoderma species against soil-dwelling fungi is shown in Table .
Reaction Sensitivity to Agrochemicals
The efficacy of the bioagents is decreased by the toxic character of the fungicides used in crop production technologies. Consequently, researchers have investigated Trichoderma sensitivity and tolerance . Studies have been conducted on the impact of different fungicides in combination with Trichoderma species on treating diseases holistically. Trichoderma spp. have been proven to be more resistant to broad-range fungicides than many other soil microbes because of their capacity to colonize pesticide-treated soil more quickly . Because numerous unusual contaminants can be treated simultaneously and have a wider range of uses, Trichoderma alone, or in conjunction with bacteria or immobilized formulations, can show enormous potential. This will boost the overall cost-effectiveness of the method.
Timber Preservation
Ejechi studied T. viride ’s capacity to prevent G. sepiarium and Gloeophyllum sp. from decomposing obeche (Triplochiton sceleroxylon) wood over an 11-month period, during the wet and dry seasons in a tropical climate. T. viride successfully suppressed the decay fungus through mycoparasitism and nutritional competition . Trichoderma isolates can inhibit and kill wood decay fungi by the release of volatile organic compounds, with production varying based on the specific Trichoderma isolate . Trichoderma fungi are found on freshly cut sawn wood of many different softwood and hardwood species as well as in soils across all latitudes. The fungi that develop on wood surfaces have the potential to reduce the value of sawn objects by reducing the functional aesthetics of lignocellulosic materials. Under favorable conditions, the Trichoderma fungi can even cause soft rot in wooden materials because they produce a wide variety of enzymes, including cellulase, hemicellulase, xylanase, and chitinase . Overall, the diverse capabilities of Trichoderma fungi highlight their significant impact on both wood preservation and degradation processes.
Tolerance to Abiotic and Biotic Stresses
Trichoderma species, an excellent natural protein source, can help plants withstand biotic and abiotic stress conditions. It has also been reported that the cloned and characterized hsp70 gene from the T. harzianum T34 isolate encodes a protein that, when expressed and produced in Arabidopsis, increases tolerance to heat and other abiotic stimuli. This gene codes for a protein product that allows the fungus to resist heat and raises its tolerance of other stresses, such as oxidative, salt, and osmotic stress, to higher levels. With the recent sequencing of the genomes of seven Trichoderma species, there is promising potential for developing transgenic plants that may provide effective resistance to changing climate conditions . Numerous studies have shown that Trichoderma spp.
can tolerate and detoxify environmental contaminants from contaminated areas. The synthesis of amylases from T. harzianum , cellulases from T. reesei , 1,3 β-glucanases from T. harzianum , T. koningii , and chitinases from T. aureoviridae and T. harzianum is well known. The genus Trichoderma is a good source of various hydrolytic and industrially important enzymes. They have been used in the manufacture of extracellular gold nanoparticles using T. koningii and the fabrication of silver nanoparticles (AgNPs) using T. reesei . However, further investigation is needed to examine the long-term consequences on the stability and rehabilitation of the contaminated sites before these interactions between Trichoderma and plants can be fully exploited. The survival of Trichoderma depends on the diverse metabolic capabilities of this group of fungi. A deeper understanding of these processes will lead to better, more affordable environmental protection techniques and increased crop output in contaminated areas. Due to their broad spectrum of biotic and abiotic stress tolerance, Trichoderma species have the potential to be exploited in sustainable agriculture and biofuel crops with the help of modern plant biology methods and techniques. The overuse of chemical fertilizers and pesticides has negatively impacted human health and the environment. Consequently, research on organisms as biocontrol agents has emerged as a promising approach to finding sustainable and eco-friendly alternatives. Trichoderma , due to its multiple biocontrol traits, is one of the most extensively studied beneficial microbes for managing various plant pathogens. These are free-living soil fungi that colonize decomposing organic matter and form beneficial endophytic associations with plants. They inhibit phytopathogenic fungi while stimulating plant defenses, promoting root development, and enhancing plant growth under biotic and abiotic stresses. It is effective not only against fungi and oomycetes but also against insects, pests, and nematodes. This is achieved through enhanced plant defenses or by directly inhibiting pathogen growth via competition, antibiosis, or parasitism. However, a deeper understanding of these underlying processes can significantly improve the effectiveness of Trichoderma in managing plant-pathogen interactions. This would make Trichoderma a highly effective biocontrol agent, biofungicides, and biofertilizers thereby reducing dependence on synthetic chemicals and promoting sustainable agriculture. Although numerous formulations containing different Trichoderma species are available for sustainable crop production, their high cost often limits accessibility for small-scale farmers. Species like T. atroviride and T. harzianum are notable mycoparasites, while newly discovered strains hold promise as cost-effective alternatives for farmers. Additionally, the production of hydrolytic enzymes and N-acetylglucosamine (GlcNAc) from Trichoderma spp, which influences signaling and virulence properties in bacteria, highlights its diverse biocontrol functions. Despite their potential, broader accessibility and cost reduction are needed to maximize their application in sustainable crop production. |
Asclepius and Yellow Ribbon techniques: Efficacious alternative strategies for advancing a coronary sinus electrophysiology catheter | c256cc06-459a-4052-a1ec-d04a41dc7674 | 7358818 | Physiology[mh] | INTRODUCTION The detection and recording of electrical signals is essential to the assessment of cardiac conduction for diagnosing cardiac dysfunction and planning treatment. Multi‐electrode catheters are routinely used in clinical electrophysiological (EP) assessments. To assess the coronary sinus (CS) (the vein situated between the left atrium and left ventricle), a decapolar catheter is advanced through a central venous access. In our hospital, the catheter is typically inserted via the femoral vein rather than the internal jugular or subclavian vein to avoid complications such as pneumothorax or neck hematoma with airway compromise (Eisen et al., ; Parienti et al., ). Femoral access is also more acceptable to patients who are nervous about the neck insertion approach. Occasionally, we encounter challenging variations in CS anatomy related to the vessel size, orifice direction, or lumen curvature (Mak, Hill, Moisiuc, & Krishnan, ; Mlynarski, Mlynarska, Tendera, & Sosnowski, ). These variants make advancement of the EP catheter tip more difficult owing to the fixed curve of the decapolar catheter. Difficulties in catheter advancement lead to prolonged operation time and greater radiation exposure. An alternative method involves the use of a steerable decapolar EP catheter. This device has ergonomic handling and a rotary dial designed for fine tip movements that allow for catheter advancement through challenging anatomy (Er, Yuksel, Hellmich, & Gassanov, ; Manolis, Koulouris, & Tsiachris, ). However, the steerable EP catheter is expensive, costing 17,189 New Taiwan Dollars (557 USD), 40% more than the fixed‐curve decapolar EP catheter. As an alternative, we have developed two innovative, safe, and highly successful procedures, the “Asclepius technique” and “Yellow ribbon technique,” for advancing the standard EP catheter into the CS.
METHODS Subjects This study was performed in a tertiary care center. We reviewed a total of 226 cases over 4 years, from August 1, 2015 to July 31, 2019, involving catheter radiofrequency ablation in patients with paroxysmal supraventricular tachycardia (PSVT) or Wolff–Parkinson–White (WPW) syndrome. Difficulty was encountered with the CS approach in 38 of these cases (16.8%), prompting application of the Asclepius or Yellow Ribbon technique (Figure ). We compared the characteristics and outcomes between patients undergoing conventional and alternative insertion procedures (Table ).
Asclepius technique Our newly developed Asclepius technique begins with the preparation of a fixed‐curve decapolar EP catheter (French gauge 6) (Response, Abbot Laboratories, Chicago, IL, USA) for the CS, a fixed‐curve quadrapolar EP catheter (French gauge 6) (Response, Abbot Laboratories) for the right atrium, and a steerable decapolar EP catheter (French gauge 6) (Livewire, Abbot Laboratories) for the HIS‐right ventricle (HIS‐RV). The fluoroscopy angle is 60‐degree straight left anterior oblique (LAO). First, the HIS‐RV steerable catheter is introduced into the CS. Once it is in position, we place the fixed‐curve decapolar EP catheter with its tip toward the orifice of the CS. This second catheter then winds around the steerable catheter and up into the CS smoothly and quickly, resembling the rod of Asclepius, the medical symbol depicting a snake winding around a staff (Figure ). Finally, we withdraw the steerable catheter from the CS and back into the HIS‐RV (Video ). The entire procedure takes no more than 5 min.
Yellow Ribbon technique The fixed‐curve decapolar EP catheter is placed with its tip near the orifice of the CS. The catheter is then advanced around the right atrial chamber in a circle until the tip, heading toward the end of the circle, spontaneously enters the CS. The shape of the tip movement through the process resembles a yellow ribbon (Figure ). The fluoroscopy angle is also 60‐degree straight LAO. This procedure proceeds quickly and smoothly (Video ). In practice, we first perform the Asclepius technique when CS placement is difficult. If the attempt is prolonged beyond 5 min, we then perform the Yellow Ribbon technique. In rare cases, we simply switch between the two techniques, without an absolute order of priority. In our experience, each technique takes no more than 5 min. The supplementary video shows how each technique works.
Statistical methods Continuous variables are presented as the mean ± standard deviation ( SD ). The chi‐squared test was used for categorical variables, and the independent samples Student's t test was used for continuous variables. All statistical analysis was carried out using SPSS 23 for Windows (IBM Corp.).
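As a rough illustration of the comparisons described above, the following Python sketch applies a chi‐squared test to a categorical variable and an independent‐samples t test to a continuous variable. The group sizes mirror the study (38 alternative vs. 188 conventional), but every number in the example is invented for illustration and is not taken from the study data.

```python
# Minimal sketch of the two-group comparisons described above:
# chi-squared test for a categorical variable and an
# independent-samples t test for a continuous variable.
# All numbers are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: hypertension (yes/no) by technique group.
table = np.array([[15, 23],     # alternative technique: yes, no
                  [40, 148]])   # conventional technique: yes, no
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_cat:.3f}")

# Hypothetical continuous variable (e.g., age) in the two groups.
rng = np.random.default_rng(0)
age_alt = rng.normal(58, 12, size=38)
age_conv = rng.normal(55, 13, size=188)
t, p_cont = stats.ttest_ind(age_alt, age_conv)
print(f"t = {t:.2f}, p = {p_cont:.3f}")
```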
RESULTS Alternative techniques were needed to insert the catheters in 38 of the patients. No significant differences in baseline characteristics were observed between those treated using alternative and conventional techniques, except that more patients with hypertension and paroxysmal atrial fibrillation were included in the alternative technique group. The Asclepius technique was used in 31 of the 38 patients undergoing an alternative technique. Of these patients, 30 (96.7%) were successful. Only one attempt failed, as the steerable catheter would not engage. This failure was followed by successful recannulation with a different catheter using the curve‐shaped guidance of a new central access on the right internal jugular vein. The remaining 7 patients were treated using the Yellow ribbon technique, with 100% success and no complications. The overall success rate for patients undergoing alternative techniques was 97.3% (37/38).
DISCUSSION The present study shows that our newly developed Asclepius and Yellow Ribbon techniques yielded impressively high success rates (Asclepius, 96.7%; Yellow Ribbon, 100%) for difficult CS catheter placement. No complications occurred. No patient required further internal jugular vein or thoracic subclavian venipuncture, so patient suffering and the risk of complications were decreased. For HIS bundle signal recording in the Asclepius technique, the only steerable catheter is originally targeted to the RV‐HIS position, and the nonsteerable CS catheter we used is designed for CS signals and is not suitable for RV‐HIS recording. Therefore, we have to exchange catheters step by step: (a) advance the steerable RV‐HIS catheter into the CS; (b) advance the nonsteerable CS catheter into the CS by the Asclepius technique; and (c) pull the RV‐HIS catheter back from the CS and advance it into the RV‐HIS position. We attribute the success of the Asclepius technique to several factors. First, the steerable catheter provides better handling of the distal tip movement. Second, after the steerable catheter engages into position with the orifice and modified lumen of the CS, the fixed‐curve catheter can pass through. Third, placement of the steerable catheter in situ may lower the resistance of the tract to help the fixed‐curve catheter move more easily and deeper into the vessel. In the Yellow Ribbon technique, the circumferential movement of the catheter tip returns the tip to the starting point near the CS orifice, providing a curve that takes shape naturally and gives strong support. As in the ancient Chinese martial art Tai Chi, taking advantage of the internal leverage of circular motion makes a forward pushing motion easier.
LIMITATIONS Theoretically speaking, in patients with right atrial dilation or enlargement, engagement of the CS is difficult to a certain extent when a nonsteerable CS catheter is used, as in the original method. One limitation of our retrospective study is the lack of a robust formal echocardiography report for each patient who received the procedures. Nevertheless, we have shown that these new techniques provide a more cost‐effective way to overcome some structural heart conditions such as atrial dilation or enlargement. In the rare cases in which such techniques failed, the possible causes of failure include extreme chamber size with a sharply angled CS orifice, an anomalous CS opening, or the operator's technical experience.
CONCLUSIONS The Asclepius and Yellow Ribbon techniques are safe, cost‐effective, and highly successful alternative strategies to facilitate catheter placement for electrophysiological assessments.
The authors would like to thank the Ministry of Science and Technology of the Republic of China, Taiwan for financial support of this research under contract MOST 108‐2218‐E‐006‐019.
Conception and design: J‐YC; data acquisition: T‐WC, Y‐HW; data analysis and interpretation: J‐YC, T‐WC, M‐SH, W‐DL; statistical analysis: J‐YC, T‐WC, M‐SH; drafting and finalizing the article: J‐YC, TWC, M‐SH; critical revision of the article for important intellectual content: J‐YC, M‐SH.
This study was approved by the ethics committee of National Cheng Kung University Hospital and was conducted according to the guidelines of the International Conference on Harmonization for Good Clinical Practice.
|
Proposed diagnostic and prognostic markers of primary malignant hepatic vascular neoplasms | 23c65a66-029b-4afe-a5bf-c6f8c57d9499 | 11089664 | Anatomy[mh] | Hepatic malignant vascular tumors include a wide range of malignancies – hepatic epithelioid hemangioendothelioma (EHE) with low-to-intermediate grade malignancy, and angiosarcoma (AS), which is highly malignant with a negative prognosis . While AS is invariably aggressive with a high rate of local recurrence and metastatic potential, the progress of EHE is heterogeneous, ranging from indolent to aggressive. Given the differences in treatment and prognosis between EHE and AS, it is important to differentiate between these two tumors . Both EHE and AS usually manifest as multiple hepatic lesions with some confusing and complex imaging features . Due to the overlapping imaging features of EHE and AS, a liver biopsy is not infrequently required. However, the pathological findings of EHE and AS may overlap, which makes it hard to suggest a definite diagnosis even with a liver biopsy, especially if only small biopsy specimens are obtained, or there is no apparent endothelial differentiation. Recently, nuclear Calmodulin Binding Transcription Activator 1 (CAMTA1) expression has been suggested as a useful marker for EHE diagnosis . However, previous studies included a limited number of patients. In addition, there are no internationally recognized pathological criteria to assist with the prediction of the course of EHE in terms of various clinical outcomes. This study aimed to evaluate the pathological findings of EHE and AS with different malignant potentials in correlation with patient outcomes. Furthermore, we suggest diagnostic markers to distinguish these tumors accurately and present pathological guidelines for predicting prognoses. Patients and samplesClinical informationPathologic assessmentImmunohistochemical staining and evaluationQuantitative evaluation of Ki-67 labeling indexStatistical analysisThis single-center retrospective study was approved by the Institutional Review Board of Asan Medical Center, Seoul, Korea (approval No. 2021 − 0766). An electronic data search in our pathologic database identified 59 cases of histologically diagnosed hepatic vascular tumors including EHE and AS between January 2003 and December 2020. The inclusion criteria were as follows: (a) adult patient (≥ 18 years old); (b) who underwent contrast-enhanced Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) within three months of pathologic confirmation; (c) who underwent biopsy or resection for pathologic confirmation; (d) who had adequate paraffin blocks available for review; and (e) who had been clinically followed at least three months after the pathologic confirmation. We excluded patients who had inadequate CT or MRI image quality for imaging review, and those without adequate pathologic slides for review (Fig. ). Finally, 59 patients (34 EHE, and 25 AS) were included in this study. Clinical information, including age at biopsy or surgical resection; sex; most recent lab data after pathologic confirmation; presence of the hepatitis virus; serum alanine aminotransferase (ALT); aspartate aminotransferase (AST); alkaline phosphatase (ALP); gamma-glutamyltransferase (GGT); total bilirubin, alpha-fetoprotein (AFP) level; albumin; platelet; prothrombin time (PT); protein induced by vitamin K absence or antagonists II (PIVKA II); and date of disease recurrence and death or last follow-up, were obtained from electronic medical records. 
All available hematoxylin and eosin (H&E)-stained slides were reviewed without knowledge of the clinical information and pathological characteristics. One representative paraffin block to be used for immunohistochemistry was selected for each case after a close pathologic review by two pathologists (H.J.K. and J.H.S). Additional histopathological features were evaluated as follows; tumor border, tumor size (cm), tumor number, hemorrhage,prominent nucleoli, cytoplasmic vacuoles, and the number of mitotic figures per 10 high-power fields. The area of 10 HPFs reaches 2mm 2 . Immunohistochemistry (IHC) was performed using a Benchmark XT (Ventana Medical Systems, Tucson, AZ) autoimmunostainer with an Optview DAB IHC detection kit (Ventana Medical Systems) according to the manufacturer’s instructions. Briefly, 4-µm-thick sections of representative formalin-fixed paraffin-embedded tissue blocks were deparaffinized and rehydrated by immersion in xylene and a graded ethanol series. Endogenous peroxidase was blocked by incubation in 3% H 2 O 2 for 10 min, followed by heat-induced antigen retrieval. The sections were incubated at room temperature for 32 min in primary antibodies for CD 31 (1:500, Mouse monoclonal, clone JC70, CELL MARQUE, Rocklin, CA, USA), ERG (1:400, Rabbit monoclonal, clone EP111, NEOMARKERS, Rocklin, CA, USA), CAMTA1 (1:200, Rabbit polyclonal, NOVUS, CO, USA), TFE3 (1:100, Rabbit monoclonal, clone MRQ-37, CELL MARQUE, Rocklin, CA, USA), KI-67 (1:200, Mouse monoclonal, clone MIB1, DAKO, Glostrup, Denmark), and P53 (1:1000, Mouse monoclonal, clone DO-7, DAKO, Glostrup, Denmark). Immunostained sections were lightly counterstained with hematoxylin, dehydrated in ethanol, and cleared in xylene. Immunolabeled slides were independently evaluated by two experienced pathologists (H.J.K. and J.H.S). The expression of CD31, ERG, and CAMTA1 was evaluated as “negative” and “positive” regardless of intensity and proportion. The expression of TFE3 and P53 was scored according to the proportion and classified under the following criteria; score 0: no expression, or < 10% staining, score 1: 10% to 1/3rd part staining, score 2: 1/3 to 2/3rd part staining, score 3: >2/3rd part staining. A TFE3 and P53-nuclear staining score of 3 was considered positive. Immunostaining for p53 has been used as a surrogate marker for the presence of a TP53 mutation. An aberrant p53 expression, associated with a TP53 mutation was defined as the form of strong diffuse nuclear positivity or null-type pattern. For quantitative analysis of Ki-67 labeling positivity in neoplastic cells, digital slide images were generated with a Pannoramic 250 Flash III (3DHistech, Hungary), and ERG and Ki-67 IHC staining slides were analyzed with an open-source bioimage analysis software platform QuPath v0.3.0 . After accommodating the 3,3’-diaminobenzidine staining vector with the “Estimating stain vectors” command, we counted Ki-67 positive cells with the “Positive cell detection” command, and Ki-67 positivity was calculated and used as the Ki-67 labeling index of the tumors. The cut-off point for a high Ki-67 was set to 10%. Clinical and pathological features of EHE and AS were compared according to the final pathologic diagnoses by means of the t -test or the Wilcoxon rank sum test for continuous variables and Fisher’s exact test for categorical variables. For imaging analysis, per patient analysis and per lesion analysis of imaging findings between EHE and AS were compared using consensually agreed imaging findings. 
Overall survival (OS) was defined as the period from the initial pathologic diagnosis to death from any cause or last follow-up. OS times were calculated using the Kaplan–Meier method, and statistical significance was evaluated by the log-rank test. EHE were subdivided into low grade-EHE (LG EHE) and high grade-EHE (HG EHE) EHE according to different overall survival rates. Statistical analyses were performed using the SPSS statistical software program (version 18.0 SPSS Inc. Chicago, IL, USA), and R (version 4.0.0). Patient characteristicsComparison of histologic features between EHE and ASIHC panel resultsPrognostic factors in EHE and survival analysis (LG EHE, vs. HG EHE, vs. AS)The clinical, imaging, and pathological characteristics of 59 patients are shown in Table . Patients with EHE (mean 49.6) were significantly younger than those with AS (mean 61.7). Females predominated the EHE group compared to the AS group (61.8% vs. 20%, p = 0.004). In the background of the liver, cirrhosis was observed exclusively in the AS group (70.8%, p = < 0.001). The AS group showed significantly lower levels of albumin and platelets than the EHE group (both p = < 0.001). There were no differences between the two groups in the other serum liver function tests, tumor markers, and hepatitis virus status. In outcome data, deaths were observed more frequently in AS than in EHE (100% vs. 23.5%, p = < 0.001). All of the patients with AS died from the disease. The follow-up period (months) was shorter in AS than in EHE (10.2 ± 11.7 vs. 65.6 ± 52.3, p = < 0.001). Histological characteristics between EHE and AS are summarized in Table ; Fig. . The p-value was determined by multiple comparisons. Compared with AS, EHE was characterized by myxohyaline stroma (97.1%, p = < 0.001), buds, hobnails, or papillary-like projections (82.4%, p = < 0.001), and intracytoplasmic vacuoles (97.1%, p = < 0.001). Mitotic counts in EHE (1.26 ± 1.4/10HPFs) were significantly lower than in AS. In contrary, AS revealed general histopathological features including infiltrative tumor borders (100%, p = < 0.001), hemorrhage (40%, p = 0.004), solid/sheet growth (44%, p = < 0.001), sinusoidal infiltration (100%, p = 0.007), hypercellularity (44%, p = < 0.001), high grade nuclear atypia (96%, p = < 0.001), and high mitotic counts (mean ± SD/10HPFs, 9.66 ± 7.83/10HPFs). There were no significant differences between the two groups in tumor size (cm) ( p = 0.052), tumor number ( p = 0.067), necrosis ( p = 0.160) and prominent nucleoli ( p = 0.122). Immunohistochemically, ERG or CD31 expression was observed in all 59 cases (Table ). The p-value was determined by multiple comparisons. CAMTA-1 nuclear positivity was observed in 31 of the 34 cases of EHE (91.2%). However, none of AS showed CAMTA-1 immunopositivity ( p = < 0.001). Of the three cases that were negative for CAMTA-1, two cases were strongly positive for TFE3. TFE3 positivity was found in nine cases of EHE and in one case of AS (26.5% vs. 4.3%, p = 0.038). Ki-67 labeling index counted by QuPath was significantly higher in AS (mean 42.0%, range 12.6–69.5%) than in EHE (mean 6.0%, range 0.1–15.7%, p = < 0.001). Immunohistochemistry was not available for two cases of AS, but high Ki-67 (≥ 10%) was observed in all remaining AS. Additionally, immunostaining for p53 was performed as a surrogate marker for the presence of a TP53 mutation. Aberrant p53 expression was more frequently identified in AS than in EHE (87% vs. 3%, p = < 0.001) (Fig. ). 
To predict the prognosis of EHE, we analyzed histological and immunohistochemical findings associated with overall survival (OS) using the Cox proportional hazards model in the EHE group (Table ). In this univariate survival analysis, mitotic activity (cut-off: 2/10HPFs, p = 0.035) and the Ki-67 index (cut-off: 10%, p = 0.021) were significantly associated with OS. Other histological findings, including necrosis, solid/sheet growth, myxohyaline stroma, sinusoidal infiltration, hypercellularity, buds, hobnails or papillary-like projections, high-grade nuclear atypia, and intracytoplasmic vacuoles were not associated with survival. Immunohistochemical findings, including TFE3 positivity, and aberrant p53 expression were also not correlated with OS. Using these prognostic factors including mitotic activity and Ki-67 index, EHEs can be classified as LG EHE and HG EHE. HG EHE was defined when mitotic activity was more than 2/10HPFs or the Ki-67 index was more than 10%. As shown in Table , the sensitivity and specificity of mitotic grading in EHEs were 0.72 and 1.00, respectively. Those of the Ki-67 index were 0.44 and 1.00, respectively. These two diagnostic criteria can be used in the differential diagnosis between EHE and AS. The mitotic count and Ki-67 index showed high sensitivity (0.96 and 1.0, each) and specificity (0.62 and 0.76, each) in diagnosing AS. Aberrant p53 expressions also manifest with high sensitivity and specificity (0.87 and 0.97, respectively). Using these prognostic values including mitotic activity, the Ki-67 index, and aberrant p53 expression, we can classify three groups as following: LG EHE, HG EHE, and AS. In survival analysis by means of the Kaplan–Meier method, EHE and AS showed significant differences in overall survival (Fig. ). A total of 33 patients died. Of these 33 patients, one was LG EHE, and seven were HG EHE. Most of them (25/33) were AS. Patients with EHE lived longer (median 169.4 months) than those with AS (median 10.2 months, p = < 0.001). The difference in survival between the two groups was significant ( p = < 0.001). Also, the three groups, comprising the LG EHE, HG EHE and AS, showed significant differences in survival (LG EHE vs. HG EHE, p = .020, LG EHE vs. AS, p = 0.001). Patients with LG EHE lived longer than those with HG EHE (median 206.6 vs. 101.7 months, p = 0.019) and AS (median 206.6 vs. 10.2 months, p = < 0.001). Malignant hepatic vascular neoplasms, including angiosarcoma and epithelioid hemangioendothelioma, are extremely rare. Although hepatic AS is well known as a rare but highly aggressive neoplasm characterized by high recurrence rates and tumor-related death, hepatic EHE can be considered clinically unpredictable because it frequently exhibits indolent behavior but sometimes develops into advanced neoplasms . The study of the pathology and radiology of primary malignant hepatic vascular neoplasms requires precise diagnosis and also the improvement of prognostic evaluation. Compared with AS, hepatic EHE revealed a relatively well-defined border, myxohyaline stroma, buds, hobnails, papillary-like projections and cytoplasmic vacuoles. They exhibited little hemorrhaging, solid/sheet growth, and hypercellularity. In addition, the mitotic count of EHE was significantly lower than that of AS. Although these pathological features showed a statistically significant difference, it was difficult, in some cases, to accurately differentiate the two diseases based on the H&E findings alone because of overlapping features between EHE and AS. 
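The grading rule and the three-group survival comparison reported above can be summarized in a short sketch using the lifelines package; the cut-offs (mitoses > 2/10 HPFs or Ki-67 > 10% for HG EHE) follow the text, while the example records are fabricated solely to show the workflow.

```python
# Sketch: three-group survival comparison (LG EHE vs. HG EHE vs. AS)
# using the cut-offs stated above. The records are fabricated.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

def grade(row):
    if row["diagnosis"] == "AS":
        return "AS"
    # HG EHE: mitotic count > 2 per 10 HPFs or Ki-67 index > 10%
    if row["mitoses_per_10hpf"] > 2 or row["ki67_percent"] > 10:
        return "HG EHE"
    return "LG EHE"

df = pd.DataFrame({
    "diagnosis":         ["EHE", "EHE", "EHE", "AS", "AS"],
    "mitoses_per_10hpf": [1,     4,     0,     12,   8],
    "ki67_percent":      [3.0,   12.5,  1.2,   45.0, 60.0],
    "os_months":         [150,   40,    200,   8,    14],
    "death":             [0,     1,     0,     1,    1],
})
df["group"] = df.apply(grade, axis=1)

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["os_months"], event_observed=sub["death"], label=name)
    print(name, "median OS (months):", kmf.median_survival_time_)

result = multivariate_logrank_test(df["os_months"], df["group"], df["death"])
print("log-rank p =", result.p_value)
```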
This is especially difficult when the morphology is unusual or when biopsy material is limited. However, immunohistochemical staining was helpful in these cases. All hepatic EHE and AS cases included in this study were positive for the endothelial markers CD31 or ERG. CAMTA-1 nuclear positivity was observed in 91% of EHE cases, and none of the AS cases was positive. A recent study by Doyle et al. evaluated CAMTA-1 expression in a large series of EHE and other soft tissue neoplasms; nuclear expression of CAMTA-1 was a highly sensitive and specific marker for EHE, observed in 86% of the total cases. This is explained by the identification of recurrent translocations involving chromosomal regions 1p36.3 and 3q25 in EHE, resulting in the formation of a fusion between WWTR1 (WW domain-containing transcription regulator) and CAMTA1 (calmodulin-binding transcription activator 1) . In published studies, the detection frequency of this fusion gene has been reported to vary (range: 77–100%) , and its overall prevalence in EHE is approximately 90%. This result was similar to that of our study. More recently, a small subset of EHE was found to harbor an alternative YAP1-TFE3 gene fusion . In EHE with this fusion, immunohistochemistry showed that nuclear TFE3 was uniformly expressed, whereas CAMTA1 was negative in most cases . Although the number of cases is small, our results showed that most CAMTA1-negative cases (2 of 3, 67%) displayed strong TFE3 positivity. However, it is not recommended that TFE3 immunostaining be performed alone to confirm a TFE3 rearrangement, as TFE3 expression has been shown to be non-specific, having also been observed in WWTR1-CAMTA1 EHE . In our study, immunostaining was performed without genetic testing; therefore, we can only estimate the type and frequency of EHE according to histological characteristics and immunohistochemical results. In addition, significant differences were observed between the EHE and AS groups not only in CAMTA1 and TFE3 but also in the Ki-67 proliferation index and the p53 expression pattern. A Ki-67 proliferation index of more than 10% was observed in the AS cases, and, except for three cases, p53 immunostaining exhibited an aberrant pattern. Accordingly, EHE and AS could be more accurately differentiated using the H&E findings together with the immunohistochemical results for CAMTA1, the p53 expression pattern, and the Ki-67 proliferation index. In the previous literature, it has been noted that EHE has a variable clinical course . However, there are still no internationally recognized pathological criteria for predicting the behavior of EHE across these various clinical courses. Therefore, among the imaging and pathological factors, statistical analysis was performed on the factors affecting overall survival. Mitotic activity and the Ki-67 proliferation index showed significant results, and accordingly, EHE could be classified into LG and HG groups. When survival analysis was performed by dividing participants into three categories (LG EHE, HG EHE, and AS), the survival curves of the three groups were clearly separated. Therefore, the Ki-67 proliferation index and mitotic activity can be suggested as tools to predict the behavior of EHE. To the best of our knowledge, this is the first study to present criteria for predicting the behavior of EHE. Based on the results described above, we propose the following diagnostic algorithm for primary hepatic vascular neoplasms (Fig. ). ERG or CD31 expression confirms the vascular nature of tumor cells.
CAMTA1 positivity is highly specific for the diagnosis of EHE, as none of the other tumors reacted with this antibody. Among the CAMTA1-negative cases, if aberrant p53 expression is identified (sensitivity 87%, specificity 97%) or the mitotic activity and Ki-67 index are high (sensitivity 88%, specificity 91%), AS can be diagnosed. When these criteria for AS are not met, EHE may be diagnosed. EHE may then be further divided into high grade and low grade according to the mitotic activity and Ki-67 proliferation index, which reflects its expected behavior. Our study should be interpreted within its limitations. First, the study population was relatively small, so further studies with larger cohorts are needed. Second, this study was conducted retrospectively; we tried to maintain as much objectivity as possible, but biases that we did not consider may be involved. Finally, we have previously mentioned the types of EHE based on gene rearrangement; however, in this study, we inferred the type only from the results of CAMTA1 and TFE3 immunostaining. Hence, we were unable to evaluate the accuracy of the information on gene rearrangement because a genetic study was not conducted. If a genetic study is included in a future study, it is expected that the understanding of EHE will be broadened. In conclusion, immunohistochemistry for CAMTA-1, p53, and Ki-67 labeling may help distinguish EHE from AS in histologically ambiguous cases, especially in small biopsy specimens. Moreover, the combination of mitotic activity and Ki-67 labeling can serve as a prognostic factor for EHE with its variable clinical behavior.
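As a compact restatement of the diagnostic algorithm proposed above, the decision rule can be sketched as follows. The function and field names are hypothetical and the thresholds simply mirror the cut-offs reported in the text (mitoses > 2/10 HPFs, Ki-67 > 10%); this sketch is not part of the original study.

```python
# Illustrative decision-rule sketch of the proposed diagnostic algorithm for
# primary hepatic vascular neoplasms. Function and field names are hypothetical.
def classify_hepatic_vascular_neoplasm(case: dict) -> str:
    # Step 1: ERG or CD31 expression confirms a vascular neoplasm.
    if not (case["erg_positive"] or case["cd31_positive"]):
        return "not a vascular neoplasm"

    high_proliferation = case["mitoses_per_10hpf"] > 2 or case["ki67_percent"] > 10

    # Step 2: CAMTA1 nuclear positivity is highly specific for EHE.
    if case["camta1_positive"]:
        return "HG EHE" if high_proliferation else "LG EHE"

    # Step 3: among CAMTA1-negative cases, aberrant p53 or the combination of
    # high mitotic activity and high Ki-67 favors angiosarcoma (AS).
    if case["aberrant_p53"] or (case["mitoses_per_10hpf"] > 2 and case["ki67_percent"] > 10):
        return "AS"

    # Otherwise EHE, graded by mitotic activity and Ki-67.
    return "HG EHE" if high_proliferation else "LG EHE"
```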
Monepantel is a non-competitive antagonist of nicotinic acetylcholine receptors from

Introduction
Soil-transmitted helminth (STH) infections in humans and animals cause significant disease (morbidity and mortality) globally. At least 1.2 billion people suffer from STH infections, with an estimated at-risk population of 4.5 billion ( , , ). Control of these infections is heavily reliant on the use of anthelmintics, as there are no effective vaccines and sanitation is often sub-optimal ( ). There are a limited number of drugs used to treat helminth infections ( ). Antinematodal (anthelmintic) drugs can be classified on the basis of similarity in chemical structure: benzimidazoles, imidazothiazoles, tetrahydropyrimidines, macrocyclic lactones, amino-acetonitrile derivatives, spiroindoles and cyclooctadepsipeptides. The benzimidazoles, imidazothiazoles, tetrahydropyrimidines and macrocyclic lactones are older anthelmintic drug classes. Increasing reports of resistance to the 'older' anthelmintic drugs have encouraged the development of newer drug classes: amino-acetonitrile derivatives, spiroindoles and cyclooctadepsipeptides. Zolvix ® (Novartis Animal Health, Greensboro, NC, USA) is a recently developed anthelmintic for control of gastro-intestinal (GI) nematodes in sheep. It was first introduced in New Zealand in 2009, and contains 25 mg/ml monepantel (mptl) as the active ingredient ( ). Monepantel is the first member of the amino-acetonitrile derivative (AAD) group of anthelmintics. It has a wide range of activity against nematodes in sheep, including those resistant to benzimidazoles, imidazothiazoles and macrocyclic lactones ( , , ). Disappointingly, resistance has developed in infected goats and sheep following treatment with monepantel. The first field report of monepantel resistance was observed in Teladorsagia circumcincta and Trichostrongylus colubriformis in goats and sheep on a farm in the lower North Island of New Zealand <2 years after its initial use on that farm ( ). Subsequent cases of resistance to monepantel have been reported for H. contortus in sheep ( , ). The emergence of field resistance to monepantel within very short periods following treatment underscores the pressing need to understand the full mode of action of the drug and thus possible mechanisms of resistance. Monepantel has a site of action reported to involve ACR-23, a member of the DEG-3/DES-2 group of nicotinic acetylcholine receptors (nAChRs) ( ). AADs have been shown to cause hypercontraction of body wall muscles in Caenorhabditis elegans and Haemonchus contortus, leading to spastic paralysis and subsequent death of the worms. It was further shown that C. elegans treated with AADs display molting deficits and characteristics of necrosis ( ). Subsequent mutagenesis screens in H. contortus led to the identification of the nAChR subunit gene, mptl-1 (acr-23), as a target for AADs ( ). Comparative genomics of all ligand-gated ion channel (LGIC) genes from different clades of nematodes reported that nematode species lacking the ACR-23/MPTL-1 nAChR subunits were insensitive to monepantel. In contrast, nematode species possessing ACR-23/MPTL-1 were susceptible to monepantel, thus promoting the ACR-23/MPTL-1 nAChR as a principal target for the AADs ( ). To further elucidate the mode of action of monepantel, it was shown that monepantel on its own did not activate heterologously expressed H.
contortus DEG-3/DES-2 receptors but acted as a type 2 positive allosteric modulator when co-applied with choline ( ). In heterologously expressed C. elegans ACR-23 receptors, monepantel caused potentiation following activation by betaine ( ). Monepantel also acted as a positive allosteric modulator of H. contortus MPTL-1 and C. elegans ACR-20 receptors at low concentrations (<1 nM) but as a direct agonist at high concentrations (>0.1 μM) ( ). Interestingly, a homolog of C. elegans acr-23 is present in the A. suum genome ( ). However, A. suum is not susceptible to monepantel treatment in vivo ( ). Our research seeks to advance knowledge on the mode of action of monepantel on nAChRs from Clade III ( A. suum ) and Clade V ( Oesophagostomum dentatum ) nematodes. This present study therefore is an investigation of the effects of monepantel on nAChRs that are widely and ubiquitously expressed in A. suum ( Asu- ACR-16), and those involved in neurotransmission (pyrantel/tribendimidine sensitive and levamisole sensitive nAChRs) in O. dentatum . We find that monepantel also acts selectively as an antagonist on the nematode nAChRs we studied.
Materials and methods

Xenopus oocyte expression
All nAChR subunit and ancillary factor cRNAs from A. suum ( Asu-acr-16 and Asu-ric-3 ), O. dentatum ( Ode-unc-29 , Ode-unc-38 , Ode-unc-63 and Ode-acr-8 ) and H. contortus ( Hco-ric-3 , Hco-unc-50 and Hco-unc-74 ) were prepared as previously described ( , ). Briefly, defolliculated X. laevis oocytes were obtained from Ecocyte Bioscience (Austin, TX, USA). A Nanoject II microinjector (Drummond Scientific, Broomall, PA, USA) was used to inject cRNA into the cytoplasm at the animal pole region of the oocytes. Homomeric nAChRs comprising Asu-ACR-16 subunits were expressed by co-injecting 25 ng of Asu-acr-16 with 5 ng of Asu-ric-3 in a total volume of 50 nl in nuclease-free water. Heteromeric nAChRs from O. dentatum were expressed by co-injecting 1.8 ng of each subunit cRNA that makes up the levamisole ( Ode-unc-29 , Ode-unc-38 , Ode-unc-63 and Ode-acr-8 ) or pyrantel/tribendimidine ( Ode-unc-29 , Ode-unc-38 and Ode-unc-63 ) receptor with 1.8 ng of each H. contortus ancillary factor ( Hco-ric-3 , Hco-unc-50 and Hco-unc-74 ) in a total volume of 36 nl in nuclease-free water. Injected oocytes were transferred to 96-well microtiter plates containing 200 μl incubation solution (100 mM NaCl, 2 mM KCl, 1.8 mM CaCl 2 ·2H 2 O, 1 mM MgCl 2 ·6H 2 O, 5 mM HEPES, 2.5 mM Na pyruvate, 100 U·ml −1 penicillin and 100 μg ml −1 streptomycin, pH 7.5) and incubated at 19 °C for 5–7 days to allow for functional receptor expression. The incubation solution was changed daily.
Two-electrode voltage-clamp electrophysiology Currents produced by activation of expressed Asu- ACR-16 and Ode levamisole sensitive and Ode pyrantel/tribendimidine sensitive receptors were recorded using the two-electrode voltage-clamp electrophysiology technique as previously described ( , ). 100 μM BAPTA-AM was added to the oocyte incubation solution about 4 h prior to recording to prevent activation of endogenous calcium-activated chloride currents during recording. Recordings were made using an Axoclamp 2B amplifier (Molecular Devices, Sunnyvale, CA, USA) and data acquired on a computer with Clampex 10.3 (Molecular Devices, Sunnyvale, CA, USA). For all experiments, oocytes were voltage-clamped at −60 mV. Microelectrodes for impaling oocytes were pulled using a Flaming/Brown horizontal electrode puller (Model P-97; Sutter Instruments, Novato, CA, USA). The microelectrodes were filled with 3 M KCl and their tips carefully broken with a piece of tissue paper to attain a resistance of 2–5 MΩ in recording solution (100 mM NaCl, 2.5 mM KCl, 1 mM CaCl 2 ·2H 2 O and 5 mM HEPES, pH 7.3).
Muscle contraction assay Adult A. suum were collected from the IMES slaughterhouse, Belgrade, Serbia and maintained in Locke's solution (155 mM NaCl, 5 mM KCl, 2 mM CaCl 2 , 1.5 mM NaHCO 3 and 5 mM glucose) at 32 °C for 5 days. Locke's solution was changed daily. Ascaris muscle flaps for contraction studies were prepared as described in . Briefly, 1 cm muscle body flaps were prepared by dissecting the anterior 2–3 cm part of the worm. A force transducer in an experimental bath containing 20 ml APF (23 mM NaCl, 110 mM Na acetate, 24 mM KCl, 6 mM CaCl 2 , 5 mM MgCl 2 , 11 mM glucose, 5 mM HEPES, pH 7.6) and 0.1% DMSO and bubbled with nitrogen was attached to each muscle flap. The bath was maintained at 37 °C during which isometric contractions of each flap were monitored on a PC computer using a BioSmart interface and eLAB software (EIUnit, Belgrade). The system allows real time recording, display and analysis of experimental data. The preparation was allowed to equilibrate for 15 min under an initial tension of 0.5 g after which acetylcholine (1–100 μM) in the absence and presence of monepantel (1–30 μM) was applied to the preparation.
Drugs
Acetylcholine was purchased from Sigma-Aldrich (St Louis, MO, USA). Zolvix (monepantel 25 mg/ml) was a gift from Dr Michael Kimber (Iowa State University, Ames, IA). Acetylcholine was dissolved in either APF or oocyte recording solution. Monepantel was dissolved in DMSO such that the final DMSO concentration did not exceed 0.1%. All other chemicals were purchased from Sigma-Aldrich (St Louis, MO, USA) or Fisher Scientific (Hampton, NH, USA).
Data analysis

Muscle contraction
Isometric contractions of each Ascaris muscle flap in the absence and presence of monepantel were monitored on a PC computer using eLAB software (EIUnit, Belgrade). GraphPad Prism 5.0 (GraphPad Software Inc., La Jolla, CA, USA) was used to analyze the data. Responses to each acetylcholine concentration in the absence and presence of monepantel were expressed as mean ± s.e.m. Single concentration-response relationships were fitted to the data as described in Section .
Electrophysiology Electrophysiology data was measured with Clampfit 10.3 (Molecular devices, Sunnyvale CA, USA) and analyzed with GraphPad Prism 5.0 (GraphPad Software Inc., La Jolla, CA, USA). All concentration-response experiments began with a 10 s application of 100 μM acetylcholine. Increasing concentrations of acetylcholine (0.1–100 μM) were applied for 10 s at ∼2 min intervals. For experiments using monepantel, an initial 10 s application of 100 μM acetylcholine was followed after 2 min by continuous perfusion of a single concentration of monepantel (0.3 nM - 30 μM); for the rest of the experiment 10 s applications of acetylcholine (0.3–100 μM) were used at ∼2 min intervals in the presence of monepantel. Responses to each acetylcholine concentration were normalized to the initial control 100 μM acetylcholine responses and expressed as mean ± s.e.m. Concentration-response relationships (for each oocyte) were analyzed by fitting log concentration-response data points with the Hill equation as previously described ( ). % inhibition for monepantel was calculated using 100- R max for each experiment and plotted as mean ± s.e.m. on the concentration-inhibition plots. IC 50 's were calculated as previously described ( ). We used one-way ANOVA to test for statistical differences among treatment groups. If the group means were statistically different ( p < .05), we used the Tukey multiple comparison test to determine significant differences between groups.
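The concentration-response fitting described above can be illustrated with a short script that takes responses already normalized to the 100 μM acetylcholine control and fits the Hill equation. This is a hedged sketch rather than the Clampfit/GraphPad workflow actually used; the function and variable names, and the example data, are invented for illustration.

```python
# Hedged sketch of the concentration-response analysis described above:
# normalized responses are fitted with the Hill equation to estimate
# Rmax, EC50 and the Hill coefficient (nH).
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, rmax, ec50, nh):
    """Response at agonist concentration `conc` (same units as rmax)."""
    return rmax / (1.0 + (ec50 / conc) ** nh)

def fit_concentration_response(conc_um, response_percent_control):
    conc = np.asarray(conc_um, dtype=float)
    resp = np.asarray(response_percent_control, dtype=float)
    p0 = [resp.max(), np.median(conc), 1.0]          # starting guesses
    popt, pcov = curve_fit(hill, conc, resp, p0=p0, maxfev=10000)
    rmax, ec50, nh = popt
    return {"Rmax_%": rmax, "EC50_uM": ec50, "nH": nh}

# Example with made-up data points (acetylcholine in uM, % of 100 uM control):
print(fit_concentration_response([0.3, 1, 3, 10, 30, 100],
                                 [5, 15, 40, 75, 95, 100]))
```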
Results

Effects of monepantel on Asu-ACR-16
Asu-ACR-16 is a homomeric nAChR composed of Asu-ACR-16 subunits ( ). This receptor subtype requires only the ancillary factor Asu-RIC-3 for functional expression in Xenopus oocytes. ACR-16 is nicotine sensitive, but levamisole insensitive, and is commonly referred to as the nicotine subtype nAChR ( , , , ). We tested the effects of different monepantel concentrations (0.3 nM - 1 μM) on Asu-ACR-16 responses to acetylcholine. For control experiments, each acetylcholine concentration from 0.3 to 100 μM was applied for 10 s, A. To test the effects of monepantel, 100 μM acetylcholine was first applied for 10 s as control, followed by a 2 min wash, then perfusion with monepantel, after which acetylcholine applications were repeated in the presence of monepantel, B. For both control and test experiments, a 2 min wash period was applied between drug applications. Monepantel applied on its own did not cause activation of Asu-ACR-16, eliminating agonist actions on this receptor subtype. When co-applied with acetylcholine, monepantel caused a concentration-dependent inhibition of Asu-ACR-16 responses to acetylcholine. C shows concentration-response plots for these experiments. Monepantel did not change the EC 50 but did significantly reduce R max , C and D, implying non-competitive antagonism. EC 50 and R max values for these observations are shown in .
Effects of monepantel on expressed Ode pyrantel/tribendimidine receptors
Ode pyrantel/tribendimidine receptors have been described to comprise the nAChR subunits Ode-UNC-29, Ode-UNC-38 and Ode-UNC-63 . Ode pyrantel/tribendimidine receptors require all 3 ancillary factors Hco-RIC-3, Hco-UNC-50 and Hco-UNC-74 from H. contortus for functional expression in oocytes. To investigate the effects of monepantel on expressed Ode pyrantel/tribendimidine receptors, we used the same experimental protocol described for Asu-ACR-16 receptors in Section . C shows concentration-response plots for these experiments. Monepantel alone did not activate Ode pyrantel/tribendimidine receptors, ruling out an agonist action. When co-applied with acetylcholine, features of non-competitive antagonism (no change in EC 50 , reduction in R max ) were seen for Ode pyrantel/tribendimidine receptors, C and D. shows EC 50 and R max values for these experiments.
Effects of monepantel on expressed Ode levamisole receptors
Previous studies showed that the nAChR subunits Ode-UNC-29, Ode-UNC-38, Ode-UNC-63 and Ode-ACR-8 are required to express the Ode levamisole sensitive receptors in Xenopus oocytes ( ). This was in accordance with previous work in which functional levamisole receptors were expressed in Xenopus oocytes injected with Hco-UNC-29, Hco-UNC-38, Hco-UNC-63 and Hco-ACR-8 from H. contortus . The levamisole receptors required all 3 ancillary factors Hco-RIC-3, Hco-UNC-50 and Hco-UNC-74 from H. contortus for their functional expression in oocytes ( , , ). To investigate the effects of monepantel on expressed Ode levamisole receptors, the same experimental protocol was used. The monepantel concentrations tested on levamisole receptors were from 1 to 30 μM. Sample traces and concentration-response plots for these experiments are shown in A, B, & C. Again, monepantel applied alone failed to activate the Ode levamisole receptors, demonstrating no agonist action. As expected for a non-competitive antagonist, monepantel did not produce any significant change in EC 50 but did cause a significant reduction in R max as shown in C and D. EC 50 and R max values for these experiments are reported in .
Effects of monepantel on Ascaris body muscle flaps
The results we obtained for expressed nicotine (Asu-ACR-16), Ode pyrantel/tribendimidine (Ode-UNC-29:Ode-UNC-38:Ode-UNC-63) and Ode levamisole (Ode-UNC-29:Ode-UNC-38:Ode-UNC-63:Ode-ACR-8) subtype nAChRs, which we describe in Sections , , , encouraged us to investigate the in vivo effects of monepantel on Ascaris muscle. A shows representative traces of the effects of adding increasing concentrations of acetylcholine in the absence and presence of monepantel on isometric contractions of an Ascaris body flap preparation. Application of increasing concentrations of acetylcholine from 1 to 100 μM produced concentration-dependent contraction responses which were inhibited by monepantel in a concentration-dependent manner. Monepantel on its own did not produce any significant change in baseline tension. Washing reversed the inhibition caused by monepantel to near control levels. Concentration-response plots (mean ± s.e.m.) for acetylcholine and monepantel are shown in B. 1 μM monepantel produced a significant reduction in the maximum response, R max , and also shifted the EC 50 to the right. Increasing the concentration of monepantel to 3, 10 and 30 μM further reduced the R max and caused a further right-shift in the EC 50 , characteristic of a mixture of competitive and non-competitive antagonism. The activity of monepantel on Ascaris muscle flaps was rather interesting, as previous authors have shown monepantel (600 mg/kg) to lack in vivo activity against A. suum ( ). shows EC 50 and R max values for acetylcholine alone and in the presence of 1–30 μM monepantel. In an effort to infer which nAChR subtypes monepantel may be acting on, we generated inhibition plots for Asu-ACR-16, Ode pyrantel/tribendimidine and Ode levamisole subtype nAChRs, C and . The inhibition caused by monepantel ( C) on Asu-ACR-16 had 2 components, one with an IC 50 of 1.6 ± 3.1 nM, and the other with an IC 50 of 0.2 ± 2.3 μM, suggesting the likelihood of more than one binding site for monepantel on Asu-ACR-16. The IC 50 of monepantel for Ode pyrantel/tribendimidine receptors (1.7 ± 0.7 μM) was lower than that for the Ode levamisole receptors (5.0 ± 0.5 μM); the Ode pyrantel/tribendimidine receptors therefore appear to be more sensitive to monepantel than the Ode levamisole receptors. When we compare these results with those of the in vivo effects shown in B, monepantel appears to be acting on a mixture of the Asu-ACR-16, Ode pyrantel/tribendimidine and Ode levamisole nAChRs.
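A two-component concentration-inhibition relationship of the kind reported here for Asu-ACR-16 can be fitted with a simple two-site model, as sketched below. This is an assumption-laden illustration (Hill slopes fixed at 1, parameter names invented here), not the analysis actually used in the study.

```python
# Sketch of a two-site concentration-inhibition fit: two independent binding
# components whose fractional contributions sum to 100% inhibition.
import numpy as np
from scipy.optimize import curve_fit

def two_site_inhibition(conc, frac1, ic50_1, ic50_2):
    """Percent inhibition from two independent binding sites (Hill slopes of 1)."""
    site1 = frac1 * conc / (conc + ic50_1)
    site2 = (100.0 - frac1) * conc / (conc + ic50_2)
    return site1 + site2

def fit_two_site(conc_molar, percent_inhibition):
    conc = np.asarray(conc_molar, dtype=float)
    inh = np.asarray(percent_inhibition, dtype=float)
    # Starting guesses: ~40% high-affinity (nM) component, low-affinity (uM) component
    p0 = [40.0, 1e-9, 1e-7]
    popt, _ = curve_fit(two_site_inhibition, conc, inh, p0=p0, maxfev=20000)
    frac1, ic50_1, ic50_2 = popt
    return {"site1_fraction_%": frac1, "site1_IC50_M": ic50_1, "site2_IC50_M": ic50_2}
```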
Discussion

Non-competitive antagonism of monepantel on expressed A. suum and O. dentatum receptors
In contrast to the positive allosteric modulatory effects of monepantel on DEG-3/DES-2 nAChRs, we found monepantel to produce non-competitive inhibition of Asu-ACR-16 and of the Ode levamisole sensitive and Ode pyrantel/tribendimidine sensitive nAChRs. These observations were consistent with results obtained from our muscle contraction assay. In all cases, monepantel produced no change in EC 50 but a significant reduction in R max , an effect that was concentration-dependent. Of the three receptor subtypes expressed in Xenopus oocytes, Asu-ACR-16 was the most sensitive to monepantel, as reflected by its IC 50 values of 1.6 ± 3.1 nM and 0.2 ± 2.3 μM. This was followed by the Ode pyrantel/tribendimidine receptor with an IC 50 of 1.7 ± 0.7 μM, and the Ode levamisole receptor with an IC 50 of 5.0 ± 0.5 μM. Monepantel had a potent inhibitory effect on Asu-ACR-16, involving 2 components: one in which the maximum inhibition was only 40%, giving an IC 50 value of 1.6 ± 3.1 nM, and the other in which the maximum inhibition nearly reached 100%, giving an IC 50 value of 0.2 ± 2.3 μM. These observations suggest that monepantel also acts via negative allosteric modulation, involving more than one binding site, as is the case with abamectin and other negative allosteric modulators of nAChRs ( , ).
Mixed antagonism of monepantel on A. suum muscle flap
Monepantel causes hypercontraction of both C. elegans and H. contortus ( ). In our electrophysiology experiments, monepantel produced an inhibitory effect on inward currents induced by acetylcholine. To further characterize this inhibition, we tested different concentrations of monepantel in the presence of acetylcholine on A. suum muscle flaps. With all concentrations tested, monepantel produced a significant concentration-dependent reduction in R max and a right-shift in EC 50 . In sharp contrast to the effects on C. elegans and H. contortus , this indicates that monepantel is a mixed antagonist of Ascaris muscle contraction. These results are likely due to the mixed nAChR populations on nematode muscle ( , , ). Asu-ACR-16 is extremely sensitive to monepantel; the observed shift in EC 50 in the muscle flap experiment is likely due to the almost complete inhibition of Asu-ACR-16 at the concentrations tested in the muscle. The non-competitive aspect of the inhibition is in agreement with the results obtained from the Ode levamisole sensitive and Ode pyrantel/tribendimidine sensitive receptors.
Conclusion
Our results indicate that monepantel acts as an antagonist of Ascaris muscle contraction, and as a non-competitive antagonist, with subtype selective effects, of expressed nAChR subtypes from A. suum (Clade III) and O. dentatum (Clade V). The non-competitive antagonism of monepantel on expressed nAChRs shown in our research adds to the reported mode of action of monepantel as a positive allosteric modulator of expressed receptors of the DEG-3/DES-2 group of nAChRs. This illustrates the complexity of the drug's mode of action, which involves more than one target site. A detailed understanding of the mode of action of antinematodal drugs is necessary, especially when they are considered for use in combination therapies/products, an approach proven to be highly effective for parasite control. As with many pharmacological agents, we find that the mode of action of monepantel is complex and that the drug is active on multiple nAChR subtypes.
None identified.
Endothelial Dysfunction in Acute Myocardial Infarction: A Complex Association With Sleep Health, Traditional Cardiovascular Risk Factors and Prognostic Markers

Introduction
The endothelium is a single layer of cells that covers the interior of major and minor vessels. It is an interface between the vessel walls and the circulating blood. Endothelial cells play an active role in regulating vascular tone through responses induced by vasodilator and vasoconstrictive stimuli. Furthermore, the endothelium is involved in angiogenesis, hemostasis control, and maintaining blood fluidity by preventing platelet and leukocyte activation . In recent decades, endothelial function has been widely used as a research tool and has gained prominence, particularly in cardiovascular disease (CVD) pathogenesis. It has been recognized as a valuable marker for prognosis following acute myocardial infarction (AMI) due to its role in regulating vascular health . Furthermore, endothelial function is a key component of cardioprotection, which gathers all measures and interventions to prevent, attenuate, and repair myocardial injury . Therefore, cardioprotective strategies aim to reduce infarct size and recover ventricular function after reperfusion of the ischemic area following AMI. In this context, a healthy endothelium, particularly through endothelial-derived nitric oxide (NO) production, provides a cardioprotective effect that limits ischemia-reperfusion (IR) injury. Moreover, elevated NO production has been observed to suppress endothelial cell inflammation and to limit infarct size . Conversely, endothelial dysfunction, marked by reduced NO production and increased sensitivity to vasoconstrictor substances such as endothelin-1 (ET-1) and pro-inflammatory compounds, heightens the risk of IR injury . Indeed, endothelial dysfunction-induced atherosclerosis contributes to the impairment of barrier function associated with the inflammatory response. This consequently promotes the heightened infiltration of low-density lipoprotein and leukocyte extravasation into the vessel walls, ultimately leading to plaque instability . Endothelial function is a strong predictor of CVD and is a major component of CV health. It is well-established that CV health encompasses modifiable lifestyle behaviors, including physical activity levels, diet, and smoking . Evidence demonstrates that low physical activity levels and sedentary behavior are associated with metabolic disorders and poor CV health, including endothelial dysfunction, which increases the risk of CVD . To counter poor lifestyle habits, a new field of medicine, "Lifestyle Medicine", was established, which aims to prevent and treat chronic disease using behavioral techniques and therapies. Cardiologists are drawn to lifestyle medicine practices because of their involvement in three subspecialty areas: behavioral cardiology, preventive cardiology, and cardiac rehabilitation . In this line, an initiative named "Life's Simple Seven" was launched by the American Heart Association to address unhealthy lifestyle behaviors and to advance preventive cardiology . Despite the efforts made to reduce the incidence of CVD, behavioral counseling targeting tobacco use, healthy diet, and physical activity practice was reduced in low and middle-income countries .
Recently, the American Heart Association added sleep to its list of traditional risk factors in a 2022 update known as "Life's Essential Eight," ranking it as the eighth major component of CV health . Interestingly, sleep is considered a novel independent risk factor for atherosclerotic CVD . Sleep is a complex and multidimensional parameter, encompassing various metrics such as sleep duration and sleep quality. It plays a key role in maintaining an optimal body system by promoting CV health, regulating hormones, strengthening the immune system, and consolidating cognitive functions . Moreover, most physiological processes, including heart rate, heart rate variability, blood pressure, platelet activity, vagal modulation, cardiomyocyte function, and endothelial cell function, display a circadian rhythm regulated mainly by sleep/wake and fasting/feeding cycles . A clear link between synchronization of the human circadian clock and CV health has been established . In this line, the daily light/dark pattern is considered a new modifiable lifestyle factor and a synchronizer of the circadian clock . It has been demonstrated that coronary artery disease (CAD) patients usually suffer from sleep disturbances, with reduced levels of melatonin and desynchronization of the circadian clock . Indeed, sleep disorders are associated with increased levels of inflammatory markers, endothelial dysfunction, and reduced cardiac vagal modulation, leading to CAD . The connections between endothelial function and sleep patterns have received little attention in cardiology studies. Therefore, the main objective of this study was to investigate whether poor sleep quality and quantity were associated with endothelial dysfunction in CAD patients after AMI. In addition, a secondary purpose of this study was to investigate whether traditional CV risk factors were associated with endothelial dysfunction. This study was also designed to test the hypothesis that endothelial function could be a prognostic indicator in the short term (cardiorespiratory fitness, left ventricular ejection fraction (LVEF), and severity of CAD status) and mid term (major adverse acute and chronic events (MAACE)).
Methods

Study Design
Ethical approval for this study was obtained from the local Ethics and Investigation Committee (CPP SUD N° 0331/2021). The study protocol was registered in the Pan African Clinical Trial Registry under the trial ID PACTR202208834230748. This study was conducted according to the Declaration of Helsinki.
Participants This study enrolled patients with acute ST‐elevation myocardial infarction (STEMI) who underwent a primary percutaneous coronary intervention (PCI) within 120 min of the onset of chest pain. For patients who did not have immediate access to primary PCI, the procedure was performed within 24 h after successful immediate fibrinolysis, following the guidelines of the European Society of Cardiology . Patients with non‐STEMI included in this study underwent a primary PCI within 24 h.
Echocardiography Parameters LVEF was measured by 2D echocardiography (EPIQ CVx model; Philips, Bothell, WA, USA) on admission and 24 h post‐primary PCI.
Endothelial Function
Endothelial function was assessed between the second and the third week post-primary PCI using an E4-diagnose device (Polymath Company, Tunisia) that fully automates the post-occlusion hyperemia protocol. The E4-diagnose is a noninvasive, high-resolution (0.002°C) skin temperature measuring device composed of two finger temperature sensors, an integrated wrist cuff placed on the dominant forearm, and a portable micro unit controller. Notably, measurements in the nondominant hand serve as an internal control. Dedicated PC software was used to view, store, and export data. All measurements were performed in the morning, in a dimmed and quiet room with an ambient temperature between 22°C and 24°C. Patients had fasted for at least 8 h, had refrained from smoking and heavy physical activity for at least 4 h, and were kept in a relaxed sitting position for at least 20 min before the test. Before starting the test, the finger temperature had to be above 27°C. Baseline temperature was recorded during the first 5-min period, followed by a 5-min cuff occlusion period. During the occlusion period, the temperature of the dominant hand index finger decreased due to the absence of warm circulating blood. Following the 5-min cuff occlusion period, the cuff was released and the hyperemic blood flow was restored to the dominant hand, causing the "temperature rebound" recorded in the index finger, which reflects endothelial function as expressed by the Endothelium Quality Index (EQI) . Based on the EQI, endothelial function was classified as follows: EQI > 2: healthy endothelial function, 1 ≤ EQI ≤ 2: mild endothelial dysfunction, EQI < 1: severe endothelial dysfunction .
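The EQI-based classification used in this study maps directly onto a simple decision rule; a minimal sketch is shown below (the function name is illustrative, and the thresholds are those stated above).

```python
# Minimal sketch of the EQI-based classification used in this study.
def classify_endothelial_function(eqi: float) -> str:
    if eqi > 2:
        return "healthy endothelial function"
    if eqi >= 1:          # 1 <= EQI <= 2
        return "mild endothelial dysfunction"
    return "severe endothelial dysfunction"   # EQI < 1
```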
Sleep Quality and Quantity
A wrist-worn accelerometer, based on motion sensors across three axes, was used to objectively assess sleep quality and quantity (ActiGraph GT3X, ActiGraph Inc, Pensacola, FL, USA). All patients wore the accelerometer during the third week post-primary PCI. All accelerometers were initialized to a sampling rate of 30 Hz. The start time was set to midnight on the first day and the stop time was set to midnight on the seventh day. ActiLife software (version 6.8.1; ActiGraph LLC, Pensacola, FL, USA) was used to export the data, which were extracted in 60 s epochs to optimize the signal-to-noise ratio. It is worth noting that the ActiGraph GT3X has been validated for the detection of sleep periods, as well as periods of wakefulness, based on wrist movements during the night. The Cole-Kripke algorithm was used to measure sleep efficiency, sleep duration, wake-after-sleep onset, and total time in bed . The Pittsburgh Sleep Quality Index (PSQI) questionnaire was used to complement the objective assessment with a subjective assessment. The PSQI global score is the sum of seven components, each equally scored on a 0–3 scale: subjective sleep quality, sleep duration, sleep latency, sleep efficiency, use of sleep medication, sleep disturbances, and daytime dysfunction. Patients with a PSQI score above 5 were considered to have poor sleep quality .
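The PSQI global score used here is simply the sum of the seven component scores, with a score above 5 taken to indicate poor sleep quality. A minimal sketch of this computation is shown below; the component keys are illustrative, and the derivation of each component from the individual questionnaire items is not reproduced.

```python
# Minimal sketch of the PSQI global score: the sum of seven component scores
# (each 0-3, giving a 0-21 range), with > 5 taken as poor sleep quality.
PSQI_COMPONENTS = (
    "subjective_quality", "sleep_latency", "sleep_duration", "sleep_efficiency",
    "sleep_disturbances", "sleep_medication_use", "daytime_dysfunction",
)

def psqi_global_score(components: dict):
    missing = set(PSQI_COMPONENTS) - set(components)
    if missing:
        raise ValueError(f"missing PSQI components: {missing}")
    total = sum(int(components[name]) for name in PSQI_COMPONENTS)
    return total, total > 5   # (global score, poor-sleep-quality flag)
```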
Physical Activity Level The physical activity level was assessed using the Ricci–Gagnon scale (RG; Montreal). It consists of nine items that assess sedentary behavior, leisure physical activity, and daily physical activity with a 5‐point Likert scale. Participants are considered inactive when the sum of the nine items is < 18, and active when this sum is ≥ 18 .
Cardiorespiratory Fitness Distance covered during the 6‐min walking test (6mwt) was considered a proxy for cardiorespiratory fitness. The test was conducted in a 30‐meter hospital corridor during the third week post‐primary PCI. Patients were asked to walk as far as possible for 6 min and to stop in case of palpitations, dizziness, or rapid change in vital signs. Total distance after the 6 min was recorded, as well as perceived exertion with the Borg's scale. Heart rate, blood pressure, and oxygen saturation were measured before and after the test.
Statistical Analyses
The normality of the data distribution was assessed by the Shapiro-Wilk test. The null hypothesis of an absence of differences between groups was tested with a one-way ANOVA when data were normally distributed, or with a Kruskal-Wallis test when they did not fulfill this condition. A Bonferroni post-hoc test was used for pairwise comparisons when the null hypothesis was rejected. The null hypothesis of an absence of difference between groups in categorical variables regarding clinical characteristics was analyzed with the chi-squared test. The null hypothesis of an absence of association between relevant parameters was tested with the Pearson product-moment correlation when data were normally distributed, or with the Spearman rank-order correlation when they were not. Multivariable regression was used to explore whether certain clinical factors (age, body mass index (BMI), dyslipidemia, diabetes mellitus, HbA1c, smoking, and physical activity level) have an impact on endothelial function, and to identify short-term outcomes associated with endothelial dysfunction. An α risk of 5% was retained for all analyses. All statistical analyses were performed with SPSS (Statistical Package for the Social Sciences) version 26 (SPSS Inc., Chicago, IL, USA).
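The test-selection logic described above (Shapiro-Wilk normality check, then one-way ANOVA or Kruskal-Wallis) can be sketched as follows. This is an illustrative Python/SciPy snippet, not the SPSS procedure actually used; the group arrays and the significance threshold are assumptions.

```python
# Illustrative sketch of the test-selection logic: Shapiro-Wilk normality check,
# then one-way ANOVA (normal data) or Kruskal-Wallis (non-normal data) across
# the three endothelial function groups.
from scipy import stats

def compare_three_groups(healthy, mild, severe, alpha=0.05):
    groups = [healthy, mild, severe]
    all_normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    if all_normal:
        result = stats.f_oneway(*groups)
        test_name = "one-way ANOVA"
    else:
        result = stats.kruskal(*groups)
        test_name = "Kruskal-Wallis"
    return test_name, result.statistic, result.pvalue
```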
Results

A total of 63 patients with a mean age of 56.2 ± 7.6 years were included in this study following AMI. Their clinical characteristics are summarized in Table .

Endothelial Function
Severe endothelial dysfunction was observed in 23.8% of patients ( n = 15), while mild endothelial dysfunction was present in 63.49% ( n = 40). Endothelial function was considered normal in 12.7% of patients ( n = 8) (Table ). The mean EQI was 1.4 ± 0.7.
Sleep Patterns Sleep parameters in patients following AMI were summarized in Table .
Endothelial Function and Sleep Patterns
An association was found between EQI and sleep efficiency (r = 0.34, p = 0.006) (Figure ), as well as between EQI and PSQI scores (r = −0.53, p < 0.001) (Figure ). The Kruskal-Wallis test revealed a difference in sleep efficiency (H = 6.013, p = 0.049, η2 = 0.07) and subjective sleep quality (H = 13.231, p = 0.001, η2 = 0.19) between endothelial function categories (Table ). Bonferroni post-hoc analysis revealed that patients with healthy endothelial function had better subjective sleep quality ( p = 0.001) and better sleep efficiency than patients with severe endothelial dysfunction ( p = 0.016). No difference was found between endothelial function groups in sleep duration, total time in bed, or wake after sleep onset (Table ).
Effect of Traditional CV Risk Factors on Endothelial Function Multiple regression analyses showed that approximately 56.4% of the variance in endothelial function is related to CV risk factors (F (6,56) = 8.749, R 2 adjusted = 0.50, p < 0.001). Based on the analysis, the primary risk factors believed to influence endothelial function are identified as physical activity level, age, and smoking as presented in Table . We found a difference in physical activity levels between groups (H = 23.22, p < 0.001, η2 = 0.35). Patients with healthy endothelial function were more active than patients presenting endothelial dysfunction, whatever its severity ( p < 0.01). EQI was inversely correlated with concentration of triglycerides (r = −0.275, p = 0.029) and HbA1c (r = −0.315, p = 0.012) in blood.
Short-Term Outcomes
Multiple regression analyses showed that endothelial dysfunction affected cardiorespiratory fitness following AMI (F (3,59) = 3.185, R 2 adjusted = 0.096, p = 0.03). However, the severity of CAD and LVEF were not affected by endothelial dysfunction, as shown in Table . A one-way ANOVA revealed a significant difference in 6mwt between the endothelial function groups (F (2, 60) = 4.56, p < 0.014, η2 = 0.13) (Table ). The Bonferroni post-hoc test revealed that patients with healthy endothelial function exhibited higher cardiorespiratory fitness in comparison to those with severe endothelial dysfunction ( p = 0.017). Moreover, a significant correlation was found between EQI and 6mwt (r = 0.291, p = 0.021) using the Spearman test (Figure ).
Mid‐term Follow‐Up The average duration of mid‐term follow‐up for MAACE was 4 months. Out of the 63 patients, four individuals encountered rehospitalization due to acute coronary syndrome. All these patients exhibited severe endothelial dysfunction, as indicated by EQI values of 0.78, 0.01, 0.97, and 0.99, respectively. Among these cases, two patients exhibited double‐vessel disease, while two had triple‐vessel disease. Moreover, one patient with an EQI value of 0.28 and incompletely revascularized triple‐vessel CAD died 4 weeks after AMI.
Discussion In this study, endothelial function was evaluated in the context of AMI treated with primary PCI. It is a practice not commonly implemented in patient management. It is well established that endothelial dysfunction plays a crucial role in atherogenesis and the development of CVD. The findings of this pilot study showed that 87.3% of the patients following AMI tend to experience endothelial dysfunction. Our results are consistent with a previous study reporting that endothelial dysfunction, assessed using the reactive hyperemia peripheral artery tonometry method, was present in 50% of the patients with AMI after primary PCI . Endothelial dysfunction can cause arterial vasospasm that may reduce blood flow and aggravate arteriosclerotic plaque formation leading to acute coronary syndrome . Indeed, myocardial ischemia arises when the metabolic requirements of the myocardium are not sufficiently fulfilled due to decreased blood flow . However, it is not well understood whether endothelial dysfunction is already impaired before AMI or if it was induced by IR. It has been well demonstrated that a sudden supply of oxygen during reperfusion induces a more proinflammatory response in endothelial cells already caused by ischemia. The overproduction of reactive nitrogen species, exacerbation of oxidative stress associated with the overproduction of reactive oxygen species that reduce NO bioavailability, and transmigration of leukocytes through the endothelium are the major mediators contributing to endothelial dysfunction . To the best of our knowledge, limited studies have explored the relationship between sleep components and CV health in cardiology. This study seeks to address this gap in the literature by examining sleep quality and quantity in patients following AMI and investigating its potential link with CV health indicators such as endothelial function. Sleep health has been recognized as a modifiable CV risk factor and a novel component of CV health associated with a broad spectrum of CV events, including AMI . In this context, a most recent review and a large meta‐regression analysis of 40 prospective cohort studies showed that poor sleep quality and insufficient or excessive sleep duration were associated with increased systemic inflammation and reduced cardiac vagal modulation, which could be linked to a higher risk of CVD and all‐cause mortality . The present study showed that patients with endothelial dysfunction tend to experience poor sleep quality as compared to those with healthy endothelial function. These results align with a previous study involving 684 healthy subjects revealing an inverse association between endothelial function assessed through flow mediation dilation (FMD) and PSQI score, along with the Epworth Sleepiness Scale score . Furthermore, a more recent meta‐analysis emphasized the link between endothelial function and sleep disturbances, highlighting that lower FMD was linked to shorter sleep duration and circadian misalignment often observed in shift workers . Notably, none of the studies establishing the association between sleep patterns and endothelial function included patients with CVD. Therefore, including patients with CVD underscores the originality of our current study. Sleep health has been recognized as a core component of CV health. Notably, during normal sleep, heart rate and blood pressure are decreased. Furthermore, the dominance of either the sympathetic or parasympathetic nervous system characterizes each sleep phase . 
Consequently, poor sleep health elicited an imbalance in the autonomic nervous system and neurohormones with overactivity of sympathetic tone that plays a crucial role in the development of hypertension by increasing vascular resistance . Notably, high blood pressure exerts excessive mechanical stress on endothelial cells and exacerbates oxidative stress that decrease the bioavailability of NO contributing to endothelial dysfunction. In this line, the link between sleep disturbances and endothelial dysfunction could be explained by the presence of peripheral clocks within endothelial and vascular smooth muscle cells, which are controlled by the central biological clock located in the suprachiasmatic nuclei. These clocks regulate the circadian oscillation of NO and ET‐1 . The synchronization between the central clock and peripheral clocks located in cardiac tissue is controlled by melatonin, a key circadian hormone produced mainly by the pineal gland. Melatonin has a potential role in maintaining and stabilizing circadian rhythms . Notably, circadian misalignment associated with sleep disturbances is caused by reduced levels of melatonin observed in CVD patients . Consequently, the balance between NO and ET‐1 is disrupted, thereby impairing endothelial function. In addition, melatonin has a cardioprotective effect against IR injury, which is characterized by the proinflammatory response and exacerbation of oxidative stress contributing to atherosclerosis and endothelial dysfunction . Numerous studies have confirmed that melatonin is a potent free radical scavenger and has a powerful capacity to activate antioxidant enzymes . Several studies have shown that sleep disorders can affect lifestyle behaviors, including diet timing and physical activity practice, contributing to an increased risk of CVD . In the literature, a limited number of studies have established a clear link between lifestyle behaviors such as physical activity level, considered a modifiable CV risk factor, and endothelial function in patients with CVD. In the current study, multiple regression analysis revealed a significant association between low physical activity levels and endothelial dysfunction. In addition, comparison group analysis showed that active patients had better endothelial function than sedentary patients. Accordingly, a study among 2363 patients with prediabetes and type 2 diabetes demonstrated that sedentary behavior was linked to increased low‐grade inflammation biomarkers (i.e., C‐reactive protein, soluble intercellular adhesion molecule‐1, IL‐6, and TNF‐α), as well as endothelial dysfunction biomarkers . In this line, a more recent study among chronic kidney disease patients showed that endothelial dysfunction biomarkers increased with sedentary behaviors associated with increased oxidative stress leading to reduced NO bioavailability . Conversely, higher levels of physical activity and the regularity of moderate to vigorous intensity activity displayed an inverse correlation with endothelial dysfunction and low‐grade inflammation biomarkers . These findings underscore the cardioprotective effects of physical activity, which enhances endothelial function by inducing shear stress that increases the bioavailability of NO . Supporting this, a meta‐analysis revealed that exercise training improved endothelial function, particularly in patients with CVD . 
Additionally, it was well demonstrated that individuals who practice physical activity even under the recommended threshold (150 min of moderate aerobic physical activity per week, 75 min of vigorous aerobic physical activity per week, or a combination of both) had a lower mortality risk compared to sedentary individuals . Additionally, a most recent study demonstrated that engaging in moderate to vigorous physical activity above the recommended threshold can mitigate the adverse effects of sleep disorders, as well as all‐cause and CV mortality . Interestingly, increasing daily step counts was associated with improved CV health outcomes . In sum, patients following AMI should reduce the sitting time associated with detrimental effects and practice moderate to vigorous physical activity coupled with resistance training contributing to greater clinical effects, particularly on CV health . In this study, an inverse correlation was observed between EQI and HbA1c, as well as triglyceride levels. In this line, a study among patients with type 2 diabetes mellitus demonstrated the presence of a U‐shaped pattern of association between HbA1c and endothelial function. Endothelial dysfunction was observed in patients with HbA1c below 6.5% and above 7% . Furthermore, our results align with a most recent review highlighting an association between dyslipidemia and endothelial dysfunction . Additionally, our findings indicated that age and smoking were significant factors linked to endothelial dysfunction. This is consistent with evidence showing that older individuals and smokers are more prone to endothelial dysfunction . These traditional CV risk factors are linked to an exacerbation of oxidative stress, which is the primary cause of reduced bioavailability of NO. The rapid scavenging of NO by reactive oxygen species can lead to the formation of highly prooxidant compounds, which could be responsible for endothelial dysfunction . It has been established that endothelial function not only reflects poor prognosis but also predicts outcomes after acute coronary syndrome. In the present prospective study, endothelial dysfunction was associated with the occurrence of MAACE such as recurrent acute coronary syndrome and CV death during the mid‐term follow‐up. These findings confirm our hypothesis that endothelial function is considered a prognosis marker in CAD patients. In this regard, a strong connection between assessed endothelial dysfunction and an elevated risk of in‐stent restenosis following primary PCI has been observed . Furthermore, among patients with non‐STEMI, endothelial dysfunction is linked to an increased risk of CV events within 14 months, particularly in the presence of diabetes mellitus . While numerous studies have focused on the association between endothelial dysfunction and MAACE in the mid‐term period after cardiac events, there is a growing interest in exploring its impact on short‐term clinical outcomes following AMI such as cardiorespiratory fitness, CAD status, and LVEF. These outcomes are considered a crucial prognostic marker following AMI. The findings of our study showed that endothelial function was strongly associated with exercise capacity. Cardiorespiratory fitness reflects the synergetic functioning of cardiac function, pulmonary ventilation, and the capacity of vascular function to deliver and unload oxygen. 
It has been demonstrated that low exercise capacity, which is associated with CV risk factors such as obesity and low physical activity levels, is linked to bad prognosis and increased mortality risk . In this line, reduced exercise tolerance evaluated with 6mwt is considered a short‐term clinical outcome related to increased risk of death and reinfarction in patients with AMI . Reduced exercise capacity after primary PCI is a result of the overproduction of reactive oxygen species, which in turn leads to muscle cell damage. Our results are comparable with previous findings among patients with chronic obstructive pulmonary disease indicating a significant correlation between endothelial function evaluated with FMD and 6mwt . During exercise, NO bioavailability plays a crucial role in increasing blood flow to exercising skeletal muscle, reducing blood pressure, and more importantly, reducing oxygen cost at a given workload . Moreover, while NO plays a crucial role in regulating glucose uptake during exercise, endothelial dysfunction is linked with energy metabolism disorders that impair exercise tolerance . Alongside cardiorespiratory fitness, the severity of CAD was assessed due to its relative importance in prognosis following AMI. In this population, 60.3% of patients had single‐vessel disease and 39.6% presented multi‐vessel disease. Similarly, a previous study has found that 40.1% of AMI patients presented multi‐vessel disease . However, our study did not find a significant link between the severity of CAD and endothelial function. The absence of a significant correlation may be attributed to the relatively infrequent presence of multi‐vessel disease, especially cases of triple‐vessel disease, within the study's participants. Regarding cardiac function, we noted the absence of a link between endothelial function and LVEF following AMI. Ischemic heart failure characterized by reduced LVEF, which implies cardiac dysfunction, is a short‐term clinical outcome that occurs in approximately 30%–40% of patients following AMI. Moreover, it carries a poor cardiac prognosis even after primary PCI . Additionally, endothelial dysfunction is widely linked to an increased risk of mortality in patients with heart failure . Recently, it was demonstrated that patients with ischemic heart failure exhibited endothelial dysfunction with lower FMD . Endothelial dysfunction is widely associated with fibrosis considered a key element in myocardial remodeling leading to a reduction in vascular compliance, imposing elevated loads on the left ventricle contributing to left ventricle stiffness and heart failure . In this line, a linear improvement in FMD was related to an improvement in LVEF . The reduced number of patients with reduced LVEF can explain our results. The findings of this study have significant clinical implications for enhancing patient management following AMI. A crucial step is evaluating various sleep metrics, including quality and quantity, alongside traditional CV risk factors. Additionally, assessing endothelial function, an often‐overlooked clinical measure following AMI, is recommended. To improve prognosis and prevent MAACE, it is essential to promote better sleep health. 
This can be achieved through key lifestyle practices (i.e., sleeping between 7 and 9 h per night, maintaining a regular bedtime and sleep duration, increasing daylight exposure, and reducing evening light exposure) that regulate light and dark patterns, thereby reducing circadian desynchronization and enhancing endothelial function. Furthermore, post‐AMI patients are encouraged to adhere to guideline‐recommended supervised physical activity levels to optimize CV health. In addition, it is crucial to better control dyslipidemia and glycemia, as well as smoking cessation to mitigate endothelial function and MAACE. This study addresses detrimental lifestyle habits, by incorporating behavioral therapies and promoting a healthy and active lifestyle. This approach is particularly relevant for implementing “Lifestyle Medicine” in low and middle‐income countries, serving as a cost‐effective strategy for primary and secondary prevention. 4.1 Study Limitations Our study has several limitations. It was a single‐center study presenting a relatively small number of patients which limits the generalizability of the findings, necessitating future research with larger cohorts to validate these results. Additionally, the study's duration of mid‐term follow‐up for MAACE was relatively short at approximately 4 months, which may not sufficiently capture mid‐term outcomes. Furthermore, we evaluated endothelial function using the E4‐Diagnose device, which is not the gold standard for assessing endothelial function. Further studies should use FMD to confirm the link between endothelial function and measured outcomes. Additionally, the majority of the patients had preserved LVEF, which may not adequately represent the variability in endothelial dysfunction across different LVEF levels. Future studies should include a more diverse patient population, especially those with heart failure, to better understand the relationship between LVEF and endothelial dysfunction. Additionally, further investigation should measure oxidative stress and additional measurements for inflammatory biomarkers such as IL6, which is considered a prominent mediator of sleep disturbances, physical inactivity, and endothelial dysfunction. In our study, we were unable to measure oxidative stress biomarkers and IL‐6 due to the high cost and lack of funding.
Conclusion

Patients who have experienced AMI and received primary PCI displayed endothelial dysfunction. This impairment was linked not only to traditional CV risk factors, such as low levels of physical activity, but also to poor sleep quality. Moreover, endothelial dysfunction affected exercise capacity and was associated with MAACE. These findings emphasize the importance of maintaining an active lifestyle and addressing sleep-related issues to improve endothelial function, which is a strong prognostic marker of short- and mid-term outcomes.
This study was conducted according to the Declaration of Helsinki.
Before enrollment in the study, all patients were thoroughly informed about the study's protocol and provided written informed consent.
The authors declare no conflicts of interest.
Supplementary Figure 1: Association between the 6-minute walking test and Endothelium Quality Index. 6mwt: 6-minute walking test, EQI: Endothelium Quality Index. Supporting information.
|
Study of the Prevalence of Human Papillomavirus Genotypes in Jeddah, Saudi Arabia | 0c36ad10-0858-42d8-abbf-9add9d097e3d | 11732795 | Biopsy[mh] | Human papillomavirus (HPV) is one of the most prevalent sexually transmitted infections globally and, in most cases, is cleared by the immune system within a few years. However, persistent HPV infections can lead to genital and cutaneous warts and are the primary cause of most invasive cervical cancers and their precursors. It is estimated that approximately half of all individuals, regardless of gender, will experience an HPV infection at some point in their lives. HPV is transmitted mainly through direct skin-to-skin contact during sexual activity, including vaginal, anal, and oral sex. It can also be spread through genital-to-genital contact and, less commonly, via fomites, such as objects or surfaces that come into contact with infected areas. HPVs are classified into high-risk (HR-HPV) and low-risk (LR-HPV) types based on their potential to cause cancer. While LR-HPVs are associated mainly with the formation of warts, they can also contribute to the development of certain oropharyngeal and anogenital cancers, affecting areas such as the anus, cervix, vagina, vulva, and penis. Notably, women with HR-HPV types are at increased risk of developing cervical cancer within 3 to 5 years after infection. HPV is a small, nonenveloped virus with a circular double-stranded DNA genome of approximately 8 kb. Papillomaviruses belong to the Papillomaviridae family and have tropism for the squamous epithelium. The viral genome encodes eight proteins: two late structural proteins (L1 and L2) that form the capsid and six early nonstructural regulatory proteins (E1, E2, E4, E5, E6, and E7). The E6 and E7 oncoproteins are key players in disrupting cellular processes involved in transformation, such as cell cycle regulation and apoptosis. They achieve this primarily by targeting and inactivating the host's p53 and retinoblastoma (pRb) proteins.

Although cervical cancer is preventable, it ranks as the fourth most common cancer in women worldwide, with approximately 600,000 new cases in 2020, representing 6.9% of all female cancers. In Saudi Arabia, cervical cancer is the ninth most common cancer among women aged 15–44 years, with an incidence of 2.4 per 100,000 women in 2020. Persistent infection with oncogenic HPV strains, particularly HPV 16 and 18, is the leading cause of progression from precancer to invasive cancer.

Research on the prevalence of HPV in Saudi Arabia is limited, with a primary focus on hospital records of symptomatic patients. To address this gap, the Saudi Public Health Authority introduced the Saudi Clinical Preventative Guidelines 2023 for HPV screening and vaccination, which recommend two vaccine doses for females aged 11–12 years and cervical cancer screening via Pap smears every three years for sexually active women over 30 years. However, without a national HPV screening program, most cervical cancer patients are diagnosed at advanced stages, necessitating extensive treatment. Early HPV screening has the potential to prevent the progression of cervical cancer. Currently, three FDA-approved HPV vaccines are available: the bivalent vaccine, which targets HPV 16 and 18; the quadrivalent vaccine, which covers HPV 6, 11, 16, and 18; and the nonavalent vaccine, which protects against nine HPV types. Despite their effectiveness, awareness and uptake remain low.
To effectively curb HPV transmission and reduce the incidence of cervical cancer, epidemiological studies are essential to determine the prevalence of HPV genotypes. This information can help guide vaccination and screening strategies, particularly through awareness campaigns. This study aimed to assess the prevalence of HPV genotypes in Jeddah, Saudi Arabia, categorizing HPV types by risk level to promote early detection of precancerous lesions associated with HPV and ultimately reduce the number of cervical cancer cases. The study also evaluated the effectiveness of vaccination programs in reducing HPV infections, providing insights to guide future prevention and health policy strategies in the region. Addressing HPV-related challenges in Saudi Arabia requires culturally sensitive education campaigns to raise awareness about HPV risks and the benefits of vaccination and screening. Using both traditional and digital media, collaborating with healthcare providers, religious leaders, and community influencers, and implementing multilingual school programs and workshops can enhance outreach efforts. However, barriers such as limited awareness, misconceptions, gaps in the healthcare system, misinformation, the absence of mandatory policies, and the need for parental consent hinder vaccine uptake. This study provides critical data on HPV prevalence and genotypes to support targeted vaccination and education efforts. Strategies like community outreach, subsidized vaccines, partnerships with cultural leaders, mobile vaccination units, and flexible hours can improve vaccination rates, reduce HPV-related risks, and inform global prevention strategies.
Study Design and Population

This descriptive, cross-sectional study was conducted between May 2022 and February 2023 at King Abdulaziz University Hospital (KAUH). Ethical approval was obtained from the Unit of Biomedical Ethics, Research Ethics Committee, King Abdulaziz University (Reference No. 119-22). Informed consent was obtained from all participants. Laboratory work was carried out at the Special Infectious Agents Unit-BSL3, King Fahd Medical Research Center, King Abdulaziz University.
Sample Collection

A total of 106 Pap smear samples were collected in collection vials (BD SurePath, USA) from women attending gynecology clinics at KAUH. Participants were eligible if they were over 15 years old and provided signed informed consent. After consenting, each participant provided a Pap smear sample and completed a questionnaire covering demographic information, medical history, and risk factors. All data were reviewed, verified, and securely stored. Pap smear samples were stored at 4 °C and transported to the Special Infectious Agents Unit at King Fahd Medical Research Center, King Abdulaziz University, for laboratory analysis.
HPV Detection and Genotyping

DNA was extracted from the Pap smear samples via the ExiPrep™ 96 Viral DNA/RNA Kit (Catalog No. A-5250) on an ExiPrep™ 96 Lite instrument (Bioneer Corp., Daejeon, Republic of Korea) following the manufacturer's instructions. To detect the presence of HPV, nested PCR* was performed using the primers MY09/11 and GP5+/6+ in the L1 region and GoTaq Green Master Mix (Promega, Mannheim, Germany) on a 96-well thermal cycler (Eppendorf). The PCR amplicons were electrophoresed on a 2% agarose gel with ethidium bromide staining and visualized under UV; the PCR band size was 150 bp. Positive PCR products were purified via a DNA Gel Extraction Kit (NORGEN BIOTEK CORP, cat. 13100) following the manufacturer's instructions. The purified PCR products were then subjected to Sanger sequencing* using cycle sequencing with internal primers on an ABI 3500 Automatic Sequencer (Life Technologies, Carlsbad, CA, USA) with BigDye Terminator v3.1 (Life Technologies, Carlsbad, CA, USA) according to the manufacturer's protocol. The obtained sequences were aligned with HPV reference sequences from the National Center for Biotechnology Information (NCBI). Alignment was performed via ClustalW, and phylogenetic analysis was performed in MEGA11 software using the neighbor-joining method with the maximum composite likelihood model and 1000 bootstrap replicates. *Nested PCR: A two-step PCR method enhancing specificity by using internal primers in the second reaction to target a specific DNA region. *Sanger Sequencing: A method to determine DNA sequences using chain-termination chemistry and electrophoresis for precise nucleotide identification.
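For readers who prefer a scriptable route, the sketch below outlines a comparable neighbor-joining workflow in R using the ape package; this is only an illustration, not the authors' MEGA11 procedure. The input file name is a placeholder, and because ape does not provide MEGA's maximum composite likelihood distance, a Kimura two-parameter distance is substituted here.

```r
# Illustrative neighbor-joining tree with bootstrap support (ape package).
# "hpv_L1_sequences.fasta" is a hypothetical aligned FASTA of study amplicons
# plus NCBI reference sequences.
library(ape)

aln <- read.dna("hpv_L1_sequences.fasta", format = "fasta")

# K80 distance used as a stand-in for MEGA's maximum composite likelihood model
d    <- dist.dna(aln, model = "K80")
tree <- nj(d)                                   # neighbor-joining tree

# Bootstrap support from 1000 resamplings of alignment columns
bs <- boot.phylo(tree, aln,
                 FUN = function(x) nj(dist.dna(x, model = "K80")),
                 B = 1000)

plot(tree)
nodelabels(bs)                                  # bootstrap values at internal nodes
```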
Statistical Analysis

The study analyzed Pap smear samples using advanced molecular techniques, including DNA extraction, nested PCR, gel electrophoresis, Sanger sequencing, and phylogenetic analysis, to detect and genotype HPV with high sensitivity. Positive samples were precisely genotyped, with sequence alignment and analysis conducted via ClustalW and MEGA11. Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS) version 23. Descriptive statistics were applied to determine the prevalence of HPV, while chi-square tests were used to explore associations between sociodemographic factors and study outcomes. Statistical significance was set at a P value < 0.05.
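As a minimal illustration of the chi-square comparisons described above (the authors used SPSS v23, not R), the snippet below rebuilds the nationality-by-HPV-status table reported in the Results (12/96 Saudi and 4/10 non-Saudi women HPV-positive). Without a continuity correction it returns P ≈ 0.02; given the small expected counts, Fisher's exact test is shown as a sensible alternative.

```r
# 2x2 contingency table: HPV status by nationality (counts from the Results)
tab <- matrix(c(12, 84,   # Saudi:     HPV-positive, HPV-negative
                 4,  6),  # non-Saudi: HPV-positive, HPV-negative
              nrow = 2, byrow = TRUE,
              dimnames = list(c("Saudi", "non-Saudi"), c("HPV+", "HPV-")))

chisq.test(tab, correct = FALSE)  # Pearson chi-square, P ~ 0.02
fisher.test(tab)                  # exact test, preferable with small expected counts
```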
Study Population

This study involved 106 participants from KAUH. Each participant provided a Pap smear sample and completed a related questionnaire. The distribution of HPV positivity was assessed based on age group, nationality, and marital status. The mean age of the population was 43.0 years (SD = 11.441), ranging from 24 to 77 years. Among the age groups, individuals aged 24–34 years accounted for 21 (19.8%) participants, of whom five tested positive for HPV, yielding a positivity rate of 4.7% (P = 0.21). The 35–44 years age group, consisting of 46 (43.4%) individuals, had 10 HPV-positive cases, representing 9.4% (P = 0.09). In the ≥ 45 years age group, 39 (36.8%) participants were tested, and only one was HPV positive, accounting for 0.9% (P = 0.006*), indicating a statistically significant difference, as illustrated in Table . Nationality analysis revealed that among 96 (90.6%) Saudi participants, 12 tested positive (11.3%), whereas among 10 (9.4%) non-Saudi participants, 4 tested positive (3.8%), indicating a statistically significant difference with a P value of 0.02*. In terms of marital status, among the 92 (86.8%) married participants, 15 tested positive (14.2%, P = 0.37). In contrast, of the 14 (13.2%) unmarried participants, only one tested positive (0.9%, P = 0.37). There were no significant differences between marital status categories, as illustrated in Table .
Prevalence and Distribution of HPV Genotypes

Among the 106 samples, 16 tested positive for HPV, resulting in an overall prevalence of 15.1% (95% CI: 8.28−21.92%), whereas 90 samples were HPV negative, representing 84.9% (95% CI: 78.08−91.72%). Among the positive samples, 43.75% were HR-HPV genotypes, all of which were HPV 16 (n = 7). The remaining 56.25% were LR-HPV genotypes, including HPV 6 (n = 4, 25.0%) and HPV 58 (n = 5, 31.25%). HPV 16 thus emerged as the most prevalent genotype, as shown in Table . CI = confidence interval, HR-HPV: high-risk HPV, LR-HPV: low-risk HPV.
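The reported overall prevalence and its 95% confidence interval can be checked with a simple normal-approximation (Wald) calculation; the short snippet below reproduces 15.1% with limits of roughly 8.3–21.9%, matching the reported interval apart from rounding in the last decimal.

```r
# Wald 95% confidence interval for the overall HPV prevalence (16/106)
n_pos <- 16
n_tot <- 106
p_hat <- n_pos / n_tot
se    <- sqrt(p_hat * (1 - p_hat) / n_tot)
ci    <- p_hat + c(-1, 1) * qnorm(0.975) * se
round(100 * c(prevalence = p_hat, lower = ci[1], upper = ci[2]), 2)
# prevalence      lower      upper
#      15.09       8.28      21.91
```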
Age-Specific Prevalence and Distribution of HPV Genotypes

HPV positivity was most frequent in women aged 35–44 years, who accounted for 10/16 (62.5%) of the total positive cases (P = 0.09), whereas women over 45 years presented a notably lower prevalence (1/16, 6.3%; P = 0.006), indicating a reduced likelihood of HPV infection in this age group. In the 35–44 years age group, HPV 16 was predominant, representing 31.3% of all positive cases, although this difference was not statistically significant. Younger age groups (24–34 years) were more likely to test positive than the oldest group. Notably, most HPV genotype 6 and 16 cases were concentrated in the 35–44 years age group. The 24–34 years age group displayed a more balanced distribution between genotypes 16 and 58, with fewer cases of genotype 6, as illustrated in Fig. . This age-specific distribution emphasizes the influence of age on HPV positivity and genotype distribution.
Prevalence of HPV and its Distribution by Nationality

The results revealed that the prevalence of HPV was significantly greater in non-Saudi participants (4/10, 40%) than in Saudi participants (12/96, 12.5%) (P = 0.020). Notably, non-Saudi women presented a greater prevalence of the HR-HPV genotype 16, with a statistically significant association between nationality and HPV outcome (P = 0.002). Genotype 58 was observed only among Saudi females, as illustrated in Fig. . These findings suggest that nationality plays an important role in the distribution of HPV infections according to HR genotype.
The Prevalence of HPV Based on Cytology Results

The cytology results in Table show the prevalence of HPV across different cytological findings. A statistically significant association (P = 0.036) was observed between HPV status and negative cytology results, indicating that HPV-negative participants were more likely to receive a negative cytology report. In contrast, no statistically significant associations were found between HPV status and squamous or glandular lesions, as reflected by their nonsignificant P values. However, a P value of 0.005 suggested a statistically significant correlation between HPV positivity and malignant cytology findings. These findings suggest that individuals who tested positive for HPV were more likely to have malignant cytology results, underscoring a potential link between HPV infection and the risk of malignant cellular changes.
Prevalence of HPV Based on Different Factors

Table summarizes the different factors associated with the prevalence of HPV. Among the 96 participants who completed the questionnaire, no statistically significant associations were detected between HPV infection and factors such as education level, employment, living abroad, or HPV vaccination status. With respect to employment status, 91.8% of unemployed participants were HPV negative, compared with 95.7% of employed participants, with no statistically significant association (P = 0.4291). Similarly, 95.5% of the participants who had not lived abroad were HPV negative, compared with 90.0% of those who had lived abroad, with no statistically significant association (P = 0.3061). The level of education was also not significantly associated, with 94.4% of individuals with a high school education or less being HPV-negative compared with 93.3% of those with a bachelor's degree or higher (P = 0.8276). Table shows HPV positivity across different categories, including education level, living abroad, employment status, and smoking habits. No statistically significant associations were observed between HPV positivity and the factors examined.
Table highlights the following findings. Among the participants reporting vaginal discharge with color or odor, 95.1% of those without symptoms were HPV-negative, whereas 4.9% were HPV-positive. Among those with symptoms, 84.6% were HPV-negative and 15.4% were HPV-positive, although this association was not statistically significant (P = 0.1434). Of participants reporting occasional symptoms, 95.5% were HPV-negative and 4.5% were HPV-positive, with no significant association (P = 0.7068). For irregular menstruation or bleeding between periods, a significant association with HPV status was observed. Among participants without these symptoms, 98.6% were HPV-negative and 1.4% were HPV-positive (P = 0.0014*). In contrast, 75.0% of those with symptoms were HPV-negative and 25.0% were HPV-positive, a statistically significant association (P = 0.0007*). Among the participants with occasional symptoms, 90.0% were HPV-negative and 10.0% were HPV-positive; this difference was not statistically significant (P = 0.6047). In terms of vaginal bleeding after intercourse, 93.8% of participants without symptoms were HPV-negative and 6.2% were HPV-positive (P = 0.9421). Among those who experienced bleeding, 88.9% were HPV-negative and 11.1% were HPV-positive, although the association was not significant (P = 0.5268). The participants who occasionally experienced bleeding had no HPV-positive cases (P = 0.5136). For pain after intercourse, 94.5% of the participants with no symptoms were HPV-negative, whereas 5.5% were HPV-positive (P = 0.5784). Among those reporting pain, 100% were HPV-negative and 0% were HPV-positive (P = 0.3883). A total of 84.6% of the participants who occasionally experienced pain were HPV-negative and 15.4% were HPV-positive; this difference was not statistically significant (P = 0.1434). Overall, a statistically significant association was found between HPV infection and irregular menstruation or bleeding between menstruations (P = 0.0007*); no significant associations were observed for the other factors.

Awareness and Health-Related Behavior Questionnaire Results

Table displays the outcomes of the awareness questions. The questionnaire, which was completed by 96 participants, revealed varied levels of awareness and attitudes regarding HPV and vaccination. Most participants (72.9%) reported not knowing what HPV is, whereas 27.1% indicated some awareness. When asked whether HPV is dangerous and could lead to multiple types of cancer, 60.4% responded "No," whereas 39.6% answered "Yes." With respect to knowledge about HPV symptoms, 64.6% were unaware that vaginal/genital warts could be associated with HPV, whereas 35.4% were aware of this link. Similarly, 67.7% did not know that HPV is the leading cause of cervical cancer, whereas 32.3% were aware of this connection. When questioned about whether HPV affects both women and men, 74.0% were unaware, and 26.0% were aware. Additionally, 68.8% did not know that HPV could be transmitted through sexual contact, whereas 31.3% understood this mode of transmission. In terms of perceived risk, 77.1% of the participants considered the likelihood of HPV infection to be low, whereas 22.9% considered it high. Almost all participants (99.0%) had never heard of an HPV infection within their family, with only 1.0% indicating otherwise. Regarding HPV vaccination, 71.9% of the participants were unaware of the HPV vaccine, whereas 28.1% were familiar with it. All participants who received the HPV vaccine tested negative for HPV.
However, no significant association was found between vaccination and negative HPV results. Notably, 65.6% of the participants expressed a willingness to receive the HPV vaccine, whereas 34.4% were hesitant. Additionally, 63.5% were unaware that HPV vaccination could protect a large portion of the population from infection, with 36.5% demonstrating awareness of this fact. Table outlines the respondents’ knowledge about HPV, including its risks, symptoms, transmission, and association with cancer.
Phylogenetic Analysis for HPV Genotypes

Phylogenetic analysis of the HPV genotypes was performed via the neighbor-joining method (Fig. ). The bootstrap consensus tree was inferred from 1000 replicates, with evolutionary distances computed via the maximum composite likelihood method and expressed as the number of base substitutions per site.

In Saudi Arabia, reducing the morbidity associated with cervical cancer requires a comprehensive assessment of several key factors. These include evaluating the prevalence of HPV, identifying the factors that contribute to the transmission of different genotypes, and understanding the level of knowledge and attitudes regarding HPV infection and cervical cancer. This evaluation is critical, considering the availability of highly effective screening tests and vaccines that have transformed cervical cancer into a preventable disease. By gaining insights into these aspects, targeted interventions can be developed to improve prevention, early detection, and treatment strategies, ultimately reducing the burden of cervical cancer in Saudi Arabia. This study investigated the prevalence and distribution of HPV genotypes among 106 females in Jeddah, Saudi Arabia. The detection of HPV was performed via nested PCR, followed by Sanger sequencing to identify specific HPV genotypes. Our study revealed that HPV 16 was the most prevalent genotype identified, followed by HPV 6 and HPV 58. The predominance of HPV 16, a high-risk genotype associated with cervical cancer, underscores the potential public health impact of HPV in this region. HPV 6, associated with noncancerous lesions, was the second most common genotype identified. The detection of both HR and LR genotypes, including 16, 6, and 58, suggests a diverse HPV genotype distribution in the studied population. The overall prevalence of HPV infection in our study population was 15.1%, highlighting the significant prevalence of HPV in the population and emphasizing the need for ongoing monitoring and preventive measures. In our study, we assessed the level of awareness of HPV, its connection to cervical cancer, and the availability of HPV vaccination. Notably, more than 60% of the population tested lacked knowledge and awareness of HPV and its related risks. This significant gap in awareness highlights a critical need for public health education to improve the understanding and uptake of preventive measures such as vaccination. The observed HPV prevalence of 15.1% is slightly higher than that reported in previous research conducted in 2014, which reported a prevalence of 9.8%. Interestingly, a study conducted at a tertiary care center in Jeddah between October 2017 and April 2018 reported a lower prevalence of 5.9%. Similarly, a longitudinal study spanning five years (2013–2018) in Jeddah reported an initial HPV prevalence of 4.7%, which increased to 5% after a five-year follow-up. In contrast, a study conducted in Riyadh in 2007 reported a combined HPV genotype 16/18 prevalence rate of 31.6%, whereas our study revealed a higher rate of 43.75% for the high-risk HPV 16 genotype. The variation could be due to many factors, including differences in HPV testing methodologies, type of sample collected, age group, or geographical characteristics. The overall HPV prevalence of 15.1% in Jeddah, Saudi Arabia, is lower than that reported in recent studies in the United States (38.4%), Brazil (25.41%), and China (21.0%). The prevalence of HPV thus appears lower in Saudi Arabia than in these countries.
This difference could be due to variations in study design, population demographics, and sexual behavior patterns. Cultural factors, such as lower rates of sexual activity outside of marriage, may also contribute to the slightly lower prevalence observed in Jeddah. Our findings align with previous studies conducted in other regions of Saudi Arabia, where HPV 16 has also been reported as the most prevalent high-risk genotype (30-42.1%) . The high prevalence of HPV 16 (43.75%) in this study is consistent with global trends, where it is recognized as the most oncogenic genotype and is responsible for the majority of cervical cancer cases worldwide . Furthermore, high-risk HPV types were detected in 43.75% of the positive samples, whereas low-risk HPV types 6 and 58 were detected in the remaining positive samples. In a similar study published in 2014, the most prevalent HR-HPV types detected were HPV 16, HPV 18, and HPV 68. On the other hand, the most prevalent LR-HPVs detected were HPV 6, HPV 42, HPV 53, and HPV 54 . Our study indicated that most HPV-positive women fell within the 35–44 age group, with HPV 16 being the most commonly detected genotype. Additionally, the research highlighted a decline in HPV infection rates in the older age group (≥ 45 years). These findings resonate with a previous investigation conducted in Riyadh, Saudi Arabia, which revealed a similar trend among HPV-positive participants, primarily within the 30–39 years of age . Another study by Khan et al. in the United States aimed to explore the risk of cervical precancerous lesions in women with normal cytology but positive testing for HPV DNA types 16 and 18. The study revealed that women aged 30 years and older had a greater risk of developing cervical lesions than younger women did . These collective findings support the updated WHO recommendation advocating for HPV screening for women aged 30 and older . An essential aspect in developing strategies to control HPV infections and prevent cervical cancer among women involves assessing awareness and knowledge about HPV. However, our study revealed the following findings: 67.7% of the female participants were unaware that HPV is the primary cause of cervical cancer, and 71.9% did not know about the availability of the HPV vaccine. Similarly, a recent study in Hail, Saudi Arabia, reported that 66.3% of participants had never heard of HPV and that 75.1% were unaware of its link to cervical cancer. Additionally, 69.2% were not aware of the HPV vaccine . In another study conducted in the western region of Saudi Arabia, only 34.6% were aware of HPV. The majority (83.8%) believed that HPV can cause cervical cancer, and less than half of the participants were aware of the availability of vaccines against HPV . Several factors were identified as significant contributors to increased knowledge and awareness of HPV infection and its related consequences among Saudi females. These factors included being associated with the health profession, having higher educational levels, being older, having a higher monthly income, and having previous experience with cervical cancer . These results underscore the critical need for education and awareness campaigns regarding HPV infection and its associated risks in all regions of Saudi Arabia, along with education campaigns to encourage young females to be vaccinated against HPV. 
The high-risk HPV genotypes, especially HR-HPV type 16, which is involved in the majority of cervical cancer cases, including the most aggressive ones, in women worldwide, highlight the importance of targeted public health preventive strategies to control this growing infection in Jeddah. The high prevalence of HPV 16 in our study population suggests a potentially major burden of HPV-related cancer, especially because it is the most common type responsible for cervical malignancies. Together, these results make a strong case for initiating awareness programs about HPV vaccination that could greatly decrease the burden resulting from HPV-related disease. A study conducted among undergraduates revealed that social norms from parents, friends, and healthcare providers indirectly influence vaccine intentions and attitudes toward HPV vaccination; therefore, strengthening self-efficacy, beliefs, and attitudes via positive healthcare provider/patient relationships will encourage HPV vaccination uptake among the population. Furthermore, our findings underscore the importance of regular cervical cancer screening programs to detect and manage HPV-related diseases. These measures could significantly reduce the burden of HPV-associated cancers in Saudi Arabia, as demonstrated by a systematic review showing that cytologic examination is the primary method for cervical cancer screening and has led to a significant reduction in cervical cancer morbidity and mortality in regions where screening programs have been effectively implemented.
This study provides valuable insights into the prevalence and distribution of HPV genotypes among women in Jeddah, Saudi Arabia, with significant public health implications. The use of nested PCR for HPV detection and Sanger sequencing for genotyping ensures the reliability of the findings, identifying an overall HPV prevalence of 15.1%, with HPV 16 emerging as the most dominant high-risk genotype, followed by HPV 6 and HPV 58. The high prevalence of HPV 16, closely linked to cervical cancer, highlights the urgent need for targeted prevention strategies, particularly for women aged 35–44 years. The presence of both high-risk and low-risk genotypes underscores the importance of comprehensive public health efforts, including education, vaccine promotion, and regular cervical cancer screening. However, the study has some limitations. Its cross-sectional design limits causal inferences, and sampling participants from a single hospital (KAUH) may not fully represent the broader female population. Cultural sensitivities around sexual health could have led to underreporting of risk behaviors, potentially influencing HPV prevalence data. Additionally, incomplete questionnaire responses may have introduced bias, limiting the assessment of HPV awareness. Future studies could adopt a longitudinal approach with larger, more diverse samples across Saudi Arabia to explore the natural progression of HPV infections and assess the efficacy of vaccination programs. Expanding behavioral analyses to better understand factors influencing HPV awareness and vaccine uptake will further refine prevention strategies. Despite these limitations, this study highlights the critical need for enhanced public health education and robust vaccination and screening programs to reduce HPV-related health risks in Saudi Arabia.
|
Exploration capacity versus specific enzymatic activity of ectomycorrhizas in response to primary productivity and soil phosphorus availability in Bornean tropical rainforests | 9eb0a7bc-28d6-4940-bfa3-99de7a318bc2 | 10838334 | Microbiology[mh] | Tropical ecosystems exhibit the highest net primary productivity of all terrestrial ecosystems, which plays a crucial role in the global carbon (C) cycle. At the same time, primary productivity in tropical ecosystems is often limited by insufficient soil phosphorus (P) availability because of the high weathering rates of the soil minerals and an associated geochemical transformation of soil P into unavailable forms. In order to address the mechanisms by which tropical trees maintain primary productivity on such soils with reduced P availability, many studies have focussed on the ecophysiology of efficient P use in photosynthetic C assimilation and on the nutrient acquisition strategy of plant root systems. However, the roles of root-associated microorganisms in biogeochemical P cycles and in the plant P acquisition strategy have not yet been fully understood.

Most terrestrial plants rely on mycorrhizal fungi associated with plant roots for mineral nutrient acquisition. Mycorrhizal fungi can take up both mineral and organic forms (e.g., amino acids) of nutrients from soils by expanding mycorrhizal root tips and extraradical hyphae, both of which function to enhance exploration capacity. More specifically, ectomycorrhizas (ECM) have a larger nutrient foraging area, which benefits the host plant. Because the ECM fungal sheath can often enclose the colonized root surface, the direct contact of the fungal sheath with the soil acts as a substitute for the enclosed root surface. ECM fungi have a significant physiological capacity to access nutrients bound with complex organic compounds including cellulose, protein, chitin, and phytate. Some specialized species can also access recalcitrant organic matter such as lignin and phenol complexes, and they have specific forms of extracellular enzymes to degrade such recalcitrant matter. Extracellular enzymatic activities associated with the degradation of, and nutrient release from, soil organic matter reflect the functional diversity of the ECM fungal community in situ. Substantial differences in enzymatic activities have been reported among ECM fungal species within a given site or across sites along environmental gradients. This suggests that changes in ECM fungal communities and their host trees can modify ECM enzyme activities, adapting to variations in nutrient availability across sites. Despite this significant potential of ECM fungi, only a few measurements of extracellular enzymatic activities of ECM have been reported from tropical soils. Moreover, the adaptive response of ECM fungi under nutrient deficiency has been poorly elucidated so far. This oversight is particularly critical when considering the vital role of ECM fungi in facilitating tree adaptation and survival in low-P soils. Soil microbes, including ECM fungi, secrete extracellular enzymes, including those that mineralize C, nitrogen (N) and P, to access each nutrient. The relative activity of those enzymes is suggested to be controlled by the availability of reactive substrates and/or soil mineral nutrients (i.e., the product of the enzyme reaction of each nutrient element).
For example, phosphatase activity of soil microbes and its ratio to the C mineralizing enzyme activity in tropical ecosystems were greater where soil organic-P fraction was relatively small , . Thus, the stoichiometry of given two enzymes (enzymatic stoichiometry; i.e. the ratio of the activities of two enzymes) can vary with relative substrate abundances and/or nutrient availability and reflects the enzyme allocation of soil microbes . In Southeast Asia, diverse ECM host trees occur as dominant taxa such as the families Dipterocarpaceae, Fagaceae and Myrtaceae – . The abundance of these taxa could be related to soil nutrient availability, depending on the nutrient acquisition strategy of each taxa through ECM symbioses. Mount Kinabalu, Borneo, is one of the hotspots of world floristic biodiversity and diverse species of ECM host trees occur within/across sites . Soil nutrient availability of P (and N) is highly variable due to complex geology and a wide altitudinal gradient on this mountain , – . ECM fungal communities also remarkably changed along these gradients . This natural setting on Mount Kinabalu is ideal for investigating the adaptive responses of ECM fungi in terms of nutrient acquisition capacity. Here, we aimed to elucidate the adaptive strategy of ECM symbiosis in the tropical ecosystems by investigating the response of the extracellular enzymatic activity of ECM fungi to contrasting soil P and N availability in five tropical rain forests on Mount Kinabalu. We evaluated the enzymatic activity of ECM from two aspects, contact area with soil as an exploration capacity (biomass and surface area of ECM on a ground-area basis) and specific enzymatic activity (enzymatic activity on an ECM surface-area basis). We also characterized the stoichiometric relationships among enzymatic activities degrading different organic elements (C, N, and P). Subsequently, we evaluated the realized ECM enzymatic activity on a ground-area basis by integrating the two aspects of ECM (exploration capacity and specific enzymatic activity) to clarify the overall performance of ECM symbiosis in response to P availability. We hypothesized that ECM fungi, as an adaptive response in P-deficient forests with limited productivity, enhance P acquisition by increasing both exploratory capacity and specific enzymatic activity, especially for the P-specific enzyme.
Site description

Five tropical rain forests on Mount Kinabalu (4095 m a.s.l.; 6°5′ N, 116°33′ E), Borneo, were selected for this study. These include two lowland forests and three montane forests based on the vegetation classification by Kitayama. These study sites were part of the study plots for the ecosystem dynamics project designed by Aiba and Kitayama and Kitayama et al. The climate of the study sites is humid equatorial with little seasonality in air temperature and precipitation. The mountain is non-volcanic and largely consists of Tertiary sedimentary rocks of sandstone and/or mudstone and ultrabasic rocks that protrude through the sedimentary rocks as mosaics. Two geological substrates (sedimentary and ultrabasic soils) were selected in each of the lowland zone (700 m a.s.l.) and the lower montane zone (1700 m a.s.l.), yielding a total of four forests in a matrix of altitude and substrate. The fifth forest is located in the lower montane zone at 1700 m on Quaternary tilloid deposits mostly of sedimentary rocks (hereafter Quaternary substrate). All sites are on gentle slopes to avoid the effects of topography. The ecosystem properties of the sites are summarized in (Table ).
Soil P availability

For soil P availability, the size of the soluble inorganic P pool, extracted with hydrochloric-ammonium fluoride solution, varied substantially among the five forests due to differences in geology and weathering as a function of altitude. It was always greater on sedimentary than on ultrabasic substrate at the same altitude (Table ). Among the three forests at 1700 m, the pool size of soluble inorganic P was greatest on the Quaternary substrate of relatively young age, reflecting the lesser weathering of soil minerals (Table ).
Distribution of ectomycorrhizal host trees

The tree-species composition of the five forests was investigated by Aiba et al. and Aiba and Kitayama. Three families of ECM host trees (Dipterocarpaceae, Fagaceae, and Myrtaceae) were distributed in all study sites. Only the genus Tristaniopsis was regarded as an ectomycorrhizal host within the family Myrtaceae. Indeed, we confirmed the ECM formation of Tristaniopsis by observing its ECM tips collected from seedlings in the study sites in our preliminary survey (Figure ). No ECM tips were observed on the seedlings of Leptospermum or Syzygium (both Myrtaceae), which were also abundant in some of our sites. The relative basal area (RBA) of ECM host trees in each site was sorted by family based on the dataset (Table ). The RBA of ectomycorrhizal host trees ranged from 21.4% at the Quaternary montane forest (17Q) to 38.5% at the ultrabasic lowland forest (07U).
Sampling of ectomycorrhizal tips

ECM tips were collected on two occasions for two separate measurements: biomass and the enzyme activity assay of ECM tips. To measure the ECM biomass on a ground-area basis, ten soil cores (2 cm diameter, 15 cm depth) were collected at random positions (at least 10 m apart) from the surface layer, including the A0, A, and E soil horizons, in each study site in September 2012. Each soil core was kept in a plastic bag and stored at 4 °C for up to 2 weeks until further processing. Each soil sample was sieved (500 µm mesh) and rinsed with tap water to remove soil particles. The remaining fine roots (diameter < 2 mm) were then transferred to a Petri dish for stereomicroscopic observation. ECM tips were identified based on the presence of a fungal mantle structure, and active ECM tips with turgid surfaces were collected from fine roots. The ECM tips were scanned using a flatbed scanner (GT-X970, EPSON, Tokyo, Japan) and analyzed for tip surface area using Win-Rhizo (Regents Instruments Inc., Quebec, Canada). Scanned ECM tips and fine roots were dried for 3 days at 70 °C and weighed for determining dry mass. Consequently, the mean mass (g m⁻²) and surface area of ECM tips per unit ground area (m² ECM m⁻²), and the ECM dry mass ratio to fine roots (w/w), were calculated for each site.

To determine the specific enzyme activity of ECM, ECM tips were resampled close enough to host trees to trace their roots in the study sites in October 2014. We selected target species from dominant ECM host genera (Table ). Three to seven mature ECM host trees were selected for each species at each site, except for one species of Castanopsis in the Tertiary lowland site, where only one mature tree was found. The target host tree genera and the number of sampled trees are shown in Table and , respectively. Three to five ECM root clusters were collected from each host tree by tracing the lateral roots from the base of a target tree. The total number of root cluster samples collected for each genus in each site ranged from 15 to 23 (Table ). Root cluster samples were stored at 4 °C for up to 2 weeks until further processing. Root clusters were cleaned as in the case of the ECM biomass described above and were immediately applied to the assay of enzyme activity.
Assay of ectomycorrhizal enzyme activity

More than twelve ECM tips (approximately 2 mm) from each root cluster were subjected to measurements of the potential specific ECM activity of five enzymes degrading organic C, N, and P, as follows: β-glucosidase (BG, which hydrolyzes cellobiose into glucose), β-N-acetyl-glucosaminidase (NAG, which is related to chitin degradation), leucine-amino peptidase (LAP, which breaks down polypeptides), acid phosphatase (AP, which releases orthophosphate residues from phosphomonoesters) and polyphenol oxidase (PPO, which degrades lignin by oxidizing phenols). BG, NAG, LAP, and AP participate in labile organic matter decomposition, whereas PPO participates in recalcitrant organic matter decomposition. Eight non-ectomycorrhizal (NM) tips obtained from root clusters at each site were also added to the enzyme assay as controls (i.e., to determine enzyme activities without ECM). A sequential assay of enzymatic activities in 96-well plates (AcroPrep 96-filter plate with 30–40 μm mesh size: Pall Life Sciences, Germany) was applied with some modification. The concentrations of substrates and incubation volumes were set following the optimized protocol, except for PPO, for which the concentration of the substrate L-DOPA and its incubation volume were set at 25 mM and 50 μl, respectively. Incubation was conducted under the pH and temperature conditions of the field at each site (Table ). Fluorescence was measured at 364 nm excitation and 450 nm emission in a Corona Grating Microplate Reader SH-9000 (Corona Electric, Japan). The assay for PPO was measured spectrophotometrically at 460 nm. Assayed root tips were scanned and analyzed for root surface area as in the case of the ECM biomass described above. Enzyme activities were calculated from the fluorimeter and photometer readings. Because the determination of enzymatic activities relies on the rate of cleavage of specific substrates by functional enzymes present at the surface of ECM root tips, specific enzyme activity was expressed for each enzyme as the molar amount of released substrate per minute per unit ECM surface area (pmol min⁻¹ mm⁻²). Specific enzyme activity was averaged within each root cluster for statistical analysis.
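For clarity, the sketch below shows how a single fluorescence reading is typically converted into a specific activity on an ECM surface-area basis (pmol min⁻¹ mm⁻²). The function and all numerical values are hypothetical placeholders and simplify the plate-based protocol cited above (e.g., quenching and plate-level corrections are omitted).

```r
# Hypothetical conversion of a fluorescence reading into specific enzyme activity
specific_activity <- function(fluorescence, blank, fluor_per_pmol,
                              minutes, surface_mm2) {
  pmol_released <- (fluorescence - blank) / fluor_per_pmol  # via a standard curve
  pmol_released / minutes / surface_mm2                     # pmol min^-1 mm^-2
}

# Example: one ECM root-tip cluster assayed for acid phosphatase (made-up numbers)
specific_activity(fluorescence = 5200, blank = 300,
                  fluor_per_pmol = 2.1,   # slope of a standard curve
                  minutes = 60, surface_mm2 = 12.5)
```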
Statistical analysis

Significant differences among study sites were assessed by analysis of variance (ANOVA) followed by Tukey's HSD post hoc test for the following variables: biomass on a ground-area basis, root-tip surface area on a ground-area basis, the ratio of ECM to fine roots on a weight basis, and the specific enzymatic activities of ECM (enzyme activity on a surface-area basis) for each enzyme. Significant differences in specific enzymatic activities among the sites were also compared within each host genus. For evaluating the overall performance of ECM per site (as an integration of exploration capacity and specific enzyme activity), we calculated the ground-area based ECM enzymatic activity by multiplying the ECM surface area on a ground-area basis by the specific enzymatic activity on an ECM surface-area basis. Because the samples used for ECM surface area and specific enzymatic activity did not correspond to each other, we simulated the possible variance of the ground-area based ECM enzymatic activity by applying all pairs between the surface area and enzymatic activity in each site. Linear regression was used to evaluate the contributions of surface area or specific enzyme activity as explanatory variables to the ground-area based ECM enzymatic activity as a response variable. Although the response and explanatory variables were not independent, the purpose of this analysis was to determine which of the two (surface area or specific enzyme activity) better explained the ground-area based ECM enzymatic activity. Linear regression was conducted on a logarithmic scale (with + 1 added to include 0) to normalize the distribution. The enzymatic stoichiometry of ECM for C to P was evaluated by the ratio of BG (as C demand) to AP (as P demand), and that for N to P by the ratio of NAG + LAP (as N demand) to AP in each site. To evaluate the relationship of ECM enzyme activities and their stoichiometry with environmental factors, we constructed a generalized linear mixed effects model (GLMM) for each specific enzyme activity as a response variable, with the resource availabilities of C, N and P as fixed effects, using the lmer function in the R package "lme4". Here, ANPP (above-ground net primary productivity) was used as an index of the potential C supply from host trees to ECM fungi, and the soil N mineralization rate and the pool size of soluble soil P were used as nutrient availabilities for ECM fungi. The environmental variables were cited from previous reports conducted at the same sites as this study (Tables , ). ANPP was calculated as the sum of the above-ground biomass increment and the above-ground litterfall. We preliminarily checked that there was no significant correlation among the environmental variables (P < 0.05). We standardized each environmental variable to calculate standardized partial coefficients. The ID of the individual tree was used as the random intercept. Logarithmic transformation was applied to the specific enzyme activities to normalize the distribution. The amounts of variance explained by the fixed effects only and by the combined fixed and random effects of the GLMM models were calculated as the marginal R² (R²marginal) and the conditional R² (R²conditional), respectively, using the methods developed by Nakagawa and Schielzeth (2013) (r.squaredGLMM function in the "MuMIn" package). All statistical analyses were conducted using the R statistical program, version 3.4.0 (R Development Core Team 2017).
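A minimal R sketch of the site comparison and the GLMM described above is given below. It assumes a data frame d with one row per root cluster and placeholder column names (log_AP, site, anpp_z, nmin_z, solp_z, tree_id); it is not the authors' original script.

```r
library(lme4)    # lmer() for the mixed model
library(MuMIn)   # r.squaredGLMM() for marginal/conditional R2

# Site comparison of a log-transformed specific activity (ANOVA + Tukey's HSD)
fit_site <- aov(log_AP ~ site, data = d)
TukeyHSD(fit_site)

# GLMM: standardized C, N and P availabilities as fixed effects,
# individual host tree as a random intercept
m_ap <- lmer(log_AP ~ anpp_z + nmin_z + solp_z + (1 | tree_id), data = d)
summary(m_ap)          # standardized partial coefficients
r.squaredGLMM(m_ap)    # marginal and conditional R2 (Nakagawa & Schielzeth 2013)
```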
Biomass and exploration capacity and specific enzymatic activity of ECM

Mean biomass, surface area, and the ratio of ECM to fine roots tended to be greater in montane forests than in lowland forests (Table ). In particular, ECM biomass and the ECM ratio to fine roots were greatest in the montane Tertiary sedimentary forest (17T). ECM specific enzymatic activities on an ECM surface-area basis exhibited different trends. Specific enzymatic activities were significantly higher in the montane ultrabasic forest (17U) than in the other sites for the following three enzymes: BG, NAG, and LAP (P < 0.05, Table ). In the case of AP and PPO, the specific enzymatic activities were significantly higher in the montane ultrabasic (17U) and Quaternary sedimentary (17Q) sites than in the other sites (P < 0.05, Table ).
Ground-area based ECM enzymatic activity

The ground-area based ECM enzymatic activities were highest in the 17U forest for BG, LAP, and AP (Table ). In the case of NAG and PPO, the ground-area based enzymatic activities were significantly higher in the ultrabasic montane (17U) and Quaternary montane (17Q) sites than in the other sites (P < 0.05, Table ). For all enzymes, the ground-area based enzymatic activities were better explained by the specific enzymatic activities than by the ground-area based ECM surface area, as indicated by a greater adjusted R² (Fig. ).
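As an illustration of how the ground-area based activities and the regressions reported here can be derived from the two measured components, the sketch below pairs every surface-area core with every specific-activity value within a site and compares log-scale regressions; the object names and the unit conversion are assumptions rather than the authors' code.

```r
# surface_area: m2 ECM per m2 ground (10 soil cores per site, placeholder vector)
# spec_act:     pmol min^-1 mm^-2 per root cluster (placeholder vector)
combos <- expand.grid(area = surface_area, act = spec_act)

# Ground-area based activity; 1e6 converts m2 of ECM surface to mm2
ground <- combos$area * 1e6 * combos$act        # pmol min^-1 per m2 ground

# Which component explains the ground-area activity better (log scale, +1 to keep 0)?
fit_area <- lm(log(ground + 1) ~ log(combos$area + 1))
fit_act  <- lm(log(ground + 1) ~ log(combos$act  + 1))
summary(fit_area)$adj.r.squared
summary(fit_act)$adj.r.squared
```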
Enzymatic stoichiometry

The stoichiometry of ECM enzymatic activity showed different trends for the BG:AP ratio and the (NAG + LAP):AP ratio (Fig. ). The BG:AP ratio was in a similar range among the sites except for the Quaternary montane site (17Q), where it was significantly lower than at the other sites. The (NAG + LAP):AP ratio was significantly higher only in the ultrabasic montane site (17U) than in the other sites, which is a different trend from that of the other enzymatic activities.
Effect of elements on ECM enzymatic activities

The GLMMs for each enzyme activity and for enzymatic stoichiometry exhibited significant positive or negative effects of environmental factors (Fig. ). ANPP negatively affected NAG and AP and positively affected C:P. The soil N mineralization rate positively affected AP and negatively affected C:P. The pool size of soluble P in soil negatively affected BG, LAP, C:P, and N:P. In particular, the GLMM for LAP showed a higher proportion of variance explained by the fixed effects than those for the other enzymes (Fig. , R² marginal = 0.506).
Effect of host tree genera on ECM enzymatic activities

The comparison of enzymatic activity among the sites within the same ECM host genus revealed patterns similar to the comparison that included all host genera described above (Fig. ). The ECM enzymatic activity of Shorea was similar between the two lowland sites except for polyphenol oxidase. In the case of Castanopsis, which was distributed at both altitudes, enzymatic activity was relatively high in the montane forests, although the difference was not significant. The enzymatic activities of two dominant genera in the montane forests, Lithocarpus and Tristaniopsis, were significantly higher in the ultrabasic site than in the other sites for most enzymes.
In this study, we simultaneously investigated the two facets of exploration capacity and specific enzyme activity of ECM fungi, which has rarely been attempted previously. Our findings indicate that specific enzyme activities, rather than the root-tip surface area of ECM, better explain the variation in ground-area based enzymatic activities across study sites (Fig. and Table ). We acknowledge potential impacts of multi-year sampling on our results; yet, their reliability is supported by the moderate seasonal conditions in the tropical study sites. The nutrient acquisition capacity of ECM is thus primarily determined by its specific enzyme activity (i.e., enzymatic activity on an ECM surface-area basis) in tropical ecosystems. This is likely a response to carbon supply limitations from host trees in phosphorus-deficient sites, as previously reported. Limited C supply for ECM fungi at the low-ANPP sites was suggested by the result that ECM biomass and its ratio to fine roots were lower in the ultrabasic site than in the sedimentary site in both montane and lowland forests (Table ). Hence, enhancing enzymatic activities rather than expanding the surface area could be adaptive for ECM fungi to acquire nutrients efficiently under C limitation in P-deficient forests.

Specific enzymatic activities on an ECM surface-area basis were associated with the availabilities of C (as indexed by ANPP), N and P across our study sites (Fig. , Table ). ECM were largely responsible for the degradative, nutrient-releasing enzymes on root tips, as exhibited by the higher enzymatic activities of ECM tips compared with non-ectomycorrhizal tips across all enzymes (Table ) and as previously documented. This suggests that ECM fungi are vital to biogeochemical cycles in tropical forest ecosystems, likely in response to the availability of C, N and P. Specific enzymatic activities were negatively affected by C (ANPP) and P availability (Fig. ). In particular, the activity of AP was negatively affected by C (ANPP) rather than by P availability. Contrary to our results, adaptive responses of fine roots and of the P-use efficiency of above-ground vegetation to P deficiency have been reported on Mount Kinabalu. At the same sites, Ushio et al. reported a significant increase of both specific root length and phosphatase activity of fine roots in the P-deficient site. In addition, Kitayama found a significant negative correlation of specific root length and phosphatase activity of fine roots with the labile organic P content in soils, suggesting that labile organic P is an essential P resource for the trees and associated microbes. Our results indicate, however, that ECM fungi do not respond to P deficiency directly by increasing P-releasing enzyme activity, suggesting an alternative mechanism involving enzymes for other nutrients under P deficiency. Notably, the specific activities of the enzymes degrading not only organic P but also organic C (BG) and N (NAG and LAP) were high in the ultrabasic montane forest (17U) site, where P was extremely deficient (Table ). We suggest the following reasons for the simultaneous enhancement of C, N, and P enzyme activities on ECM. The increase of these enzyme activities might facilitate P gain indirectly by degrading carbon-skeleton organic compounds to release bound phosphorus. The higher activity of the recalcitrant-C-degrading enzyme (i.e., PPO) in the 17U site also supports this hypothesis. Moreover, it could be associated with the heterotrophic nature of ECM fungi.
Generally, ECM fungi are considered to depend on the C supply from the host plant in exchange for other nutrients. Nevertheless, they have a latent saprotrophic capacity for carbon assimilation, which was possibly stimulated under the insufficient carbon supply from host plants at this site. C allocation to mycorrhizal fungi could be diminished under P limitation because net primary production in both above- and below-ground systems is limited. Indeed, P availability negatively affected the activity of the C-degrading enzyme BG (Fig. ). Such C limitation might enhance the latent saprotrophic ability (i.e., "facultative saprotrophy") of ECM fungi as an alternative to the C supplied by the host plant. The concept of facultative saprotrophy in ECM fungi is not mainstream. Nevertheless, the specific ecophysiological roles of ECM fungi in P-deficient tropical forests remain underexplored. The emerging evidence of this study could support the "plan B" hypothesis of Talbot et al. (2008), which suggests increased enzyme production by ECM fungi when the supply of plant-derived C is scarce.

On the other hand, the enzymatic stoichiometry indicated a higher demand of ECM fungi for N when P is deficient, because the (NAG + LAP):AP ratio (an index of N to P demand) and the BG:AP ratio (an index of C to P demand) were negatively affected by the pool size of soluble P (Fig. ). This suggests that ECM fungi invest disproportionately more in enzymes that degrade organic N than in those that degrade organic P with decreasing P availability. This is probably because N is an essential element of enzymes, as it is of the other fundamental components of fungi such as non-enzyme proteins, nucleic acids, and the cell wall (chitin). The demand for N is inevitably involved in the synthesis of the enzymes for C and P acquisition, similar to nitrogen fixation, in that phosphatase production requires N. However, contrasting results have been reported in a meta-analysis of soil enzymatic activity in tropical forests, in which BG:AP ratios were positively associated with soil P availability. The different response in our study might reflect the specific nutrient acquisition strategies of the target microorganisms in each study. The soil enzymatic activities dealt with by Waring et al. (2014) are mainly driven by free-living soil microbes, including fungi and bacteria, which are obligate saprotrophs gaining C by organic matter degradation. By contrast, we investigated ECM fungi, which mainly acquire C from host plants and have a facultative saprotrophic capacity. These differences between ECM fungi and free-living microbes might have resulted in the different responses of enzymatic stoichiometry.

Altitude was positively associated with ECM biomass and enzymatic activities, which were greater at montane sites than at lowland sites, even though the sampling depth in this study was limited to the top 15 cm. However, enzymes are typically more active at higher temperatures, up to an optimum, according to biological metabolism theory. One reason may be related to the biogeographical patterns of ECM fungi, whereby ECM fungal richness is higher at mid-latitudes than in the equatorial tropical and boreal regions within the northern hemisphere. Altitudinal distributions of ECM fungi on Mount Kinabalu suggest that ECM fungal richness peaks in montane forests. Likewise, ECM biomass may change with climate conditions: ECM biomass and its ratio to fine roots tend to be higher in cooler regions based on a European latitudinal experiment.
Such climate conditions are likely linked to the dominance of ECM fungi in relation to host productivity and nutrient availability. The composition of ECM host trees also changes along altitude, but the composition of host trees does not seem to affect the community-wide ECM enzymatic activity as indicated by Castanopsis (Fagaceae) in our study. The enzymatic activity of Castanopsis was higher in the montane forest than in the lowland forest (Fig. ). Dipterocarpaceae represents one of a few ectomycorrhizal taxa in the lowland forest and it has been believed that its ECM symbiosis with distinctive exploration capacities of soil nutrients is a reason for their successful domination in the lowland forest where the majority of trees are associated with arbuscular mycorrhiza . However, our results indicate that specific enzymatic activities on Shorea ECM were not greater than those on the other host tree taxa (Fig. ).
ECM fungi enhance specific enzyme activity (enzymatic activity on an ECM-surface-area basis) rather than exploration capacity (ECM root-tip surface area) to maintain their overall capacity for nutrient acquisition (i.e. ground-area-based ectomycorrhizal enzyme activities) in P-deficient tropical forests. The reduced dependence of ECM fungi on exploration capacity in these forests may be related to the limited C supply from host trees. These mechanisms could contribute to maintaining the nutrient cycling and productivity of tropical forest ecosystems under nutrient (particularly P) deficiency. Although some important aspects of ECM fungi, such as the performance of the soil-exploring extraradical mycelium and the nutrient exchange between ECM fungi and host plants, remain open questions, we have highlighted the importance of the enzymatic activities of ECM, which reflect the nutrient acquisition strategies realized through ectomycorrhizal symbiosis in tropical rain forests.
Evolutionary trend of the broad-snouted crocodile from the Eocene, Early Miocene and recent ones from Egypt

In evolutionary biology, morphology is crucial. Every contemporary group that still exists today preserves relics of the evolutionary path taken by its predecessors. The only archosaurian reptiles that remain are the crocodilians. Crocodilians first appeared in the Early Cretaceous and quickly spread throughout the world. The order Crocodylia refers to a group of large, semi-aquatic carnivores found in nearly every kind of freshwater habitat in the tropics, subtropics, and some temperate regions. One of the three major groups comprising modern crocodylians is the true crocodiles, Crocodylidae (Crocodylinae and Osteolaeminae). The largest genus in the order Crocodylia is Crocodylus. Rostral shapes and proportions differ between taxa; therefore, based on rostral proportions, crocodylians can be broadly categorized into three groups: longirostrine, mesorostrine, and brevirostrine. Crocodilians are noted for their akinetic skulls, owing to the possession of a secondary palate. Fossils of mammals, selachians, teleosts, birds, and reptiles have been found in the Eocene–Oligocene rocks of the Fayum Province in Egypt; these rocks date from the Middle Eocene through the Upper Eocene and Lower Oligocene. From these deposits, two crocodylians were collected from the Upper Eocene–Lower Oligocene of the Fayum Province: Crocodylus articeps and Crocodylus megarhinus. Mook and Müller described an even more complete specimen of C. megarhinus, which was acquired by the American Museum of Natural History (AMNH) in 1907. Additionally, Mook described an unassociated mandible (AMNH FARB 5095), collected in 1909, that he assigned to C. megarhinus. In addition to these two crocodylians, Crocodylus sp. is one of the taxa that have been recorded in the Fayum region since the Eocene. Wadi Moghra is a fossil locality in the northern Sahara of Egypt that preserves a diversity of Early Miocene mammals and non-mammals; the non-mammalian fossils include turtles, crocodiles, lizards, fishes, and birds. Crocodylus lloydi, Tomistoma dowsoni, and Gavialis species were reported to make up the majority of the crocodylians from the Moghra Formation listed in the works of Fourtau and El Khashab. Wadi Moghra is the only locality from which R. lloydi is known with certainty (Brochu and Storrs). Fourtau initially identified Rimasuchus as C. lloydi at Moghra, Egypt. C. lloydi has been included in multiple phylogenetic studies, which pointed out a closer relationship with the extant African dwarf crocodile (Osteolaemus) than with Crocodylus; as a result, the species was reassigned to the new genus Rimasuchus. AbdelGawad clarified that the Moghra crocodylian assemblage includes four genera with high morphological variety: Tomistoma dowsoni, Rimasuchus lloydi, Euthecodon sp. and Crocodylus sp. The Gavialis material was finally identified as Euthecodon sp. The Nile crocodile occurs in about forty-two countries on the African continent. Shaker and El-bably presented anatomical data on the bones of the C. niloticus skull, which help in interpreting X-ray images and surgical affections of crocodile heads.
The X-ray images were useful for recognizing the paranasal sinuses, which assist in understanding the morphological organization of the skull. The results show bony differences between mammals, crocodiles and birds. Another study on Crocodylus used geometric morphometrics and geographic analysis; the author compared skulls of C. niloticus with those of other members of the genus Crocodylus in dorsal view to estimate interspecific and intraspecific differences. An osteological description of Crocodylus niloticus was provided, together with a model-based cluster analysis and morphological clusters irrespective of other factors, and the results support the presence of a cryptic species complex. Crocodylus niloticus sensu lato is among the largest living crocodylians, with reported lengths up to 6 m. There are differences within C. niloticus s.l.: the sample from the Congo River Basin differed significantly from those of other regions, including Nile River C. niloticus s.l. This research aims to compare the skulls of the Egyptian broad-snouted crocodilian fossils from the Eocene–Oligocene and the Miocene with that of the living crocodile in Egypt (Crocodylus niloticus).

Institutional abbreviations: Egyptian Geological Museum, Egypt (CGM); Cairo University Wadi Moghra collection and Cairo University Vertebrate Paleontology Lab, Geology Department, Egypt (CUWM and CUVP); Duke Lemur Center, Division of Fossil Primates, Duke University, USA (DPC); Natural History Museum, London, United Kingdom (NHMUK); American Museum of Natural History, New York, USA (AMNH); Peabody Museum of Natural History, Yale University, New Haven, CT, USA (YPM).

Anatomical abbreviations:
a, angular; ar, articular; as, alisphenoid; bo, basioccipital; bs, basisphenoid; cor, coronoid; cf., choanal fenestra; cqp, cranioquadrate passage; d, dentary; d1, dentary tooth postion 1; d4, dentary tooth postion 4; ect, ectopterygoid; emf, external mandibular fenestra; en, external naris; eo, exoccipital; eoa, external otic aperture; f, frontal; fae, foramen aereum ; fh, foramen hypoglossi; fim, foramen intermandibularis medius; fm, foramen magnum; gf, glenoid fossa; if, incisive foramen; imf, internal mandibular fenestra; itf, infratemporal fenestra; j, jugal; l, lacrimal; ldf, lacrimal duct foramen; lcf, lateral carotid foramen; lec, lateral eustachian canal; lsg, lateral squamosal groove; m, maxilla; mec, median Eustachian canal; mg, meckelian groove; mre, maxillary ramus of ectopterygoid; mt, maxillary teeth; n, nasal; o, orbit; oc, occipital condyle; oo, olfactory opening; ot, olfactory tract; otf, the orbito temporal foramen; pa., parietal; pal, palatine; pf, prefrontal; pfk; prefrontal knob, pm, premaxilla; pmt, primarily tooth; po, postorbital; pob; postorbital bar, pof, preotic foramen; pop, paroccipital process; pp, palatal process; pro; prootic, pt, pterygoid; ptf, post temporal fenestra; ptw, pterygoid wing; q, quadrate; qj, quadratojugal; rp, retroarticular process; sa, suprangular; sc, secondary choanae; soc, supraoccipital; sof, suborbital fenestra; sp, splenial; sq, squamosal; stf, supratemporal fenestra; sym, mandibular symphysis; v, vomer; vf, vagus foramen for IX, X,XI nerves; XII, exit foramen for 12th cranial nerve.
All the compared materials are cranial remains. The Eocene specimens included in this study are housed in the CGM, NHMUK, AMNH and YPM VP collections (Adams). Crocodylus articeps upper jaw: NHMUK R 3322 (cast of holotype), CGM (C. 10036) (holotype); Crocodylus articeps lower jaw: NHMUK R 3323 (cast of holotype), NHMUK R 3324, NHMUK R 3105, CGM (C. 10065) (holotype); Crocodylus megarhinus upper jaw: NHMUK R 3327 (holotype), FARB AMNH 5061, YPM VP-058532; Crocodylus megarhinus lower jaw: NHMUK R 3328, FARB AMNH 5095; Crocodylus species upper jaw: CGM 84425; Crocodylus species lower jaw: NHMUK R 3104. The Early Miocene specimens are preserved in the CGM, DPC, CUWM and NHMUK PVR collections. Crocodylus species upper jaw: CGM67106, CGM67107, CUWM90, CGM67123, DPC12548; Rimasuchus lloydi upper jaw: CGM67110, CGM67156, CGM73664, DPC6610, DPC6646, CGM67155, NHMUK PVR14154; Rimasuchus lloydi lower jaw: CGM67117, CGM67118, CGM67155. All specimens were photographed in dorsal, ventral, palatal, lateral, occipital and posterior views with a scale bar. Preparation of the C. niloticus skull included removal of the soft tissues using dissecting equipment. Cleaning and bleaching were carried out by soaking the skull in a metal container of boiling water with powdered detergent and powdered sodium carbonate; the remaining soft tissues were then removed manually under running water. The skull was subsequently soaked in hydrogen peroxide (H2O2) to whiten it, washed with water to remove any chemical residues, and finally left until dry. The prepared C. niloticus skull was photographed and drawn in dorsal, ventral, lateral, and occipital views. Measurements of the C. niloticus, Eocene and Miocene specimens were taken and illustrated. The biometric measurements of the investigated specimens were subjected to statistical analysis: the similarity and dissimilarity among the investigated taxa were assessed by cluster analysis using the single-linkage method and the Jaccard similarity index, and PAST software, version 4.13, was used to generate the dendrogram.
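For readers who want to reproduce this kind of analysis outside PAST, the sketch below shows one way to build a single-linkage dendrogram from a Jaccard distance matrix in Python. It is only an illustration: the specimen labels and the binary character coding are hypothetical placeholders rather than the measurements A–P of Table 1, and scipy is used here in place of the PAST software named in the text.

```python
# Minimal sketch of a single-linkage cluster analysis with the Jaccard index.
# Specimen names and the 0/1 character matrix are hypothetical placeholders;
# the study itself used the measurements A-P (Table 1) in PAST v4.13.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

specimens = ["C. niloticus", "C. megarhinus (YPM)", "C. megarhinus (CGM)",
             "Crocodylus sp.", "Rimasuchus lloydi", "C. articeps"]

# Hypothetical presence/absence coding (1 = character state present/shared).
characters = np.array([
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 1, 1, 0, 0],
    [1, 1, 1, 0, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0, 1, 0],
    [1, 1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 1, 1, 1],
], dtype=bool)

# Jaccard distance = 1 - Jaccard similarity, then single-linkage agglomeration.
distances = pdist(characters, metric="jaccard")
tree = linkage(distances, method="single")

dendrogram(tree, labels=specimens)
plt.ylabel("Jaccard distance (1 - similarity)")
plt.tight_layout()
plt.show()
```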
Lower jaw

The mandible is formed by two halves fused together anteriorly at the median mandibular symphysis via the dentary bones. Each half of the mandible is composed of six fused bones: the articular, angular, surangular, coronoid, splenial, and dentary. The mandible has oval-shaped external and internal mandibular fenestrae (Fig. ). The articular is a small bone at the most posterior end of the mandible and bears the glenoid fossa, which is the articulation surface for the quadrate. The articular articulates medially with the surangular and ventrally with the angular (Fig. ). On the medial surface of the articular, where the retroarticular process meets the glenoid cavity, lies the foramen aereum; it occurs at the inner margin of the base of the retroarticular process. The surangular is a dorsoventrally broad and long bone forming the dorsal mandibular margin. The surangular articulates medially with the articular, where it forms the lateral wall of the glenoid fossa, and ventrally with the angular. It rests between the dentary and splenial dorsally and projects forward, forming a part of the dorsal and posterior margins of the external mandibular fenestra. Its anterior contact with the dentary becomes somewhat sinuous and forms two anterior processes of unequal length (Fig. ). A postero-dorsal process of the dentary narrowly separates the surangular from the antero-dorsal margin of the external mandibular fenestra. The external mandibular fenestra is bounded by the surangular postero-dorsally, the dentary antero-dorsally, and the angular ventrally. The surangular–angular suture intersects the fenestra at its most posterior edge and not along its postero-ventral margin. The angular forms the ventral margin of the posterior part of the mandible and a part of the ventral and posterior margins of the external mandibular fenestra (Fig. ). Posterior to the mandibular fenestra, the angular contacts the surangular dorsally and the dentary anteriorly. Deep pits cover most of its ventral surface. The dentary is the largest bone in the mandible and carries 15 sharp conical lower teeth along its alveolar border (Fig. ). The 4th dentary alveolus is enlarged because the 4th dentary tooth is the largest one. Posterior to the last alveolus, the dentary is overlain by the surangular. Anteriorly, the dentaries meet at the midline and form the mandibular symphysis, which extends from the 1st dentary alveolus to the beginning of the 4th. The dentary comes into contact with the splenial at the level of the 6th dentary alveolus and extends posteriorly to form the anterior and dorsal borders of the external mandibular fenestra. The external suture between the dentary and angular is V-shaped, with a small process of the dentary underlying the level of the external mandibular fenestra but separated from it by a small flange of the angular. The splenial lies on the inner surface of the mandible. The paired splenial is a long bone that contacts the dentary dorsally and ventrally along its medial margins (Fig. ) and contacts the surangular postero-dorsally. The dorso-medial margin of the splenial reaches the coronoid, and the ventro-medial margin contacts the angular. The coronoid also occupies the inner surface of the mandible (Fig. ). The coronoid is relatively small and contacts the splenial anteriorly, the angular ventrally, and the surangular dorsally.
It forms the posterior and ventral surfaces of the foramen intermandibularis medius (Fig. ); its posterior surface is sharply concave. The foramen aereum is located at the lingual edge of the mandible at the base of the retroarticular process (Fig. ). The external mandibular fenestra is enclosed by the surangular postero-dorsally, the angular ventrally, and the dentary antero-dorsally (Fig. ). The suture between the surangular and angular intersects the external mandibular fenestra at its most posterior margin. The internal mandibular fenestra is surrounded by the splenial.
Systematic paleontology

Crocodyliformes
Mesoeucrocodylia
Neosuchia
Eusuchia
Crocodylia
Family Crocodylidae

The present study provides a description of a complete skull and mandible of C. niloticus (Figs. , , and ) and then compares it with skull descriptions of various broad-snouted crocodilian taxa from the Eocene and Miocene epochs. The crocodile skull is a diapsid, or "two-arched," reptile skull: a pair of large fenestrae, the supratemporal and lateral temporal fenestrae, lie in the dorsal wall of the skull behind the large orbits. The skull of C. niloticus is characterized by a long, broad, dorsoventrally compressed rostrum with long, sharp conical teeth. The rostrum bears numerous small openings, the neurovascular foramina, along the rostral alveolar borders, and small scattered pits on the dorsal surface become wider as they approach the orbits. The cranium is composed of the cranial and facial components.
Premaxilla is a paired bone that makes up the anterior portion of the rostrum. Dorsally, they are completely sub-circular. The premaxilla contacts with the nasal postero-medially and the maxilla postero-laterally then comes to a sharp point where the nasal, maxilla, and premaxilla intersect. The external naris lies dorsally in the premaxilla. There is a rugosity present lateral to the external naris that corresponds to the enlarged 4 th premaxillary alveolus. This rugosity expands the premaxilla laterally into the external naris, making it sub-circular in shape. Ventrally, the premaxilla intersects posteriorly with the maxilla only and terminates at the level of the second maxillary alveoli. The sutures between the premaxilla and maxilla are irregularly shaped almost making a W shape. There are five premaxillary conical teeth, different in the size of the alveoli, the second alveolus is smallest, whereas the fourth one is largest. Also, there are distinct gaps between the 1 st and 2 nd alveoli, the 3 rd and 4 th as well as the 4 th and 5 th ones. Maxilla is paired, broad, flattened dorsoventrally (Fig. ). It represents the largest bones in the snout. Dorsally, each one contacts with the nasal bones along their medial edge. The maxillae flare out laterally behind the premaxillae, narrow briefly posterior to the fifth alveolus, and then progressively expand posteriorly (Fig. ). It contacts with the lacrimal bone posteromedially and the jugal bone laterally where the sutures between them are irregular. There is a large tuberosity “A rounded protuberance” toward the anterior portion of each maxilla above the large fifth maxillary tooth (Fig. ). The maxillae are contact ventrally at the anterior midline but separated by the posterior palatines. There are fourteen maxillary teeth, the fifth maxillary tooth is the most robust and is circular, while the fourteenth is the smallest (Fig. ). The fifth alveolus is positioned at the maximum concavity. Posteriorly the alveoli become smaller and more oval. Small occlusal pits are visible between the maxillary alveoli from the third one to the eighth maxillary alveoli anterior to the level of the suborbital fenestra for coinciding with the lower tooth row. The maxillae and dentaries have an undulating, concavo-convex lateral contour known as festooned Posteriorly, the maxillary tooth row is bordered by the anterior process of the ectopterygoid. The maxillary foramen for the palatine ramus of cranial nerve V is small. (Fig. ). Nasal is a pair of long and relatively narrow bones. They are located dorsomedially on the rostrum. Laterally, each nasal is contacted with the maxilla, lacrimal, and prefrontal, respectively, from the anterior to posterior (Fig. ). Also, it is associated with the premaxilla anteriorly, the frontal posteriorly, and itself medially. Nasals narrow anteriorly where they intersect with the premaxillae and send a short process into the external naris called anterior processus nasalis. The posterior ends form acute points separated by the anterior process of the frontal. The nasals rejoined medially at the level of the anterior margin of the 10 th alveolus. The suture is obvious along the shared medial margin of the nasals, but it diverged slightly as they entered the distinctly pear-shaped external naris. The nasal breadth narrows anteriorly near the external naris and posteriorly between the frontals. 
At the approximate level of the most anterior edge of the maxillae, the nasals gradually widen and they gradually narrow at the level of the 6th maxillary alveolus while widen again, reaching their greatest width at the level of the 9th maxillary alveolus at the most anterior prefrontal margin level. Lacrimal lies anterior and lateral to the prefrontals. In the dorsal view, the medial surface of each lacrimal contacts with the nasal anteriorly and the prefrontal posteriorly. It contacts with the maxilla antero-laterally and the jugal postero-laterally (Fig. ). The posterior margin of the lacrimal makes up the most anterior margin of the orbit and bears a lacrimal duct (Fig. ). The maxilla-lacrimal suture is irregular. Prefrontal is approximately triangular in dorsal view (Fig. ). The anterior portion of each prefrontal is wedge-shaped and projects anteriorly. It contacts with the frontal postero-medially, the lacrimal laterally, and the nasal rostrally. Prefrontals comprise the antero-medial margins of the orbit. The medial prefrontal-frontal suture is straight but curves sharply posteriorly to intersect with the orbit (Fig. ). Ventrally, prefrontals send robust vertical processes called the prefrontal pillars that contact with the dorsal surfaces of the pterygoids and palatines (Fig. ). This process supports the secondary ossified palate. The prefrontal pillars do not contact with each other. Jugal bones are paired bones. They make up the majority of the ventral margins of both the lateral temporal fenestrae and orbits. Dorsally, it is surrounded by the maxilla anteriorly, lacrimal antero-medially, and the quadratojugal posteriorly (Fig. ). The jugal-lacrimal suture is irregular. The ventral margin of the postorbital bar is inset from the lateral jugal surface. The jugal narrows gradually towards the posterior, terminating in a sharp point just anterior to the end of the quadratojugal. In the ventral view, the jugal bone meets the ectopterygoid dorsoventrally. Quadratojugal is a pair of elongated bones. This bone is situated between the jugal antero-laterally and the quadrate postero-medially. The quadratojugals comprise the posterior margin of the lateral temporal fenestrae and send within them spine-like processes termed quadratojugal spines (Fig. ). The broad contact with the posterior part of the jugal is extremely elongate and postero-ventrally oriented. Quadrate is large, found at the postero-lateral wall of the skull. The distal borders of the quadrates make up hemicondyles that articulate with the articular bones in mandibles. In dorsal view, quadrate is located between the temporal and quadratojugal bones (Fig. ). The anterior portion of the quadrate is covered by the squamosal and in the occipital region the quadrate is overlain by the exoccipital medially toward the anterior side. Dorsally the quadrate contains on the medial portion foramen aereum (Figs. and a and b). Quadrates form the floor and posterior margin of the external otic aperture. Postorbital forms most of the posterior margin of the orbit as well as the anteromedial margins of the lateral-temporal fenestrae (Fig. ). Medially, postorbitals form sutures with the frontal and parietals, excluding the frontal from connecting with the supratemporal fenestrae. The suture between the postorbital and squamosal is oriented ventrally to the skull table (Fig. ). The postorbital has a descending process that contacts with the ascending process of the jugal and ectopterygoid to form the postorbital bar (Fig. ). 
Palatine is a long, paired bone and meets at the midline. Each palatine contacts with the maxilla anterio-laterally, and the pterygoid posteriorly (Fig. ). The sutures with the pterygoids are very irregular and transverse with a convex curve; they do not occur at the most posterior margin of the suborbital fenestra. Its lateral margin makes up the anteromedial edge of the palatal fenestra. The palatine has an elongated, anterior process that extends to approximately the level of the 6 th maxillary tooth position (Fig. ). The palatine-maxilla suture intersects with the suborbital fenestrae at the beginning of 9 th maxillary alveolus which is situated at the most anterior end of the suborbital fenestrae. Ectopterygoid is a paired, tripartite one. Ventrally, it contacts with each of the maxilla anteriorly and with the pterygoid lateromedially and postero-laterally in addition to forming the lateral boundary of the suborbital fenestra (Fig. ). Posterior to the suborbital fenestra, ectopterygoids are oriented postero-ventrally. Both ectopterygoids extend to the border of the 10 th maxillary alveoli so, it is parallel to the last five maxillary alveoli. The maxillary ramus of ectopterygoid is forked. The ascending process of the ectopterygoid articulates with the jugal and forms the lateral part of the lower half of the postorbital bar (Fig. ). Pterygoid is a paired, broad bone. In the ventral view, it is articulated anteriorly with the palatine (Fig. ). The pterygoids meet at the midline and contact with the latero-sphenoid. In connection with the palatines a narrow wedge of pterygoid creates a flared-out structure, intersecting the suborbital fenestra medially, so it makes the part of postero-medial margin of the palatal fenestra and meets the basisphenoid postero-ventrally. It consists of a body and wing; the body bears an unpaired opening along the midline, called the secondary choanae. Latero-ventrally the pterygoids form sutures with the ectopterygoids. Directly posterior to the intersection of the pterygoid and ectopterygoid there are two shallow recesses separated by a narrow ridge. Near the midline, dorsal to the internal choana, a thin lamina extends dorsally on each side, forming a V-shaped saddle, where the basisphenoid sits ventrally to the median eustachian foramen. The pterygoids completely enclose the duct behind the suborbital fenestrae before exiting at the internal choanae. Wings are dorso-ventrally thickened and form a buttress at the articular surface with the ectopterygoid, so it meets the ectopterygoid antero-laterally. Vomer is a single bone that divides the secondary choana into two cavities, or subfossae called the choanal fenestra (Fig. ).
The occipital bones constitute the most posterior part of the skull. They comprise four completely fused bones enclosing the foramen magnum: the supraoccipital, the basioccipital, and an exoccipital on each side. The supraoccipital is heart-shaped on the occipital surface dorsomedially. The supraoccipital is exposed on the dorsal surface of the most posterior part of the skull as a very narrow triangular wedge indenting the posterior margin of the parietal (Fig. ). The ventral margins of the supraoccipital come together in a V-shape to meet the exoccipitals in occipital view, so it contacts the exoccipitals laterally and ventrally (Fig. ). In occipital view, the supraoccipital is inset from the edge of the most posterior part of the skull and forms the medial margins of the post-temporal fenestrae (Fig. ). The exoccipitals meet at the midline dorsal to the foramen magnum, separating the supraoccipital from the foramen magnum (Fig. ). They extend laterally, forming the paroccipital processes; each process is bordered dorsally and anteriorly by the squamosal and ventrally by the quadrate (Fig. ). The exoccipitals are sutured with the squamosal dorso-laterally and the supraoccipital dorso-medially. The exoccipital meets the basioccipital ventrally. The exoccipitals form the dorsal and lateral margins of the foramen magnum, which is bounded ventrally by the basioccipital. Within the braincase, the exoccipital forms the most posterior portion of the roof of the cranial cavity and also contributes greatly to the posterior wall of the otic capsule. The basioccipital makes up the most ventral portion of the most posterior part of the skull. It contacts the exoccipital dorso-laterally in occipital view (Fig. ). It bears a single spherical occipital condyle, which articulates with the condyloid fossa of the atlas vertebra, forming the atlanto-occipital joint (Fig. ). Dorsal to the occipital condyle, it makes up the ventral boundary of the foramen magnum, which represents the posterior floor of the cranial cavity. It contacts the basisphenoid along its posterior margin (Fig. ). The prootic is visible as a small crescent-shaped bone. The prootics bound the trigeminal foramina posteriorly and contact the laterosphenoids anteriorly, while the quadrates cover them medially (Fig. ). The laterosphenoid lies ventral to the parietal. It is overlain in part by the postorbital along its lateral edge (Fig. ). Ventrally these bones contact the pterygoid, and posteriorly the quadrate and prootic, where they make up the anterior portion of the braincase and meet at the midline. Anteriorly they contact the main body of the frontal, where they form the ventral opening for the olfactory tract (Fig. ). Ventral to the olfactory opening they form the opening for the optic nerve. Anteriorly, the laterosphenoid bounds a deep, oval-shaped opening, the trigeminal foramen. The basisphenoid is bounded by the pterygoid anteriorly and the basioccipital posteriorly; it also articulates laterally with the quadrate and exoccipital. It is visible ventrally, just posterior to the secondary choana (Fig. ). Between the basioccipital and basisphenoid bones, behind the choana, there are a single opening of the median Eustachian tube and the paired openings of the lateral Eustachian canals (Fig. ). The frontal is a single bone.
Dorsally, it is bordered anteriorly by the nasals, antero-medially by the prefrontals, postero-laterally by the postorbitals, and posteriorly by the parietal (Figs. and ). The prefrontal-frontal suture is straight temporarily before curving sharply to intersect with the orbit. The frontal does not participate in the formation of supratemporal fenestrae. A deep and well-defined olfactory canal is formed by descending processes of the frontal. Parietal contacts with the frontal bone in the dorsal view, thus the fronto-parietal contact is concave anteriorly, it bounds to the squamosal postero-laterally. Also, it has a narrow contact with each of the postorbitals antero-laterally and the supraoccipital bones posteriorly (Fig. ). The ventral margin of the parietal makes up a portion of the post-temporal fenestrae. Most of the lateral margins of the parietal comprise the supratemporal fenestrae. The parietal expands to make up most of the interior portion of the supratemporal fenestra. The interfenestral bar is flat dorsally and narrow. Squamosal is paired and makes up the posterior and lateral portion of the skull table. It forms the postero-lateral margin of the supratemporal fenestra. It is sutured to the postorbital antero-dorsally, parietal laterally, and the exoccipital posteriorly. In the occipital view, the squamosal participates in the lateral margin of the post-temporal fenestra (Fig. ). This bone connects with the lateral temporal fenestrae anteriorly. The strongly developed descending process of the squamosal postero-dorsally overlies the quadrate and laterally overlaps the exoccipital (Fig. ). The most posterior part of the ventral edge of the temporal bone curved over the otic recess (Fig. ) and contacted with the quadrate bone. The squamosal is expanded laterally and postero-laterally where it forms the dorsal roof and a part of the posterior wall of the external otic aperture.
External naris is a large pear-shaped nose orifice situated dorsally on the premaxillae. It is bounded anteriorly and laterally by the premaxillae and posteriorly by nasal processes. About 2.5 cm separates the external naris from the tip of the snout. The length of the external naris is greater than its width, and the narial chamber is rather deep (Fig. ). Foramen for the first dentary tooth is a circular foramen in the shape dorso-ventrally. It presents on the premaxilla anterolaterally to the external naris and between alveoli 1–2 from the ventral view. The incisive foramen is another opening on the premaxillae posterior to the 1 st premaxillary alveoli, smaller than the external naris. This foramen has a pointed edge anteriorly and its posterior margin is separated into three lobes, the lateral two are short and rounded, while the medial one is narrow, pointed, and longer than the other two lobes (Fig. ). Orbit is surrounded by seven bones, the prefrontal, frontal, lacrimal, jugal, and postorbital (Figs. and ). The anterior wall of the orbit is bounded by the lacrimal bone while its medial wall is formed by the prefrontal and frontal. The jugal is located near the orbit’s border dorsoventrally at the margin of the orbit. The posterolateral orbital margin is formed by the postorbital bone. The postorbital bar forms the posterior margin of the orbit. The orbits are wider than supratemporal and lateral temporal fenestrae. Each orbit converges to a rounded point anteriorly (Fig. ). Along the anterior margin of the orbit, within the prefrontal-lacrimal suture lies the lacrimal foramen. There is a palpebral ridge along the medial portion of the orbit that expands dorsally. Supratemporal fenestrae is relatively sub circular in the shape. Dorsally, the supratemporal fenestrae are bounded laterally by the squamosal-postorbital suture, anteriorly by the postorbital-parietal, and posteriorly by the squamosal-parietal sutures (Fig. ). The supratemporal fenestra did not connect with the frontal due to its medial contact with the parietal and postorbital anteriorly. The flat and narrow interfenestral bar creates the medial walls of the supra-temporal fenestrae. The interfenestral bar is formed by the parietals and it is expanding ventrally (Fig. ). Lateral temporal fenestrae: Infratemporal fenestra is triangular and can be seen the process from quadratojugal. The infratemporal fenestrae are medium in size, being slightly larger than the supratemporal one while it is smaller than the orbit. The fenestra is bounded by the quadratojugal and quadrate posteriorly and the postorbital bar anteriorly, the squamosal medially, and the jugal laterally (Figs. and ). The external otic aperture (Recessa otica externa) is a deep opening located ventral to the overhang formed by the squamosal’s lateral border. The squamosal forms the external otic aperture’s dorsal roof and a portion of its posterior wall, while the quadrate forms the external otic aperture’s floor and part of the posterior boundary. The preotic foramen is located anterior to the external otic aperture (Fig. ). Olfactory opening is the opening for the olfactory tract which is formed by the connection between anterior part of laterosphenoids and ventral main body of frontal (Fig. ). Optic foramen is the opening formed by the laterosphenoids medially ventral to the olfactory opening. This foramen creates for the optic nerve. The orbito-temporal foramen is a small opening. It is situated in the posterior wall of the supra-temporal fenestra. 
It is surrounded dorsally by the squamosal, quadrate, and ventrally by parietal (Fig. ). Posttemporal fenestrae are small, crescent-shaped openings on the occipital region’s surface (Fig. ). The post-temporal fenestra is bounded by the squamosal dorso-laterally, the exoccipital ventro-laterally and by the supra-occipital ventro-medially and dorso-medially. The occipital veins pass through their ventral margin. They also contribute to the anterodorsal portion of the otic capsule. Palatal fenestrae (the suborbital fenestrae) are elongated oval fenestrae in the shape. They have a rounded end anteriorly. Palatal fenestrae is bounded anteriorly by the maxilla, postero-laterally by the ectopterygoid, posteriorly by the pterygoid and medially by palatines. These fenestrae extend from the beginning of the ninth maxillary teeth to the level posterior the fourteenth ones. Much above the anterior border of the suborbital fenestra is the anterior palatal process of the palatines (Fig. ). Trigeminal foramen (foramen ovale) is an oval, deep opening. This foramen lies on the skull’s lateral surface. The trigeminal foramen for the passage of the trigeminal nerve V. The quadrate expands forming the ventral margin and part of the posterior margin of it. It is surrounded posteriorly by the prootic and anteriorly, by the laterosphenoid (alisphenoid). Secondary choana lies ventrally at the skull’s posteromedial portion. Pterygoids completely enclosed it (Fig. ). It is divided by the vomer bone into two cavities called choanal fenestra. Median and lateral Eustachian canals lie in the depression between the basioccipital and basisphenoid bones ventrally. The lateral Eustachian canals are paired openings and the median Eustachian tube (foramen intertympanicum) is the singular opening. The openings of lateral Eustachian canals are presented posterolateral to the median one. The median eustachian opening is almost enclosed completely by the basisphenoid, although the basioccipital forms a small part of its posterior margin. The lateral eustachian openings connect with the basisphenoid anteromedially, anterolaterally and with the basioccipital posterolaterally (Fig. ). Foramen aereum is located on the quadrate’s medial dorsal surface at a corresponding opening on the articular of the mandible (Fig. ). It contributes to skull pneumatization (siphonium). Foramen magnum is a circular opening in the center of the occipital surface then lies above the occipital condyle (Fig. ). Laterally to the foramen magnum there are four foramina. The medial and small one is the foramen hypoglossal for the passage of the XII cranial nerve (Hypoglossal nerve), which lies directly lateral to the foramen magnum (Fig. ). Ventrally and slightly lateral to the foramen hypoglossal and just dorsal to the basi-occipital there is a foramen which is a much larger foramen “Lateral carotid foramen” which acts as the passage for the internal carotid artery (Fig. ). Lateral and near the base of the occipital condyle, the otoccipital is pierced by two foramina” The larger one was the foramen vagus for the passage of the IX, X, and XI nerves as well as the jugular vein. The dorso-lateral foramen (The cranioquadrate passage) is a foramen for each the facial nerve (VII) and the cranioquadrate. The cranioquadrate passage opens laterally beneath the paroccipital process.
The present study deals with the cranial description of the broad-snouted crocodile C. niloticus from Egypt, compares it with the Eocene and Miocene crocodiles of Egypt, and points out the similarities and differences. The Eocene Crocodylus species from Egypt are represented by NHMUK R 3327 (holotype), FARB AMNH 5061 and YPM VP-058532, referred to C. megarhinus, and NHMUK R 3322, referred to C. articeps by Andrews and Adams. There is also an unidentified specimen from the Egyptian Geological Museum (Egypt), numbered CGM 84425, recorded from the Eocene. The Early Miocene specimens referred to Rimasuchus lloydi include CGM67110, CGM67155, CGM67156, CGM73664, DPC6610, DPC6646 and NHMUK PVR14154, while the Crocodylus specimens include CGM67107, CGM67106, CUWM90 and DPC12548. The authors report the following morphological differences from CUVP001. The premaxillae in each of CUVP001, CGM84425, C. megarhinus, CUWM90, DPC6610, CGM67110, CGM67155 and NHMUK PVR14154 are sub-circular and completely rounded at the anterior end (Fig. a–j), but they are elongated, oval and laterally compressed in NHMUK R 3322. Also, in palatal view, the suture between the premaxillae and maxillae is irregular in CUVP001, forming almost a W-shape, and it is irregular in CUWM90, while in NHMUK R 3322 the same suture forms an inverted V-shape. By contrast, in C. megarhinus and CGM84425 the posterior branches of the premaxillae act as a barrier separating the anterior branches of the maxillae, which terminate in the notch between the premaxilla and maxilla as a gentle convex curve oriented posteriorly (V-shape), as shown by Adams. In NHMUK PV R14154, CGM73664 and DPC6610, the suture between the maxilla and premaxilla in ventral view is straight. In CGM67110 and CGM67155, the suture is not obvious. Regarding the extension of the premaxilla, the present study found that in CUVP001 it extends to the level of the second maxillary alveolus; this coincides with other cases such as CGM84425 and C. megarhinus. By contrast, in other specimens the premaxilla terminates, in ventral view, at the level of the 5th premaxillary alveolus in NHMUK R 3322; at the notch between the premaxilla and maxilla, sometimes extending to the middle of the 1st maxillary alveolus, in NHMUK PV R14154, CGM73664 and DPC6610; and between the first and second alveoli in CUWM90 (Fig. a–j). The premaxillary alveoli are oval and more elongated in CUVP001, whereas in NHMUK PVR14154, C. megarhinus, CGM84425, CUWM90, DPC6610, CGM67110, CGM67155, CGM73664 and NHMUK R 3322 the premaxillary alveoli are circular (Fig. a,b,e,g). From the measurements of premaxillary length, it is found that in CUVP001 the premaxillary length lies between those of NHMUK R 3322 and YPM VP-058532, but the width of the premaxilla at the level of the notch in CUVP001 is smaller than in YPM VP-058532 and NHMUK R 3322 (Fig. a,b). This indicates that the elongation of the premaxilla in CUVP001 exceeds that of C. megarhinus, while the premaxillae of both C. megarhinus and C. articeps are broader than that of C. niloticus. The width of the premaxilla at the notch is closest in YPM VP-058532 and NHMUK R 3322, but the premaxilla is longer in NHMUK R 3322 than in YPM VP-058532. Thus, this confirms that the C. articeps premaxilla is more elongated than that of C. megarhinus, so C. megarhinus and C. articeps are not synonymous as previously regarded.
The ratio of the breadth of the premaxilla at the level of the notch ("G") to the dorsal length of the premaxilla ("D") is 0.4 in both CUVP001 and NHMUK R 3322, almost 0.6 in both YPM VP-058532 and CGM84425, 0.7 in NHMUK PV R14154 and 0.8 in CUWM90 (a worked example of how such ratios are computed is sketched after this passage). Thus, the species closest to CUWM90 is R. lloydi. This ratio shows that R. lloydi has the largest breadth "G"; indeed, R. lloydi preserves the proportions of an extreme brevirostrine crocodile, whereas C. articeps and CUVP001 preserve a significant rostral elongation. The maxilla of CUVP001 contains 14 maxillary alveoli, and the same number was found in NHMUK R 3322 and CGM67106. In contrast, CGM84425, FARB AMNH 5061, YPM VP-058532 and NHMUK PV R14154 have 13 maxillary alveoli. This throws light on the possible close relationship between C. megarhinus and Osteolaemus, as previously suggested. In NHMUK R 3327, CGM67107, CUWM90, CGM67123, DPC12548, CGM67110, CGM73664 and DPC6610 the maxillae are not complete, so the number of maxillary alveoli is not given; likewise, the maxillary alveoli are not counted in CGM67156, DPC6646 and CGM67155 because their maxillae are broken. The 5th alveolus is the largest and the fourth is the second largest in these specimens (Fig. a,b), as well as in FARB AMNH 5061, CGM67106, CGM67107, CUWM90, DPC12548, CGM73664 and DPC6610. The largest maxillary alveolus cannot be determined in NHMUK R 3327, CGM67123 and CGM67110 because their maxillae are incomplete. Laterally, the maxilla–palatine sutures in CUVP001 are almost straight, united at a pointed anterior end at the midline, and posteriorly form a V-shape with the suborbital fenestrae. In NHMUK R 3322 the maxilla–palatine suture along the anterior and lateral margins is closest to that of CUVP001, but the maxillae are not complete posteriorly in NHMUK R 3322 (Fig. a,b). In contrast, in FARB AMNH 5061 and NHMUK PV R14154 the palatine–maxilla suture is laterally more irregular, consisting of concave and convex curves (Fig. a,b,e). In CGM84425 the maxilla–palatine suture is not clearly visible but is closest to that of C. megarhinus. This suture is broken in YPM VP-058532, CGM67107, CGM67106, CUWM90, CGM67123, DPC12548, CGM67110, CGM73664 and DPC6610. Dorso-laterally, the maxillary margins show the broadest curves in NHMUK PV R14154 and DPC12548, followed by CUWM90, C. megarhinus and CGM84425, in contrast to CUVP001 and NHMUK R 3322, in which they are narrowly curved (Figs. e,h and a–e). The anterior maxillary margins are not obvious in CGM67107 and CGM67106 and are broken in CGM67123, CGM67110, CGM73664 and DPC6610. In CGM67110, DPC6610, CGM73664, CUWM90, CGM67155 and NHMUK PV R14154 the occlusal groove (the notch between the premaxilla and the maxilla) is deeper than in CUVP001, CGM84425 and C. megarhinus, and it is least deep in NHMUK R 3322 (Figs. e,k and a–g,j,k). This notch is broken in CGM67107, CGM67106, CGM67123, DPC12548, CGM67156 and DPC6646. CGM67107 is identical to the description of NHMUK PV R14154 from the 4th to the 10th maxillary alveoli: the maxillae of both narrow at the level of the 7th maxillary alveolus, the 9th and 10th maxillary alveoli are larger than the 7th and 8th, the 5th maxillary alveolus is the largest, and the alveoli of both are circular in shape. Therefore, CGM67107 is referred to R. lloydi.
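The rostral-proportion comparisons above reduce to simple ratios of linear measurements. The snippet below is a minimal illustration of that arithmetic; the measurement values are hypothetical placeholders rather than the figures of Table 1, and the variable names G, D and H simply follow the measurement labels used in the text.

```python
# Hypothetical measurements (cm); real values are listed in Table 1.
# G = breadth of premaxilla at the notch (or smallest snout breadth),
# D = dorsal length of premaxilla, H = snout breadth at the 5th maxillary tooth.
specimens = {
    "CUVP001":         {"G": 4.0, "D": 10.0, "H": 8.0},
    "NHMUK R 3322":    {"G": 4.4, "D": 11.0, "H": 7.3},
    "NHMUK PV R14154": {"G": 7.0, "D": 10.0, "H": 17.5},
}

for name, m in specimens.items():
    g_to_d = m["G"] / m["D"]   # proxy for relative premaxillary breadth
    g_to_h = m["G"] / m["H"]   # proxy for snout constriction behind the "canines"
    print(f"{name}: G/D = {g_to_d:.1f}, G/H = {g_to_h:.1f}")
```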
CGM67106 and CUWM90 are identical in having the alveoli close to one another between the 6th and 9th alveoli, in contrast to NHMUK PV R14154, although the shape of the alveoli is the same. Also, CGM67123 and CGM67106 share the same morphology of the posterior part, represented by more elongated teeth and alveoli, and these features are similar to CUVP001; these two cases contrast with R. lloydi. CGM67106, CGM67123 and CUWM90 are probably referable to a new Miocene species that contains 14 alveoli and preserves a very broad snout, a snout form characteristic of the Miocene crocodilian R. lloydi. The measurements revealed that although the C. articeps snout breadth ("J") lies between those of YPM VP-058532 and CGM84425, the snout length ("C'") of C. articeps is longer than in both YPM VP-058532 and CGM84425 (Fig. b). The graph shows that snout length increases in parallel with snout width in the specimens except in C. articeps, which is more elongated. CUVP001 and YPM VP-058532 have the closest snout lengths, but CUVP001 shows a smaller breadth than YPM VP-058532. The widths across the 5th maxillary teeth ("canines") in CUVP001, CGM84425, NHMUK R 3322, CUWM90 and C. megarhinus are similar and do not exceed that of NHMUK PV R14154. From the ratio of the smallest breadth of the snout ("G") to the breadth of the snout at the fifth maxillary tooth ("H"), NHMUK PV R14154 has a ratio of 0.4 and CUWM90 a ratio of 0.71, while CGM84425 and CUVP001 share the same ratio of 0.5, and YPM VP-058532 and NHMUK R 3322 have ratios of approximately 0.6. This ratio indicates that CUVP001 is closer to CGM84425 than to YPM VP-058532 and NHMUK R 3322, while the ratio of NHMUK PV R14154 is the smallest because it has the largest snout breadth at the 5th maxillary alveoli. The suture along the medial margin of the nasals is obvious in CUVP001, C. megarhinus, CGM73664, CGM67155 and CGM84425. The reverse condition is found in NHMUK R 3322 and NHMUK PV R14154, in which the suture is obliterated along part of the shared medial margin of the nasals. The nasals of CUVP001 and NHMUK R 3322 reach their greatest width at the level of the 9th maxillary alveolus, at the level of the most anterior margin of the prefrontal. On the other hand, the nasals of NHMUK PVR14154 reach their greatest width much more anteriorly, far from the level of the beginning of the prefrontal, while in YPM VP-058532 the greatest width lies at the level of the most anterior margin of the lacrimals. Another condition is found in NHMUK R 3327 and FARB AMNH 5061, where the nasals reach their greatest width at the level of the 4th maxillary alveolus. In CGM84425 this is not clear, and the nasals are broken in all other specimens. The lacrimal is obvious in NHMUK R 3327, CGM67106, CGM73664, NHMUK R 3322 and CUWM90, and broken in the others. The prefrontal has the same shape in CUVP001, CGM73664, FARB AMNH 5061 and YPM VP-058532. In CUVP001 and CGM73664 the anterior end of the prefrontal lies above the frontal, but the anterior ends of the prefrontal and frontal lie at the same level in NHMUK R 3327, YPM VP-058532 and FARB AMNH 5061 (Fig. b,c). It is not obvious in CGM67156, NHMUK R 3327, NHMUK R 3322, CUWM90, CGM84425 and NHMUK PVR14154, and broken in the remaining specimens. The jugal has the same morphology in CUVP001, YPM VP-058532, FARB AMNH 5061, CGM84425 and NHMUK PVR14154.
It isn’t complete in CGM67106, CUWM90, CGM67123 and NHMUK R 3322, and broken in the other specimens. In the quadratojugal of CUVP001, there is a spine that enters infratemporal fenestra on each side. On contrary in the case CGM84425, this is not obvious on both two sides. It appears somewhat on the left side and is absent on the right one. The same case showed from the YPM VP-058532. Adams demonstrated that in the juvenile of C. megarhinus the quadratojugal spine is prominent but absent and is not obvious in the adult C. megarhinus . In the NHMUK R 3322, the CGM67106, CGM67107, CGM67123, DPC12548, CGM67110, DPC6610, CGM73664, CGM67155, NHMUK R 3327 and CUWM90 the quadratojugal is broken. In FARB AMNH 5061, CGM67156, DPC6646 and NHMUK PVR14154 a quadratojugal spine is not clear (Fig. a–e). The C. niloticus compared with C. megarhinus and CGM84425 shows that the width of the skull across the anterior end of orbits in relation to the width of the skull across the quadratojugals is the same 0.6 as where Mook suggested that C. megarhinus is closely related to C. niloticus , while it is wider within R. lloydi which equal to 0.7. Quadrate has the same morphological feature in CUVP001, YPM VP-058532, FARB AMNH 5061, CGM67156, DPC6646, CGM84425 and NHMUK PVR14154 and damaged in other specimens. Postorbital, and damaged in other specimens. The palatine is a long bone due to its anterior long process in case of CUVP001. In case of NHMUK R 3322, FARB AMNH 5061, and CGM84425 and NHMUK PV R14154 the palatine is short due to its short process. The palatine-maxilla suture intersects with the suborbital fenestrae at the beginning of the 9 th maxillary alveolus in CUVP001. But this suture was found at the middle of the 9 th maxillary alveolus in FARB AMNH 5061, CGM84425 and between 8 th and 9 th maxillary alveoli in NHMUK PV R14154. Lastly, it was not clear in NHMUK R 3322. In CUVP001, the anterior process merges at an acute tip at the level of the 6 th maxillary alveolus approximately above the anterior-most tip of the suborbital fenestrae which is at the beginning of the 9 th maxillary alveolus. Also, anterior process in each of NHMUK R 3322, FARB AMNH 5061 and CGM84425 merges at the level of the 8 th maxillary alveolus approximately above the anterior tip of the suborbital fenestrae which located at the 9 th maxillary alveolus. In NHMUK PV R14154 the anterior process of palatine merges at an acute tip between the 6 th and 7 th maxillary alveoli approximately above the anterior-most tip of the suborbital fenestrae which is between the 8 th and 9 th maxillary alveoli (Fig. a–e). So, the anterior process of palatine in CUVP001 is long. Palatine is broken in other specimens. Ectopterygoids extended anteriorly to the border of the 10 th maxillary alveolus in case of CUVP001 and FARB AMNH 5061, CGM84425, while in NHMUK R 3322 this bone extended to the border of the 11 th maxillary alveolus. In NHMUK PV R14154 extended anteriorly to the border between the 9 th and 10 th maxillary alveoli. In the present study on CUVP001 and CGM67123 the maxillary ramus of the ectopterygoid lies parallel to the last five maxillary alveoli and has a forked anterior tip. In other cases, it lies parallel to the last four maxillary alveoli such as FARB AMNH 5061, CGM84425, CGM67106 and NHMUK R 3322. A forked anterior tip appears in FARB AMNH5061, CGM84425 which referred to C. megarhinus . But that finding conflicts with Adams who demonstrated that C. megarhinus doesn’t contain a forked ectopterygoid. 
Also, Brochu showed that this forked anterior tip of the ectopterygoid is a specific character of the genus Crocodylus, whereas the maxillary ramus of the left ectopterygoid lies parallel to the last four maxillary alveoli and bears an unforked anterior tip in NHMUK PV R14154 (Fig. a–e). The ectopterygoid is damaged in the others. The pterygoid has the same morphology in CUVP001 and FARB AMNH 5061 and is broken in the other materials described herein. The vomer has the same morphology in CUVP001 and FARB AMNH 5061 and is broken in the other materials described herein. The supraoccipital has the same shape in dorsal view in CUVP001, FARB AMNH 5061, DPC6646, CGM84425, YPM VP-058532 and CGM67156. The supraoccipital is heart-shaped in CUVP001 and NHMUK PVR14154, where it is shorter than in CGM84425, YPM VP-058532 and CGM67156. In CUVP001 the parietal–squamosal suture lies internal to the lateral suture of the supraoccipital; on the contrary, in NHMUK PVR14154 and CGM67156 the suture of the supraoccipital is continuous with the parietal–squamosal suture in dorsal view, and this is not obvious in YPM VP-058532 and CGM84425. The supraoccipital is broken in NHMUK R 3322, DPC6610, CGM67106, CGM67107, CGM67123, DPC12548, CGM67110, CGM73664, CGM67155, NHMUK R 3327 and CUWM90, as is part of the triangular wedge of NHMUK PVR14154. The triangular wedge of the supraoccipital in dorsal view is narrower in CUVP001 than in YPM VP-058532, CGM84425, DPC6646 and CGM67156; YPM VP-058532 shows the widest triangular wedge of the supraoccipital. Comparing the width of the triangular wedge of the supraoccipital in dorsal view with the width of the bar between the supratemporal fenestrae across the different specimens, the wedge width is larger than the bar width in CUVP001, YPM VP-058532 and DPC6646, whereas the bar width is larger than the wedge width in CGM84425 and CGM67156. The exoccipital has the same shape in CUVP001, NHMUK PVR14154, DPC6646, CGM84425, YPM VP-058532 and CGM67156; it is not clear in FARB AMNH 5061 and damaged in the others. The basioccipital is complete in NHMUK PVR14154 and FARB AMNH 5061 and damaged in the others. The prootic is similar in morphology in CUVP001, DPC6646, NHMUK PVR14154, CGM84425, YPM VP-058532 and CGM67156; it is not clear in FARB AMNH 5061 and broken in the others. The laterosphenoid is similar where preserved and broken in the other materials described herein. The basisphenoid is similar in morphology in CUVP001, DPC6646, CGM84425, YPM VP-058532 and CGM67156; it is not clear in FARB AMNH 5061, not complete in NHMUK PVR14154, and broken in the others. The frontal has the same shape in CUVP001, YPM VP-058532 and FARB AMNH 5061. In CUVP001 and CGM73664 the anterior end of the frontal lies below the prefrontal, but the anterior ends of the frontal and prefrontal lie at the same level in NHMUK R 3327, YPM VP-058532 and FARB AMNH 5061 (Fig. b,c). At the same time, the anterior end of the frontal is not clear in NHMUK R 3322, CGM84425 and NHMUK PVR14154, not complete in DPC6646 and CGM67156, and broken in the other described materials. The parietal is similar where preserved and broken in the others. The squamosal is similar in morphology in CUVP001, DPC6646, CGM84425, YPM VP-058532, CGM67156, FARB AMNH 5061 and NHMUK PVR14154; it is broken in the others. There are differences in the shape and size of the external naris among CUVP001, DPC6610, CGM73664, CUWM90, CGM67155, C. megarhinus, NHMUK R 3322, CGM84425 and NHMUK PVR14154 (Fig. a–d,g,h,j).
CUVP001 and CGM73664 have a pear-shaped external naris that is bisected posteriorly by the anterior projections of the nasals. In C. megarhinus, CGM84425, CGM67155, DPC6610, CUWM90 and NHMUK PVR14154, the external naris is bisected posteriorly by the anterior projections of the nasals and its overall shape resembles an apple. In NHMUK R 3322 the shape of the external naris looks like that of CUVP001 but, on comparison, the naris is not bisected posteriorly by anterior projections of the nasals. The length of the external naris in CUVP001 is greater than its width, whereas the reverse holds in NHMUK PVR14154, CGM67155 and DPC6610. In other cases, such as CGM84425, YPM VP-058532 and NHMUK R 3322, the width of the external naris is equal to its length (Fig. a,b). It is broken in the other described materials. The foramen for the first dentary tooth is not clear in the fossils. The incisive foramina in CUVP001 and C. megarhinus have three posterior lobes (Fig. a–d,g,h); in NHMUK R 3322 only, the foramen has two posterior lobes, while in CGM84425, DPC6610 and NHMUK PVR14154 the foramina are semi-circular in shape. It is not clear in CGM67155 and broken in the remaining specimens. The anterior side of the orbit in CUVP001 and NHMUK PVR14154 is rounded. In NHMUK R 3322, only the anterior margin of the orbit is preserved, with rounded borders and slightly pointed anterior ends. The orbits in CGM84425, YPM VP-058532 and FARB AMNH 5061, as reported by Adams, are oval in shape with rounded borders and slightly pointed anterior ends (Fig. a–e). They are damaged in the remaining materials. The shape of the supratemporal fenestrae is the same in CUVP001, C. megarhinus, CGM67156, DPC6646, CGM84425 and NHMUK PVR14154; in NHMUK R 3322 the supratemporal fenestrae are damaged. The space between the two fenestrae is very narrow in C. niloticus, DPC6646 and Rimasuchus lloydi, while it is wide in CGM67156, C. megarhinus and CGM84425 (Figs. a–g and b). It is broken in the others. The infratemporal fenestrae are similar in morphology in CUVP001, CGM84425, YPM VP-058532, FARB AMNH 5061 and NHMUK PVR14154 and broken in the others. The external otic aperture in CGM84425, C. megarhinus, CGM67156 and NHMUK PVR14154 is identical to that of CUVP001; it is not clear in DPC6646 and damaged in the remaining specimens. The olfactory opening is similar in morphology where preserved and damaged in the rest of the specimens. The optic foramen is similar in its external features in CUVP001, DPC6646, CGM67156 and YPM VP-058532; it is not clear in CGM84425, FARB AMNH 5061 and NHMUK PVR14154, and damaged in the other specimens. The orbito-temporal foramen is similar in its external features in CUVP001 and YPM VP-058532; it is not clear in DPC6646, CGM67156, CGM84425, FARB AMNH 5061 and NHMUK PVR14154, and damaged in the other specimens. The post-temporal fenestrae are similar in morphology in CUVP001, CGM67156, NHMUK PVR14154 and YPM VP-058532; they are not clear in DPC6646, CGM84425 and FARB AMNH 5061, and damaged in the other described specimens. The palatal fenestrae have the same shape in CUVP001, FARB AMNH 5061, CGM84425 and NHMUK PVR14154; they are not complete in CUWM90, NHMUK R 3322, NHMUK PVR14154, CGM84425 and CGM67106, and damaged in the remaining specimens. The trigeminal foramen has the same shape in CUVP001, CGM84425, CGM67156 and YPM VP-058532; it is not clear in NHMUK PVR14154, FARB AMNH 5061 and DPC6646, and damaged in the remaining specimens.
The shape of choana in the CUVP001 looks like the heart where the posterior border of it is concave toward the exterior. But, in the FARB AMNH 5061 the shape of choana is triangular in the shape and its posterior border is straight where Adams confirmed that the choana does not appear to be notched along the posterior rim. The choana is damaged in the other described specimens herein. Median and lateral Eustachian canals have the same shape in the CUVP001, CGM84425, FARB AMNH 5061, DPC6646 and YPM VP-058532. It is not clear in CGM67156. It is damaged in the rest specimens. Foramen aerium has the same shape in the CUVP001 and NHMUK PVR14154. It is not clear in CGM84425, FARB AMNH 5061, DPC6646 and YPM VP-058532. It is damaged in the rest specimens. Foramen magnum is sub-circular in CUVP001 while it has anterior concave curve and posterior V-shape in CGM67156 and YPM VP-058532. In CGM67156 and YPM VP-058532, the width of the foramen magnum is larger than their length. In the CUVP001, the width of the foramen is almost equal to its length. So, the CUVP001 has almost the same width of the foramen magnum in case of YPM VP-058532 but the foramen magnum length in CUVP001 is longer than that in YPM VP-058532. While CGM67156 foramen magnum has approximately the twice width of that in CUVP001, so foramen magnum of CGM67156 is broader than that in CUVP001. The distance of the exoccipital separate between supraoccipital and foramen magnum is larger in the CUVP001 than in the CGM67156 and YPM VP-058532. It is not clear in DPC6646, CGM84425 and FARB AMNH 5061 and not complete in NHMUK PVR14154. It is damaged in the rest specimens. Occipital condyle has the same shape in the CUVP001, CGM84425, FARB AMNH 5061, CGM67156 and YPM VP-058532. It is not complete in DPC6646 and NHMUK PVR14154. It is damaged in the rest specimens. Its function is articulation with the first vertebrae so, it is found that it increases in size as the skull size increases. The angle made by the intersection of a line parallel to the skull table lateral margin and sagittal plane is larger in NHMUK PV R14154, CGM84425, YPM VP-058532 than in CUVP001. This angle in CGM84425 equals 9° to that of YPM VP-058532 see (Table ). From morphological features and different ratios of measurements, it is found that CGM 84425, Crocodylus sp. from Eocene epoch refers to C. megarhinus . From studying the morphological features of the lower jaw of the Eocene specimens which included NHMUK R 3324, NHMUK R 3323 cast of C. 10065, NHMUK R 3105 and CGM (C. 10065) are represented C. articeps , NHMUK R 3328 and FARB AMNH 5095 are represented C. megarhinus , NHMUK R 3104 is represented Crocodylus species. In addition to the lower jaw of the Miocene specimens which included CGM67117, CGM67118, and CGM67155 which are represented Rimasuchus lloydi . The comparison between the previous specimens and CUVP001 demonstrated the following: The complete mandible of CUVP001 are identical to C. articeps , NHMUK R 3328 and FARB AMNH 5095 in the number of the alveoli. All contain 15 alveoli. The alveoli shape are elongated and more oval in NHMUK R 3104, CUVP001, and C. articeps . In contrast to their shape in the NHMUK R. 3328, FARB AMNH 5095, CGM67117, CGM67118, and CGM67155 which are circular. In C. niloticus; C. articeps; C. megarhinus; R. lloydi and Crocodylus sp. lower jaws, the 4 th alveolus is the largest one. The smallest alveolus is the third in CUVP001, NHMUK R 3328, FARB AMNH 5095, NHMUK R 3324, NHMUK R 3323, NHMUK R. 3105 and CGM (C. 10065). 
In addition to that, there are differences in the second large alveolus after the 4 th one. The 1 st alveolus is the second large one in CUVP001 and C. articeps . In contrast to the previous case in NHMUK R 3328 and FARB AMNH 5095 which is 10 th alveolus. The third and fourth alveoli are very close to each other. They were only separated by a bony septum not the same in all specimens. The bony septum with a width of 0.3 cm in CUVP001 and ranging from 0.4 to 0.6 cm in C. articeps , 0.5 in CGM67117 and 0.6 cm is shown in NHMUK R 3328. The shape of the external mandibular fenestrae is not the same in all specimens. In CUVP001, the external mandibular fenestrae is identical to that of NHMUK R 3328 and FARB AMNH 5095 which have a broad rounded posterior margin and narrow pointed anterior edge. In the case of C. articeps and NHMUK R 3104 this fenestra looks like a convex lens which has narrow pointed anterior and posterior margins then broad at the middle of the fenestrae. The suture between the suraangular and angular is straight. It intersects the external mandibular fenestrae in the middle of its posterior margin in CUVP001, NHMUK R 3328 and FARB AMNH 5095. In contrast, the suture is straight and then turned upward making a concave curve in C. articeps and NHMUK R 3104. From the dorsal view of mandible of NHMUK R 3328 the symphysis length is 14.28 cm and the width of each lower jaw at the posterior of symphysis equal 7 cm. From the dorsal view of mandible of NHMUK R 3324, NHMUK R 3323, NHMUK R 3105 the symphysis length are 11.5 cm, 8.4 cm and 12.8 cm respectively and the width of each lower jaw at the posterior of symphysis of these specimens equal 4 cm, 3.8 cm and 5.3 cm respectively. The lower jaw is thick with a symphysis long measures 4 cm, 7.5 cm and 16.4 cm respectively in CGM67117, CGM67118 and CGM67155. The symphysis width measures 5.1 cm in CGM67118. While in CUVP001 is narrow with a short symphysis measured 3.2 cm in length and 1.6 cm in width. So, the least symphysis length is in CUVP001. The anterior part of mandible is very wide in C. articeps , CGM67118 and NHMUK R 3328 but is narrow in CUVP001. In CUVP001, the fourth mandibular tooth is sharp and less wide and bulbous than of the CGM67155. The fourth mandibular tooth measured 0.8 cm in width in the CUVP001 while in CGM67155 it attained 1.8 cm in diameter. It is found that the shape of preserved teeth is conical in NHMUK R 3324, NHMUK R 3323 and CGM (C. 10065). They are not obvious in NHMUK R 3104 and FARB AMNH 5095. None of the teeth are existed in NHMUK R. 3328 and NHMUK R. 3105. So, the mandibular teeth are wide and thick in C. articeps , in contrast to the CUVP001 which are thin and more concavely curved toward the interior side. The cluster analysis (Fig. ) of the different morphometric measurements mentioned in Table (from A to P) for these specimens shows four clusters. Cluster number 1 includes only one species is which is less similar to C. niloticus . From the Eocene epoch, Cluster number 2 consists of three different species the first two are the most 2 species similar to each other almost up to 0.97. The third species is the recent crocodile which has a high degree of similarity to the first two species approximately 0.97. Cluster number 3 forms from only one species Rimasuchus lloydi which is similar to a cluster number 2. Cluster number 4 forms from only one species Crocodylus articeps which keeps similarity to the species in clusters number 2 and 3 but the closest one is Rimasuchus lloydi . 
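As a rough illustration of how a cluster analysis of the morphometric measurements (A to P in the table) can be reproduced, the sketch below runs an agglomerative hierarchical clustering in Python. The file name, the standardization step and the linkage method are assumptions for the example and are not the authors' original workflow.

```python
# Minimal sketch (my own, not the authors' code) of a hierarchical cluster
# analysis of specimen morphometrics; "morphometrics.csv" is a placeholder.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# rows = specimens (e.g., CUVP001, CGM84425, ...), columns = measurements A-P
morpho = pd.read_csv("morphometrics.csv", index_col=0)

# standardize so measurements on different scales contribute equally
z = (morpho - morpho.mean()) / morpho.std(ddof=0)

dists = pdist(z.values, metric="euclidean")      # pairwise specimen distances
tree = linkage(dists, method="average")          # UPGMA-style agglomeration
clusters = fcluster(tree, t=4, criterion="maxclust")  # cut the tree into four clusters
print(dict(zip(morpho.index, clusters)))
```

A dendrogram drawn from the same linkage matrix would correspond to the similarity values reported for the clusters above.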
Thus, in agreement with the morphological comparisons, this analysis confirms that the specimens closest to the recent crocodile (C. niloticus) are FARB AMNH 5061 and YPM VP-058532, which are referred to C. megarhinus, together with CGM 84425, rather than NHMUK PVR14154. The most probable ancestor of the recent C. niloticus is therefore C. megarhinus from the Eocene epoch rather than Rimasuchus lloydi from the Miocene epoch. The cluster analysis, together with the morphological comparisons, also shows that the Crocodylus sp. specimen CGM84425 is nearly identical to both YPM VP-058532 and FARB AMNH 5061, which are referred to C. megarhinus, apart from a very few differences that may be attributable to differences in the maturity, health, or sex of each individual. The comparison further suggests that the Crocodylus specimen NHMUK R 3104 may be more closely related to NHMUK R 3322.
This study demonstrated the considerable differences in the morphological features and dimensions of the cranial skeleton between the living crocodile of Egypt, C. niloticus, and the extinct broad-snouted species from the same country across different epochs, and it clarified the relationships among them. The Eocene broad-snouted crocodile species are C. articeps and C. megarhinus; the Miocene broad-snouted species are R. lloydi and Crocodylus sp., which preserves the broadest snout, a feature characteristic of Miocene crocodilians. The dimensions confirm that, although C. articeps has a longer snout than C. megarhinus at a comparable snout breadth, C. articeps is a broad-snouted rather than a long-snouted crocodile. The measurements and morphological characters also show that C. articeps and C. megarhinus are not the same species at different ages, because they differ in the number and shape of the maxillary alveoli and in the ratio of snout width to snout length, which indicates that the snout of C. articeps is more elongated than that of C. megarhinus. C. niloticus, C. articeps and C. megarhinus differ from R. lloydi and Crocodylus sp. in both measurements and morphology, which show that R. lloydi and Crocodylus sp. have the shape and proportions of an extreme brevirostrine crocodile, a condition absent in the other species. Both the Eocene and the Miocene crocodiles, including Rimasuchus lloydi, are close to C. niloticus in skull proportions, but the morphological characters indicate that C. megarhinus is the closest species and a probable ancestor of C. niloticus. The morphology of the mandibles shows that the lower jaw of C. niloticus differs markedly from that of Rimasuchus lloydi and is closer to those of both C. megarhinus and C. articeps. The morphometric and morphological results for NHMUK PVR14154 indicate that Rimasuchus lloydi is similar to cluster number 2 and to FARB AMNH 5061, but is especially similar to CGM84425, which suggests that Rimasuchus combines characteristics of both C. megarhinus and C. niloticus. The cluster analysis and morphological results for NHMUK R 3322 indicate that C. articeps possesses unique characters that do not appear in the other species, together with a few similarities to Rimasuchus lloydi, FARB AMNH 5061, CGM84425 and C. niloticus, respectively. The Egyptian Eocene Crocodylus is the ancestor of all known broad-snouted species recorded from Egypt since the Eocene, and the species closest to the Eocene forms is the living Crocodylus niloticus; this implies that most of the broad-snouted crocodiles of Egypt are endemic. Climatic change and tectonic events may have caused the extinction of these species, so it is important to protect the only surviving species in Lake Nasser and to prevent its hunting in order to preserve environmental biodiversity.
Comparison of saliva and blood derived cell free RNAs for detecting oral squamous cell carcinoma | eafff9d1-87c0-473f-ba59-21f8b0a8426c | 11806035 | Biopsy[mh] | Oral cancer, a major subtype of head and neck cancers, significantly impacts public health. In 2020, cases of lip and oral cancer accounted for 377,713 new incidents and led to 177,757 deaths globally . More than 90% of oral cancer originates from the squamous tissues, hence widely known as oral squamous cell carcinoma (OSCC) . Early detection and continuous monitoring of OSCC are vital for improving patient outcomes and prognosis. Currently, traditional diagnosis and monitoring of oral cancer through tissue biopsies, while effective, often cause significant pain and discomfort for patients. In contrast, liquid biopsy presents a more favorable alternative. This method typically involves detecting biomarkers such as cell-free RNA (cfRNA), cell-free DNA (cfDNA), circulating tumor cells (CTCs), circulating tumor DNA (ctDNA) and circulating tumor RNA (ctRNA), proteins, and exosomes from blood, urine, saliva, seminal plasma, pleural effusions, cerebrospinal fluid, sputum, and stool samples – . Specifically in oral squamous cell carcinoma (OSCC), research predominantly focuses on blood (serum or plasma) or saliva samples. For example, for cfDNAs, Human Arthobacter luteus (ALU) retrotransposon, beta-2-microglobulin and Mitochondrial gene from saliva samples were found might be potential OSCC diagnostic biomarkers. For extracellular vesicles (EVs), miR-210 and miR-503-3p in EVs from blood samples could have potential prognostic value for OSCC. Currently, despite its promise, liquid biopsy research in OSCC is still in its nascent stages and warrants further investigation , , . So far, there have been no studies exploring cfRNA transcriptomics in OSCC, an area that presents a valuable research opportunity. In our study, we collected blood (plasma) and saliva samples from three distinct groups: individuals with OSCC, those with oral benign tumors, and a normal control group. We performed RNA sequencing on these samples to identify robust cfRNA biomarkers specific to OSCC. Our goal is to enhance early OSCC screening through non-invasive methods and improve patient outcomes and diagnostic efficiency.
Immune infiltration profile differences among the saliva groups
Because GSEA revealed significant positive enrichment of a range of immunity-related pathways in the OSCC saliva group compared with all other groups, we examined the immune profile in more detail. Using ESTIMATE, we quantified the immune infiltration level of each group. At the transcriptomic level, immune scores were significantly higher in the OSCC group than in all other groups (Fig. A) (p < 0.05). We then performed immune cell deconvolution to assess the infiltration of specific immune cell types. Neutrophil infiltration was significantly increased in the OSCC group compared with the other groups (p < 0.05), whereas B cells, non-regulatory CD4+ T cells and regulatory T cells (Tregs) showed lower infiltration in the OSCC group (Fig. C) (p < 0.05). We also examined immune differences in the blood samples and found no significant differences (at p < 0.05) among the groups in the blood cohort (Fig. B).
Sample similarities on different liquid mediums
After normalizing the data, we used principal component analysis (PCA) to compare sample similarities across the different liquid biopsies. In the saliva cohort, the tissue characteristics (OSCC, benign oral tumor, normal healthy control) could be roughly separated from each other along dimension 1 (32.1% of variance explained) versus dimension 2 (6.3% of variance explained) (Fig. A). It was more difficult to distinguish OSCC from the other groups in the blood cohort in the primary dimensions (Fig. B). This indicated that saliva may be a more effective medium for OSCC detection than blood (plasma).
Potential liquid biomarkers specific to OSCC
We then conducted differential expression analyses in both the blood and saliva samples. To balance a low false positive rate against marker discovery potential, we applied a statistical threshold of false discovery rate (FDR) < 0.1. First, we found many significantly differentially expressed RNAs between the OSCC group and the benign oral tumor group in both the saliva and blood samples, indicating distinct differences between these groups at both levels (Fig. A and B) (FDR < 0.1). We then compared RNA expression between the OSCC group and the normal healthy control group. Notably, significant differential expression of RNAs was observed only in the saliva samples (Fig. C) (FDR < 0.1); no significant RNA expression differences were observed between the OSCC group and the normal healthy control group in blood samples (Fig. D) (FDR < 0.1). This suggests that detecting RNA differences between the OSCC group and the normal healthy control group is more challenging in blood samples than in saliva samples. Additionally, we compared the molecular profiles of the benign oral tumor group and the normal healthy control group in both saliva and blood samples. No significant molecular differences were observed in saliva samples (Fig. E) (FDR < 0.1), whereas differences were found in blood samples (Fig. F) (FDR < 0.1). This may indicate greater molecular variability in blood samples than in saliva samples. Furthermore, to identify the most relevant RNAs, we intersected the differentially expressed genes from the OSCC versus normal control comparison with those from the OSCC versus benign tumor comparison. This analysis in the saliva group revealed seven intersecting genes. Upon examining the normalized expression values (FPKM) of these genes, we found that CLEC2B (C-Type Lectin Domain Family 2 Member B) was significantly upregulated in the cancer group compared with both the benign tumor and normal control groups (Fig. A) (p < 0.05). F9 (Coagulation Factor IX), DAZL (Deleted in Azoospermia-Like), and AC008735.2 were significantly downregulated in the cancer group (Fig. B, C and D) (p < 0.05). The genes CSH1, AC010325.1, and FP236383.2 showed no significant changes (Figure ) (p < 0.05).
Differences in biological processes among the saliva groups by gene set enrichment analysis (GSEA)
To investigate potential biological processes in the saliva samples and to capture subtle molecular changes, we performed GSEA at several comparison levels (OSCC vs. benign oral tumor; OSCC vs. normal control). As shown in Fig. , the signals of many immune-related pathways were increased in OSCC compared with the other groups, including neutrophil activation, antigen processing and presentation, myeloid leukocyte activation, and natural killer cell-mediated cytotoxicity (FDR < 0.01). To identify pathways common to both comparison settings, we intersected the significantly enriched pathways. This analysis confirmed increased immune signaling in the OSCC saliva group compared with the other groups for pathways related to antigen processing and presentation, neutrophil activation, natural killer cell-mediated immunity, and granulocyte activation (Figure ). These findings suggest distinct immune profile differences in the saliva of OSCC patients compared with that of healthy individuals or those with benign oral tumors.
Currently, physical examination (visual screening and palpation of suspicious lesions) and imaging studies, followed by pathological biopsy of the suspected site, constitute the conventional approach for OSCC detection . While physical examination and imaging studies support earlier diagnosis, they can also lead to underdiagnosis or misinterpretation . Liquid biopsy, with its non-invasive and objective characteristics, may provide supplementary information to aid in the early and more accurate detection of OSCC. However, its adoption in oral cancer diagnosis remains limited compared with other cancer types, highlighting the need for further research in this area . In our study, we utilized state-of-the-art high-throughput sequencing to investigate the potential of cfRNAs for diagnosing OSCC using blood (plasma) and saliva samples. Notably, our study is one of the first to focus on cfRNAs (mRNAs and lncRNAs) in the context of OSCC, providing valuable insights into this underexplored field. Additionally, while previous research typically compared cancerous samples with normal controls, our study uniquely included a benign oral tumor group, thereby enhancing the reliability and comprehensiveness of our findings. Our results indicated that blood (plasma) samples might not be the ideal liquid medium for cfRNA-based liquid biopsy in OSCC, as evidenced by the inability to distinctly separate groups in PCA and the lack of significantly differential genes at the OSCC versus normal control level. In contrast, saliva samples showed promise as a reliable medium for cfRNA liquid biopsy, demonstrated by the presence of significantly differential genes across all comparisons. We identified saliva-derived CLEC2B, DAZL, F9, and AC008735.2 as potential diagnostic biomarkers for OSCC; none of these markers had been reported before. In our study, CLEC2B emerged as the most significant marker, showing significant upregulation exclusively in the OSCC group compared with both the benign oral tumor and normal control groups. CLEC2B encodes a member of the C-type lectin/C-type lectin-like domain (CTL/CTLD) superfamily. Members of this family share a common protein fold and have diverse functions, such as cell adhesion, cell-cell signaling, glycoprotein turnover, and roles in inflammation and the immune response. The gene is closely linked to other CTL/CTLD superfamily members on chromosome 12p13 in the natural killer gene complex region , . Based on functional enrichment results, CLEC2B's localization and activity at the plasma membrane align with its detection in saliva samples. It was enriched in many immune-related pathways, consistent with the results from GSEA and immune infiltration analysis. These observations suggest heightened CLEC2B-related immune reactions in the oral environment of OSCC patients. To date, no studies have directly linked CLEC2B with OSCC; however, its role in other cancers provides valuable insights. Dufva et al. examined the immunogenomic landscape of hematological malignancies using large-scale genomic datasets and found that CLEC2B acts as an immunomodulatory gene in myelodysplastic syndrome (MDS)-like acute myelogenous leukemia (AML) and is linked to poor prognosis . Xu et al. used bulk and single-cell sequencing data of pancreatic cancer and found that CLEC2B exhibited close associations with infiltrating Treg cells, suggesting its involvement in Treg cell functions, and that CLEC2B was associated with poorer prognosis in pancreatic cancer patients .
These findings highlight its significant role in cancer immunity and offer a direction for future research on CLEC2B in the context of OSCC. F9 was significantly downregulated in the OSCC group compared with the other groups. F9 encodes the vitamin K-dependent coagulation factor IX, which circulates in the blood as an inactive zymogen. This factor is converted to an active form by factor XIa, which excises the activation peptide and thus generates a heavy chain and a light chain held together by one or more disulfide bonds. The role of this activated factor IX in the blood coagulation cascade is to activate factor X to its active form through interactions with Ca2+ ions, membrane phospholipids, and factor VIII. Alterations of this gene, including point mutations, insertions and deletions, cause factor IX deficiency, a recessive X-linked disorder also called hemophilia B or Christmas disease. Alternative splicing results in multiple transcript variants encoding different isoforms that may undergo similar proteolytic processing . The observed downregulation of F9 in OSCC suggests potential alterations in blood coagulation processes in OSCC patients. Another downregulated gene, DAZL, encodes an RNA-binding protein that is normally expressed in prenatal and postnatal germ cells of males and females . DAZL has been identified as a novel cancer germline gene that can promote progression and cisplatin resistance in non-small cell lung cancer (NSCLC) . In glioblastoma, Zhang et al. found that DAZL was upregulated in glioblastoma tissues and cell lines; DAZL-knockdown glioblastoma cells showed decreased proliferation, migration, invasion, and resistance in vitro, and inhibited glioblastoma initiation in vivo . As no studies have linked DAZL with OSCC, the mechanism underlying the downregulation of DAZL in the saliva of OSCC patients warrants further research. We also found a downregulated lncRNA (AC008735.2) in the OSCC saliva dataset. In head and neck squamous cell carcinoma, Li et al. identified AC008735.2 as an immune-related lncRNA that could be used to predict response to immune checkpoint blockade and patient prognosis using The Cancer Genome Atlas (TCGA) head and neck data . Some studies have linked lncRNAs to N6-methyladenosine (m6A) modification; Huang et al. used bladder cancer data from TCGA and identified AC008735.2 as a potentially up-regulated, prognostically relevant m6A-associated lncRNA . Lastly, we identified a distinct immune infiltration profile in the saliva of the OSCC group relative to all other groups, which supports our findings of OSCC-related markers, suggests immune dysregulation in the oral environment of OSCC patients, and warrants further investigation. Our study has several limitations that should be addressed in future research. First, the small sample size limits the generalizability of our findings. In follow-up studies, it will be essential to validate these OSCC-specific saliva biomarker candidates in a larger and more diverse saliva cohort. It is also important to consider oral potentially malignant disorders (OPMDs), such as leukoplakia, erythroplakia, lichen planus, lupus erythematosus (LE), and oral submucous fibrosis. Differentiating these conditions in clinical practice can be challenging and often requires tissue biopsy and histopathological examination , .
If OSCC-specific saliva biomarkers prove applicable in distinguishing these cases, they could significantly benefit patients with benign oral lesions, sparing them from unnecessary and uncomfortable incisional biopsies. Secondly, while we identified differentially expressed genes, the underlying mechanisms of their expression changes remain unexplored. Future studies should aim to elucidate these mechanisms. Thirdly, we have not assessed the potential of these biomarkers for predicting therapeutic responses or their prognostic value in OSCC. Investigating these aspects could reveal broader applications for these biomarkers. Lastly, despite our efforts, completely eliminating technological and group biases proved challenging. Some samples, particularly blood samples, may have been of relatively lower quality. Future studies could benefit from more rigorous cohort selection and the adoption of standardized protocols to enhance data quality and improve interpretative accuracy. In summary, our study conducted a comprehensive transcriptome-wide analysis of cfRNA profiling in OSCC, benign oral tumor, and normal control groups, utilizing both blood (plasma) and saliva samples. We observed that saliva medium was more indicative as liquid biopsy targets for diagnosing OSCC compared to plasma samples, which did not yield significant findings. Specifically, in the saliva samples, CLEC2B was significantly upregulated in the OSCC group in comparison to the other groups, while DAZL, F9, and AC008735.2 showed significant downregulation in the OSCC group. Finally, there was a higher neutrophil infiltration and lower B cells, CD4 + T cells (non-regulatory) and regulatory T cell (Tregs) infiltration in OSCC saliva group when compared to other saliva groups. These findings highlight the potential of saliva-based cfRNAs for early, non-invasive diagnosis of OSCC. However, further research is necessary to validate these findings and assess their clinical applicability.
Sample information and ethics
In our study, participants were categorized into three groups (OSCC group, oral benign tumor group, normal control group). Participants were enrolled if the following inclusion criteria were met: (1) for the OSCC and oral benign tumor groups, a histological diagnosis of OSCC or a benign oral tumor, respectively; (2) for the normal control group, no abnormality on physical examination. Exclusion criteria were: (1) recurrent or metastatic disease, (2) previous treatment with radiation or chemotherapy, and (3) a history of synchronous or metachronous cancers. In total, we collected 30 paired blood and saliva samples (10 paired samples per group) from the Cancer Hospital of Shantou University Medical College. All participants provided informed consent, the Ethical Committee of the hospital approved the study protocol, and research involving human participants was performed in accordance with the Declaration of Helsinki. Ten milliliters of whole blood and saliva were separately collected from each individual in the morning after fasting using vacutainer tubes. The samples were then stored in a low-temperature environment to ensure their stability and integrity for subsequent analyses.
RNA library preparation and quality control
The general procedures were as follows: (1) DNase I digestion; (2) ribosomal RNA (rRNA) removal; (3) first-strand cDNA synthesis; (4) first-round polymerase chain reaction (PCR) amplification; (5) purification of PCR products; (6) second-round PCR amplification; (7) a second round of purification of PCR products; (8) quality assessment. After quality control checks, 5 samples from the OSCC group and 2 samples from the normal control group in the blood (plasma) category did not meet our quality standards, and in the saliva group 2 normal control samples failed quality control. These samples were excluded from subsequent analyses. The Fragments Per Kilobase of transcript per Million (FPKM) expression levels of the remaining samples are presented as boxplots in Figure A and B.
Quantification
Next-generation sequencing was performed on the NovaSeq 6000 platform. Reads were aligned to the human Ensembl genome GRCh38 using the HISAT2 aligner (v2.1.0) . Reads mapped to the genome were then counted using the featureCounts tool (v1.6.3) .
Statistical analysis
Principal component analysis (PCA) was conducted using FactoMineR . Differential gene expression analysis was performed using the edgeR R package (v3.40.2) . The false discovery rate (FDR) was applied to control for multiple testing and reduce false positives, with FDR < 0.1 set as the threshold for statistical significance. At the individual gene level, a Wilcoxon signed-rank test was conducted as an additional statistical measure, with p < 0.05 considered statistically significant.
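As an illustration of this kind of workflow, the sketch below approximates the PCA, per-gene testing and FDR-control steps in Python. The study itself used R packages (FactoMineR, edgeR), so the file names, the rank-based test and the scikit-learn/statsmodels calls shown here are stand-ins rather than the original pipeline.

```python
# Illustrative sketch of PCA, per-gene testing and FDR control (assumed inputs).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from scipy.stats import ranksums
from statsmodels.stats.multitest import multipletests

# expr: genes x samples matrix of normalized expression; groups: sample labels
expr = pd.read_csv("saliva_fpkm.csv", index_col=0)                 # hypothetical file
groups = pd.read_csv("saliva_groups.csv", index_col=0)["group"]    # OSCC / benign / control

# 1) PCA on samples (samples as rows, log-transformed genes as columns)
pcs = PCA(n_components=2).fit_transform(np.log2(expr.T + 1))

# 2) Per-gene two-group comparison (OSCC vs. control) with a rank-based test
oscc = expr.loc[:, groups == "OSCC"]
ctrl = expr.loc[:, groups == "control"]
pvals = np.array([ranksums(oscc.loc[g], ctrl.loc[g]).pvalue for g in expr.index])

# 3) Benjamini-Hochberg FDR control at the 0.1 threshold used in the text
reject, qvals, _, _ = multipletests(pvals, alpha=0.1, method="fdr_bh")
significant_genes = expr.index[reject]
```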
Function annotation
For interpretation, Gene Set Enrichment Analysis (GSEA) based on Gene Ontology (GO) terms (cellular component, CC; molecular function, MF; biological process, BP) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) – was performed using the clusterProfiler R package (v3.6.0) . Gene information from GeneCards ( https://www.genecards.org/ ) was consulted for interpretation , and the PathCards module of the GeneCards Suite ( https://pathcards.genecards.org/ ) was used for annotation . FDR < 0.01 was considered statistically significant.
Immune infiltration analysis
To characterize the immune profile, ESTIMATE (Estimation of Stromal and Immune cells in Malignant Tumor tissues using Expression data) was used to quantify the immune infiltration level of each group (v1.0.13) . We then performed immune cell deconvolution to assess immune cell infiltration using immunedeconv (v2.1.0) . The Wilcoxon signed-rank test was used for statistical comparisons, with p < 0.05 considered statistically significant.
Effects of GABA | 97d906e8-4a50-4461-9875-b571aeeb3b3b | 5734767 | Physiology[mh] | Contrast is an important parameter in assessing visual function. A person with reduced contrast sensitivity will have difficulty with many common daily tasks, such as detecting curbs or stairs, reading facial expressions, and driving at night. In clinical practice, contrast sensitivity charts are widely used to test the ability of a patient to perceive small differences in luminance between adjacent surfaces. In patients with retinal degenerative diseases, such as retinitis pigmentosa and age-related macular degeneration, contrast sensitivity may be diminished while visual acuity is still good as determined with a standard eye chart [ – ]. The neural mechanisms underlying the contrast sensitivity reduction are currently unknown. In both retinitis pigmentosa and age-related macular degeneration, there is a loss of photoreceptors with concomitant remodeling of cells within the inner retina (for review see , ). Details of the changes that emerge within the inner retina following degeneration of photoreceptors have come primarily from studies conducted in animal models of retinitis pigmentosa. Horizontal cells and bipolar cells, which are postsynaptic to photoreceptors, appear to be affected initially. Horizontal cells retract their dendrites and may grow processes directed towards in inner plexiform layer . Bipolar cells also retract their dendrites , and in ON bipolar cells there is a down-regulation of dendritic mGluR6 receptors and TRPM1 channels [ , , ]. Amacrine cells, which are postsynaptic to bipolar cells, are likewise affected. Morphological alterations in one type of amacrine cell–the AII amacrine cell–have been described in several animal models of retinitis pigmentosa [ , , ]. In addition, these amacrine cells show elevated phosphorylation of the gap junction subunit Cx36 , which may increase electrical coupling between AII amacrine cells. In the inner retinas of degenerate retinas, alterations in the expression of AMPA, glycine, GABA A , GABA C and NMDA receptors have been described . Increased levels of synaptic proteins in both bipolar cells and amacrine cells in the degenerate retina have also been reported , suggesting increased synaptic activity in these cells. These and very likely other, yet to be discovered, changes that take place in inner retinal neurons may contribute to the loss of contrast sensitivity in the patients with retinitis pigmentosa and age-related macular degeneration. Previously, I showed that the GABA C R antagonist TPMPA and the mGluR1 antagonist JNJ16259685 increase the sensitivity of retinal ganglion cells (RGCs) in the P23H rat model of retinitis pigmentosa to brief flashes of light . The effects of these receptor antagonists are likely due to actions on cells in the inner retina since the receptors for these antagonists are found predominately on cell processes within the inner retina . In the interest of determining how TPMPA and JNJ16259685 may affect contrast sensitivity of RGCs, I have investigated the effects of these receptor antagonists on the responses of RGCs in P23H and SD rat retinas to a drifting sinusoidal grating of various contrasts.
Animals
P23H-line 1 homozygous rats and Sprague-Dawley (SD) rats of 30–41 weeks of age were used in this study. Breeding pairs of P23H-line 1 homozygous rats were donated by Dr. Matthew LaVail (University of California, San Francisco). SD rats were obtained from Harlan Laboratories (Indianapolis, IN). The room light was kept on a 12 hr light/dark cycle using standard fluorescent lighting. During the light cycle, the illumination at the level of the cages was 100–200 lux. Both male and female animals were used. All procedures were approved by the VA Boston Healthcare System Committee on Use and Care of Animals (Protocol Number: 304-J-060514). All surgery was performed in euthanized animals, and all efforts were made to minimize animal stress.
Extracellular recordings
Following euthanasia of a rat with sodium pentobarbital (150 mg/kg, i.p.), an eye was removed and hemisected under room light. After removal of the vitreous, the eyecup was submerged in carboxygenated (95% O2, 5% CO2) Ames' Medium (supplemented with 2 g/L sodium bicarbonate and 1.5 g/L d-glucose). A square piece of retina measuring ∼2–3 mm on each side was cut out with Cohan-Vannas spring scissors (Fine Science Tools, Foster City, CA) and transferred with the ganglion cell side down onto a 64-channel planar Muse MEA (Axion Biosystems Inc., Atlanta, GA) with 30 μm-diameter nano-porous platinum electrodes at a 200 μm center-to-center spacing. To anchor the preparation, a piece of porous (30 μm pores) polycarbonate membrane (Sterlitech Corp., Kent, WA) was placed on the retina, and this membrane was in turn held down by a nylon ring. To maintain viability of the retina, a gravity-flow system administered the carboxygenated Ames' Medium at a flow rate of 1.5 ml min−1. The temperature of the bath was maintained at 31 to 33°C with an in-line heater (Warner Instruments, Hamden, CT). The retina was superfused for at least 20 min before data acquisition to permit stabilization of spike amplitudes. Raw data were digitized at 20 kHz and stored on a hard disk for offline analysis. Spike detection of single action potentials was performed using the Axion Biosystems software with a voltage threshold of 5–6 times the standard deviation of the noise over 200 Hz high-pass filtered traces. Principal component analysis of the spike waveforms was used for sorting spikes generated by individual cells (Offline Sorter, Plexon).
Visual stimulation
Visual stimuli were generated with the PsychoPy (v1.81) package and delivered to an overhead projector (Toshiba TDP-T420 DLP). The images from the projector were minified with external lenses, directed into the camera port of a Nikon microscope, and focused onto the photoreceptor surface of the retina with a 10X microscope objective. Visual stimuli consisted of drifting sinusoidal gratings presented with a mean illuminance that equaled that of the background. The mean stimulus illuminance was adjusted by neutral density filters positioned adjacent to the projector output. The mean stimulus illuminance, measured with a digital lux meter (model 840020; Sper Scientific LTD, Scottsdale, AZ), was either 15 or 60 lux. (15 lux corresponds to 4.3 μW/cm2 as measured with an ILT900-R spectroradiometer from International Light Technologies.) The spatial frequency of the sinusoidal gratings was held constant at 1 cycle/mm, and the temporal frequency was held constant at 2 cycles/s. All gratings were presented within a circular patch of 2.4 mm diameter, centered over the MEA. The neurons were tested with eight values of contrast (0, 4, 6, 8.5, 13, 26, 51, and 83%). Contrast was defined by the Michelson formula, 100% × (Lmax − Lmin)/(Lmax + Lmin), where Lmax and Lmin are the maximum and minimum illuminance levels of the sinusoidal grating. At each grating contrast, seven trials were presented. Each trial started with a 4 s presentation of a uniform field of the same mean illuminance as the grating. The drifting sinusoidal grating was then shown for 6 s. An interval of 20 s between trials was chosen to minimize possible effects of stimulation history.
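The following short Python sketch shows how the Michelson contrast values listed above map onto the luminance extremes of the grating for a given mean illuminance; the helper-function names are my own and are not from the study's stimulus code.

```python
# Minimal sketch of the Michelson contrast definition used in the text.
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Percent contrast, 100 * (Lmax - Lmin) / (Lmax + Lmin)."""
    return 100.0 * (l_max - l_min) / (l_max + l_min)

def grating_extremes(mean_lux: float, contrast_pct: float) -> tuple[float, float]:
    """Return (Lmax, Lmin) of a sinusoidal grating with the given mean and contrast."""
    c = contrast_pct / 100.0
    return mean_lux * (1.0 + c), mean_lux * (1.0 - c)

# Example: the contrast series used in the experiments at 15 lux mean illuminance
for contrast in (4, 6, 8.5, 13, 26, 51, 83):
    l_max, l_min = grating_extremes(15.0, contrast)
    assert abs(michelson_contrast(l_max, l_min) - contrast) < 1e-9
```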
Drugs
The mGluR1 antagonist JNJ16259685 (Tocris Bioscience) and the GABA C R antagonist TPMPA (Tocris Bioscience) were added to the bath at 0.5 μM and 100 μM, respectively, using a calibrated syringe pump, as described previously . Only one drug per retinal preparation was used to avoid possible long-term changes caused by the drug. The effects of a drug were examined only after the drug was bath applied for ~10 min to ensure stable responses.
Data analysis
Sorted spikes from RGCs were imported into Neuroexplorer software (Nex Technologies) to create post-stimulus time histograms (PSTHs) with a 10 ms bin width, averaged across 7 repetitions of the same contrast. After discarding the first second at the beginning of each histogram (since cells often responded to the onset of the grating), each histogram was Fourier transformed with OriginPro10 software (OriginLab Corp.) to obtain the amplitude of the fundamental stimulus frequency (F1). The response amplitude of each cell was obtained by subtracting the baseline F1 amplitude determined with 0% grating contrast from the F1 amplitude obtained at each contrast level. The response amplitudes were used to construct a contrast response function, which was fitted with the hyperbolic ratio function, also known as the Hill equation: R = Rmax × C^n / (C50^n + C^n), where R is the response amplitude, Rmax represents the maximum response amplitude, C is the stimulus contrast, C50 represents the contrast that produces Rmax/2, and n is a fitting exponent that determines the shape of the contrast response function. Group comparisons of response amplitudes to various grating contrasts between drug-treated and control (pre-drug) conditions were conducted with a two-tailed Student's t-test. P values were corrected for multiple comparisons using the Holm-Bonferroni method, and Holm-corrected P values < 0.05 were deemed significantly different. Medians are used to report contrast threshold data since for some cells the contrast threshold value was immeasurable (i.e., exceeded the highest contrast stimulus tested). Group comparisons of contrast thresholds were conducted with either the Wilcoxon signed-rank test or the Mann-Whitney U test, as appropriate. P values < 0.05 were considered statistically significant.
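As a rough sketch of this analysis pipeline, the Python example below computes the F1 amplitude of a PSTH, fits the hyperbolic ratio (Hill) function, and inverts the fit to obtain the contrast threshold at 2 spikes/s. The PSTH layout, the example response values and the helper names are assumptions, not the author's original code.

```python
# Illustrative sketch: F1 amplitude, Hill fit and contrast-threshold estimate.
import numpy as np
from scipy.optimize import curve_fit

def f1_amplitude(psth_hz: np.ndarray, bin_s: float = 0.01, stim_hz: float = 2.0) -> float:
    """Amplitude (spikes/s) of the fundamental (2 cycles/s) component of a PSTH."""
    n = psth_hz.size
    freqs = np.fft.rfftfreq(n, d=bin_s)
    spectrum = np.fft.rfft(psth_hz) / n
    k = np.argmin(np.abs(freqs - stim_hz))     # bin closest to the stimulus frequency
    return 2.0 * np.abs(spectrum[k])           # single-sided amplitude

def hill(c, r_max, c50, n):
    """Hyperbolic ratio (Hill) contrast response function."""
    return r_max * c**n / (c50**n + c**n)

contrasts = np.array([4, 6, 8.5, 13, 26, 51, 83], dtype=float)   # percent contrast
responses = np.array([1.0, 2.5, 4.0, 7.5, 14.0, 19.0, 21.0])     # hypothetical spikes/s

params, _ = curve_fit(hill, contrasts, responses, p0=[20.0, 15.0, 1.5], maxfev=10000)
r_max, c50, n = params

# Contrast threshold: contrast at which the fitted function gives R = 2 spikes/s
r_thr = 2.0
threshold = c50 * (r_thr / (r_max - r_thr)) ** (1.0 / n)
```

Inverting the fitted function in this way mirrors the definition used in the Results, where the contrast threshold is the contrast producing a response of 2 spikes/s.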
Behavioral experiments to evaluate contrast sensitivity in rats commonly present sinusoidal gratings of various contrasts [ – ]. Only one study to my knowledge has reported on the responses of rat RGCs to a drifting sinusoidal grating that varied in contrast. I will therefore begin by describing the contrast response functions of RGCs in SD and P23H rat retinas before describing the effects of the GABA C R antagonist TPMPA and the mGluR1 antagonist JNJ16259685 on responses of the RGCs to the same grating stimuli.
Contrast response functions of SD and P23H rat RGCs
Of the 69 P23H rat RGCs analyzed, only 10 cells showed response saturation. shows the contrast response function averaged from these saturating RGCs. Compared with saturating RGCs in the SD rat retina ( ), the P23H rat RGCs were less responsive to the drifting grating and less sensitive to changes in low contrast. shows the contrast response function averaged from the non-saturating RGCs (n = 59). The contrast response function was similar to that of non-saturating SD rat RGCs ( ). Contrast thresholds were determined for both saturating and non-saturating P23H rat RGCs. The data are displayed as two box plots in . For the population of saturating P23H rat RGCs, the median contrast threshold was 21.8%. For the population of non-saturating P23H rat RGCs, the median contrast threshold was 18.7%. The difference between the medians was not statistically significant (P = 0.878).
Effects of GABA C R and mGluR1 antagonists on contrast response functions of SD rat RGCs
Of the 116 SD rat RGCs that were examined in the previous section, 43 cells (3 retinas) were treated with the GABA C R antagonist TPMPA and 38 cells (3 retinas) were treated with the mGluR1 antagonist JNJ16259685. Of the cells treated with TPMPA, 15 cells were saturating RGCs. In these cells, TPMPA significantly reduced the response amplitudes to contrasts ranging from 6 to 13% by 36–51% and increased the response amplitude to 83% contrast by 38%. TPMPA significantly increased the response amplitude by 33% to the highest contrast (83%) tested. For the population of saturating SD rat RGCs, the median contrast thresholds were 4.01% before application of TPMPA and 4.36% after application of TPMPA to the retina. For the population of non-saturating SD rat RGCs, the median contrast thresholds were 17.0% before application of TPMPA and 16.1% after application of TPMPA to the retina. The difference between the medians was not statistically significant (P = 0.487). Of the cells treated with JNJ16259685, 13 cells were saturating RGCs and 25 cells were non-saturating RGCs. The contrast response functions averaged from these cells before and after application of JNJ16259685 showed that JNJ16259685 had no statistically significant effect on the responses at any contrast. For the population of saturating SD rat RGCs, the median contrast thresholds were 3.36% before application of JNJ16259685 and 2.86% after application of JNJ16259685. The difference between the medians was not statistically significant (P = 0.100). For the population of non-saturating SD rat RGCs, the median contrast thresholds were 11.0% before application of JNJ16259685 and 10.9% after application of JNJ16259685 to the retina.
Effects of GABA C R and mGluR1 antagonists on contrast response functions of P23H rat RGCs
I previously found that both TPMPA and JNJ16259685 increase the sensitivity of P23H rat RGCs to brief flashes of light, shifting the intensity-response curves to the left . A leftward shift of the intensity-response curve is equivalent to removing a neutral density filter in front of the light source. I hypothesized that in the presence of either TPMPA or JNJ16259685 the responses of P23H rat RGCs to the drifting grating would be similar to those obtained by increasing the mean illuminance of the grating (i.e., removing a neutral density filter in front of the light projector). I therefore tested the effects of TPMPA and JNJ16259685 on the responses of P23H rat RGCs with the mean illuminance of the grating set at 15 lux, which is the same mean illuminance that was used in examining the effects of TPMPA and JNJ16259685 on SD rat RGCs. Of the 84 P23H rat RGCs that were described previously, 35 cells (4 retinas) were treated with TPMPA and 49 cells (5 retinas) were treated with JNJ16259685. The 10 RGCs that were identified previously as saturating RGCs (based on the cells' responses at 60 lux mean illuminance) were analyzed separately from the other cells, considering the finding that TPMPA had a differential effect on these cells in SD rat retinas (see ). Of the 35 RGCs examined with TPMPA, 7 cells were saturating RGCs. Before and after application of TPMPA, no response was observed from any of these cells at 4% contrast; at higher contrasts, the averaged response amplitudes increased by 13–182% in the presence of TPMPA. Statistically significant increases were obtained only with grating contrasts of 26% and 51%. For the non-saturating RGCs (n = 28), contrast response functions were averaged before and after application of TPMPA. Again, before and after application of TPMPA, no response was observed from any cell at 4% contrast. At higher contrasts, TPMPA increased the response amplitudes on average by 35–300%. Statistically significant effects were observed with contrasts from 13% to 83%. Box plots summarize the effects of TPMPA on the contrast thresholds of saturating and non-saturating P23H rat RGCs. For the population of saturating P23H rat RGCs, the median contrast thresholds were 78.7% before application of TPMPA and 25.3% after application of TPMPA to the retina. The difference between the medians was statistically significant (P = 0.047). For the population of non-saturating P23H rat RGCs, the median contrast thresholds were 63.2% before application of TPMPA and 30.5% after application of TPMPA to the retina. The difference between the medians was statistically significant (P < 0.001). Of the 49 RGCs treated with JNJ16259685, only 3 cells were saturating RGCs. Contrast response functions were averaged from these saturating RGCs before and after application of JNJ16259685. Before and after application of JNJ16259685, no response was observed from any cell at 4% contrast. Before the application of JNJ16259685, no response was observed from any cell at either 6% or 8.5% contrast, and only one cell showed responses at these contrasts in the presence of JNJ16259685. At higher contrasts, JNJ16259685 increased the response amplitudes by 158–508%, and statistically significant increases were obtained. For the non-saturating RGCs (n = 46), contrast response functions were averaged before and after application of JNJ16259685. At 4% contrast, no cell showed a response before addition of JNJ16259685 to the bathing solution, and in the presence of JNJ16259685 only 1 cell elicited a response. At higher contrasts (6% to 83%), JNJ16259685 significantly increased the response amplitudes by 132–388%. Box plots summarize the effects of JNJ16259685 on the contrast thresholds of saturating and non-saturating P23H rat RGCs, respectively. For the population of saturating P23H rat RGCs, the median contrast thresholds were 53.2% before application of JNJ16259685 and 13.2% after application of JNJ16259685 to the retina.
However, the difference between the medians was found not to be statistically significant (P = 0.250). Clearly data on more cells are needed since this very small sample size (n = 3) has a reduced chance of detecting a true effect. For the population of non-saturating P23H rat RGCs, the median contrast thresholds were 54.0% before application of JNJ16259685 and 28.4% after application of JNJ16259685 to the retina. The difference between the medians was statistically significant (P < 0.001).
In this study I examined the effects of the GABA C R antagonist TPMPA and the mGluR1 antagonist JNJ16259685 on the responses of both SD and P23H rat RGCs to a drifting sinusoidal grating of various contrasts. Consistent with previous observations in the primate retina, some RGCs clearly show response saturation at high contrasts whereas others do not . As in the primate retina, those RGCs in the SD rat retina that show response saturation are more sensitive to low contrast–the median contrast threshold of saturating SD rat RGCs is about 3-fold lower than the median contrast threshold of non-saturating SD rat RGCs. In behavioral investigations conducted on rats, Keller et al. and Furtak et al. found contrast thresholds to be 12–15%, whereas McGill et al. and Douglas et al. found contrast thresholds to be close to 5%. Differences in methodological approaches may explain the variation in contrast thresholds. In the present study, taking contrast threshold as response amplitude of 2 spikes/s, I found that the median contrast threshold of saturating SD rat RGCs is ~ 5%. P23H rat RGCs respond poorly to the drifting sinusoidal grating at the mean stimulus illuminance (15 lux) that was used to collect data from SD rat RGCs. P23H rat RGCs did respond better when the mean stimulus illuminance was increased 4-fold to 60 lux. This finding is perhaps not surprising, given the loss of photoreceptors and diminished light sensitivity of remaining cone photoreceptors in these animals. Even at the higher mean illuminance, the median contrast threshold of saturating P23H rat RGCs is still (~4-fold) higher than that found for saturating SD rat RGCs. In fact, the median contrast thresholds of saturating and non-saturating P23H rat RGCs are similar (~ 20%). Interestingly, fewer saturating P23H rat RGCs were recorded in the present study. Whereas 49% of SD rat RGCs showed response saturation at high contrasts, only 14% of P23H rat RGCs showed response saturation at high contrasts. In the primate retina, Purpura et al. reported that M cells (RGCs that project to the magnocellular layers of the lateral geniculate nucleus) show response saturation to a drifting sinusoidal grating whereas P cells (RGCs that project to the parvocellular layers of the lateral geniculate nucleus) do not. In the present study, I did not differentiate between M-like and P-like cells. Rats differ from primates in that the great majority of RGCs projects to the superior colliculus rather than the lateral geniculate nucleus [ – ]. Recent studies have shown that around 30 distinct types of RGC may exist in the rat retina [for review, see ]. Which specific cell types exhibit response saturation will need to be determined in future experiments. It is noteworthy that Purpura et al. reported that the shape of the contrast response curve of primate M cells is sensitive to the mean grating illuminance. At low mean stimulus luminance the contrast response function of M cells rises less steeply at low contrast (i.e., contrast gain is reduced) and does not show response saturation. It is therefore possible that more cells in the P23H rat retina would have shown response saturation if a higher mean grating illuminance had been used. In P23H rats, I found that both JNJ16259685 and TPMPA increase the responses of saturating and non-saturating RGCs to all grating contrasts. Similar increases in responses are observed when the mean illuminance of the grating was increased from 15 lux to 60 lux. 
The effects of TPMPA and JNJ16259685 could be explained by an increase of the synaptic gain between (excitatory) bipolar cells and RGCs. GABA C receptors, which are ligand-gated chloride channels, are found predominately on axon terminals of bipolar cells . I previously hypothesized that in the degenerate retina there is an overstimulation of GABA C receptors . TPMPA would eliminate this GABA-mediated inhibition and thus the attenuation of light-evoked excitatory potentials in the axon terminals of bipolar cells. Previously, I reported that the effects of JNJ16259685 on the responses of P23H rat RGCs to flashes of light are similar to those of TPMPA . I had postulated that the effects of JNJ16259685 may be mediated through a reduction in release of GABA onto GABA C receptors. This mechanism would also explain the findings obtained with JNJ16259685 in the present study. In SD rats, I found that JNJ16259685 has no statistically significant effect on the contrast response function of RGCs. TPMPA also has no statistically significant effect on the contrast response function of non-saturating SD rat RGCs. The lack of effect of JNJ16259685 and TPMPA on contrast response functions could be explained by postulating that under my experimental conditions there is very little stimulation of GABA C or mGlu1 receptors. However, I found that TPMPA decreases the responses of saturating SD rat RGCs to low (6% to 13%) grating contrasts and increases the response to the highest contrast (83%) tested. In the presence of TPMPA, the shape of the contrast response function begins to resemble that of non-saturating RGCs. It is unclear at the present time why blocking GABA C receptors would preferentially affect saturating SD rat RGCs and decrease the responses of these cells to low contrast stimuli. Further studies will be needed to address this. In conclusion, the results suggest that either a GABA C R antagonist or a mGluR1 antagonist may improve contrast sensitivity in patients with retinitis pigmentosa and possibly other retinal diseases in which there is photoreceptor degeneration with concomitant remodeling of cells within the inner retina.
S1 Dataset. This dataset contains the data points summarized in the figures. Data for each figure are presented on separate sheets. (XLSX)
Comparative Analysis of the Efficacy of Spinal Cord Stimulation and Traditional Debridement Care in the Treatment of Ischemic Diabetic Foot Ulcers: A Retrospective Cohort Study | d8a5c334-6044-4f93-89c9-3ba920b672e0 | 11219160 | Debridement[mh] | Patients Inclusion Criteria Exclusion Criteria Ethics Approval Interventions Clinical Evaluation Statistical Methods Two hundred six patients with ischemic DFU who underwent treatment with SCS or TDC between January 2018 and June 2022 were included in this study. To reduce data bias, 2 physicians used a blinding method to collect patient information and repeated the measurements multiple times to reduce random errors.
The inclusion criteria were (1) patients aged 18 years or older with a confirmed diagnosis of ischemic DFU; (2) an ulcerated but not amputated foot, stable glycemic control (glycated hemoglobin of 6.5%-10%), and blood pressure of less than 150/90 mm Hg; (3) a wound that could not heal without conventional or systemic treatment; and (4) written provision of the follow-up consent form and consent for surgical treatment.
The exclusion criteria were (1) patients with gangrenous ulcers of the whole foot and above the foot, (2) patients with serious underlying diseases other than DFU such as malignant cancer and cardiovascular and cerebrovascular diseases, (3) patients who had contraindications for surgery or inability to tolerate surgical treatment, and (4) the patient or patient's guardian refused to participate in the clinical study.
The study was approved by the Institutional Review Board. All patients gave informed consent in writing. All procedures were conducted in accordance with the principles in the Declaration of Helsinki.
Before SCS and debridement care, we typically perform a thorough systematic risk assessment of the patient, including electrocardiogram, cardiac function, computed tomography of the chest and head, and a range of blood and urine tests. We also ask smokers to stop smoking for 2 weeks and use antimicrobials prophylactically in the perioperative period. In the TDC group, after the inclusion of patients, a DFU specialist performed standard debridement of foot ulcers daily, including the debridement of infected and necrotic tissues, until the wounds were clean.
In the SCS group, during the first stage of electrode implantation, electrodes (SPECIFY2*8; Medtronic) were placed in the epidural space of segments T10-T12 under local anesthesia with the patient in the prone position. Intraoperative macroscopic stimulation was performed (voltage, 0.5 V; pulse width, 180–240 μs; frequency, 40 Hz). A temporary external stimulator was connected after satisfactory implantation. In the second stage, an implantable pulse generator was implanted 1 week later. DFUs were treated postoperatively with standard debridement by a specialist.
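For readers implementing record-keeping around such protocols, the following is a purely illustrative sketch that captures the reported intraoperative stimulation settings as a typed configuration; the class and field names are assumptions and do not correspond to any device API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StimulationSettings:
    """Purely illustrative record of SCS test-stimulation parameters."""
    voltage_v: float
    pulse_width_us: tuple  # (min, max) pulse width in microseconds
    frequency_hz: int

    def __post_init__(self):
        lo, hi = self.pulse_width_us
        if lo > hi:
            raise ValueError("pulse width range is inverted")

# Settings reported in the text: 0.5 V, 180-240 us, 40 Hz
INTRAOP_SETTINGS = StimulationSettings(voltage_v=0.5, pulse_width_us=(180, 240), frequency_hz=40)
print(INTRAOP_SETTINGS)
```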
The primary end point was a comparison of amputations between the SCS and TDC groups. The secondary end points of this study were clinical hemodynamic test results, including transcutaneous oxygen pressure (PtcO 2 ), ankle-brachial index (ABI), and lower limb arterial ultrasound. Amputations were classified as minor or major, with minor amputation being below the level of the ankle joint and major amputation being above the level of the ankle joint.
Percutaneous PtcO 2 is a noninvasive measurement of tissue perfusion that reflects the microvascular status in the skin. Studies have demonstrated that it is strongly correlated with the healing of a DFU. The dorsal PtcO 2 of both feet was measured under the same conditions, and their average was calculated.
The ABI is the most common test for the diagnosis of peripheral vascular disease. Patients were placed in the supine position. The ratio of the systolic blood pressure at the ankle arteries of both feet to that at the brachial artery was measured in triplicate, and the average value was calculated.
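A minimal sketch of the ABI computation described above, with each pressure taken as the mean of triplicate systolic readings; the readings shown are hypothetical, not patient data.

```python
from statistics import mean

def ankle_brachial_index(ankle_systolic_mmHg, brachial_systolic_mmHg):
    """ABI = mean ankle systolic pressure / mean brachial systolic pressure."""
    return mean(ankle_systolic_mmHg) / mean(brachial_systolic_mmHg)

# Hypothetical triplicate readings (mm Hg)
abi = ankle_brachial_index([78, 82, 80], [134, 138, 136])
print(f"ABI = {abi:.2f}")  # ~0.59, in the range of the baseline values reported below
```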
Lower limb arterial ultrasound is one of the most common noninvasive vascular tests used in the clinic to determine the degree of stenosis in the lumen of the major lower limb arteries. Smaller diameters indicate more severe stenosis.
SPSS 22.0 software (IBM) was used for data analysis, and GraphPad Prism (GraphPad Software, Inc.) was used for plotting. Continuous variables are expressed as the mean ± SD. Categorical variables are expressed as percentages. The paired t test was used to compare changes in variables between and within groups before treatment and at 6 months and 12 months after treatment. Differences with P < .05 were considered statistically significant. Two-sided confidence intervals were provided for the outcome measures of interest to assess the difference between baseline and follow-up. Comparisons of amputations between the SCS and TDC groups were expressed as odds ratios (OR, 95% CI), with OR < 1 indicating that SCS was protective against amputation compared with TDC.
One hundred sixty patients participated in the 6-month follow-up, and 141 patients participated in the 12-month follow-up. The baseline characteristics of the patient population and the follow-up process are presented in Table and Figure . At the 6-month follow-up, there were 5 (5.95%) minor and 4 (4.76%) major amputations in the SCS group and 7 (8.14%) minor and 11 (12.79%) major amputations in the TDC group; the OR for minor amputations between the 2 groups was 0.71 (95% CI, 0.22-2.35), the OR for major amputations was 0.34 (95% CI, 0.10-1.12), and the OR for total amputations was 0.45 (95% CI, 0.19-1.08). At 12 months, there were 7 (8.97%) minor and 6 (7.69%) major amputations in the SCS group and 10 (15.87%) minor and 24 (38.10%) major amputations in the TDC group; the OR for minor amputations between the 2 groups was 0.52 (95% CI, 0.19-1.46), the OR for major amputations was 0.13 (95% CI, 0.05-0.36), and the OR for total amputations was 0.17 (95% CI, 0.08-0.37) (Table ).
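As a worked check, the reported odds ratios can be reproduced from the amputation counts above with a standard 2x2 calculation and a log-scale Wald 95% CI; the group sizes used here (78 SCS and 63 TDC at 12 months) are inferred from the reported percentages and are therefore assumptions.

```python
from math import exp, log, sqrt

def odds_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Odds ratio of group A vs group B with a log-scale Wald 95% CI."""
    a, b = events_a, n_a - events_a          # group A: events / non-events
    c, d = events_b, n_b - events_b          # group B: events / non-events
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, (exp(log(or_) - z * se), exp(log(or_) + z * se))

# 12-month total amputations: 13 of 78 (SCS) vs 34 of 63 (TDC), denominators inferred
print(odds_ratio_ci(13, 78, 34, 63))  # ~0.17 (0.08-0.37), matching the reported OR
```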
In the SCS group, the diameters of the femoral, popliteal, posterior tibial, and dorsalis pedis arteries were 7.34 ± 0.55, 4.56 ± 0.41, 1.85 ± 0.28, and 1.75 ± 0.33 mm, respectively, before the operation; 7.72 ± 0.50, 4.98 ± 0.46, 2.11 ± 0.33, and 1.92 ± 0.29 mm, respectively, at 6 months postoperation; and 7.79 ± 0.48, 5.03 ± 0.41, 2.16 ± 0.32, and 1.93 ± 0.27 mm, respectively, at 12 months postoperation. In the TDC group, the corresponding diameters were 7.41 ± 0.71, 4.56 ± 0.42, 1.86 ± 0.26, and 1.86 ± 0.39 mm, respectively, before the operation; 6.80 ± 0.72, 4.27 ± 0.37, 1.77 ± 0.27, and 1.77 ± 1.10 mm, respectively, at 6 months postoperation; and 6.41 ± 0.80, 3.94 ± 0.45, 1.73 ± 0.27, and 1.31 ± 0.28 mm, respectively, at 12 months postoperation (Figure ).
In the SCS group, the PtcO 2 was 38.02 ± 13.52 mm Hg before the operation, 51.27 ± 14.98 mm Hg at 6 months postoperation, and 51.80 ± 14.87 mm Hg at 12 months postoperation. In the TDC group, the PtcO 2 was 39.18 ± 13.12 mm Hg before the operation, 29.34 ± 12.74 mm Hg at 6 months postoperation, and 22.34 ± 11.03 mm Hg at 12 months postoperation (Figure A– C).
In the SCS group, the ABI was 0.58 ± 0.23 before operation, 0.86 ± 0.23 at 6 months postoperation, and 0.84 ± 0.23 at 12 months postoperation. In the TDC group, the ABI was 0.59 ± 0.25 before operation, 0.52 ± 0.22 at 6 months postoperation, and 0.48 ± 0.24 at 12 months postoperation (Figure D– F).
The purpose of this study was to investigate the effectiveness of SCS in the treatment of ischemic diabetic foot ulcers. The results of this study suggest a significant advantage of SCS over TDC in the treatment of patients with ischemic DFUs. This conclusion is supported by the primary and secondary end points at 6 months and 12 months postoperation. This study demonstrated significant improvements in lower limb PtcO 2 , ABI, and arterial ultrasound in patients as early as 6 months after SCS, with statistically significant differences maintained up to 12 months postoperation. These results demonstrated the robustness of SCS in improving the hemodynamics of major arteries and the microcirculation in the lower limbs. By contrast, no improvement occurred in the hemodynamics of the lower limbs in the TDC group. For the primary end point, SCS significantly averted the risk of amputation for DFU, and the difference became more pronounced over time, with an OR of 0.17 (95% CI, 0.08-0.37) between the SCS and TDC groups at 12 months. Decreased muscle sympathetic nerve activity is a proposed potential mechanism of action of SCS in improving circulation. Studies in animal models of SCS have also demonstrated that blood flow in the limbs can increase independently of sympathetic inhibition, depending on the different stimulation parameters used. Thus, decreased muscle sympathetic nerve activity is a factor in the SCS-mediated increase in blood flow to the lower limbs. Studies have shown that SCS can improve the healing of ischemic DFUs by modulating the vascular smooth muscle of the lower limbs and stimulating vasodilation to increase blood flow to the lower limbs, while alleviating clinical symptoms caused by DFUs by reducing the degeneration of peripheral nerves and blocking pain conduction, thus further contributing to the healing of DFUs. In this study, an SCS cohort and a TDC cohort were retrospectively analyzed. In addition to changes in the PtcO 2 , ABI, and arterial ultrasound of each group before the operation and at 6 months and 12 months postoperation, the differences in the therapeutic efficacy of the 2 ischemic DFU treatments were compared and analyzed at 6 months and 12 months postoperation. Finally, the rates of major and minor amputation with the different treatments were determined. The data showed that dorsal foot PtcO 2 and ABI were significantly improved at 6 months after SCS treatment, the diameters of the lower limb arteries were significantly increased, and the hemodynamics of the lower limbs were restored, thus promoting the healing of the ulcers (Figure ) and effectively reducing the amputation rate. By contrast, the condition of the patients in the TDC group deteriorated over time, which was consistent with previous data showing a 40% amputation rate of the diabetic foot. This study demonstrated that SCS promotes vasodilation of the lower limbs and promotes the formation of new collateral capillaries, thereby improving local microcirculation and reducing the rate of amputation due to ischemic ulcers secondary to vascular stenosis. In clinical practice, individual patient outcomes are very important. We provide the results for each patient implanted with the SCS system from our clinical trial (Figure C and F). This information provides physicians and their patients with a more direct understanding of the potential outcomes of selected treatments of ischemic DFUs. 
Regarding future directions, first, at the clinical level, we aim to conduct a larger prospective clinical trial with an extended follow-up period to validate the long-term results of SCS and TDC in improving ischemic DFUs of the lower limb. Second, at the experimental level, detailed animal studies will be conducted to fully understand the mechanism of action of SCS in improving blood circulation in the lower limbs and the cellular and molecular mechanisms by which SCS promotes ulcer healing.
To the best of our knowledge, this study is the first to evaluate the clinical value of SCS in improving lower limb blood flow in patients with ischemic DFUs. Most previous studies have focused on qualitative improvement of lower limb pain and quality of life after SCS. The design of this study included SCS and TDC groups and the evaluation of objective data such as changes in the lower limb PtcO 2 , ABI, and diameter of major lower limb arteries in the 2 groups at different time points, as well as the amputation rate. These results provide clinicians with better-quality, individualized options for patients with ischemic DFUs.
This study had some limitations. It was not a prospective randomized controlled trial, the follow-up period of only 12 months was short, the number of cases was small, only a single stimulation pattern (voltage, 0.5 V; pulse width, 180-240 μs; frequency, 40 Hz) was used, and no other stimulation patterns or waveforms were evaluated. Nevertheless, the findings of this study suggested that SCS is a potentially effective means of treating ischemic DFUs.
Compared with TDC, SCS significantly increased PtcO2, improved the ABI, and dilated the lower limb arteries in patients with ischemic DFU, ultimately reducing the risk of amputation.
|
A prospective, multicenter analysis of the integrated 31-gene expression profile test for sentinel lymph node biopsy (i31-GEP for SLNB) test demonstrates reduced number of unnecessary SLNBs in patients with cutaneous melanoma | d0754986-a97d-4368-9134-3594544e3381 | 11697456 | Biopsy[mh] | In patients with cutaneous melanoma (CM), sentinel lymph node biopsy (SLNB) provides prognostic information regarding the risk of recurrence and patient survival, but the procedure does not improve survival outcomes . Current National Comprehensive Cancer Network guidelines recommend foregoing SLNB when the likelihood of finding a positive SLN is less than 5% (T1a tumors with no other high-risk features), discussing and considering SLNB when the likelihood is 5-10% (T1a with at least one high-risk feature [T1aHR] or T1b tumors), and offering an SLNB when the likelihood is above 10% (T2-T4 tumors). However, the overall SLNB positivity rate is just 12% , and among patients with T1 tumors, 92–95% will have a negative node . Further, studies have found that 11% of patients undergoing SLNB will have a complication, suggesting that patients with T1 tumors may be more likely to have a complication from the procedure than to have a positive node . In addition, SLNB can cost approximately $25,000, representing a substantial cost to patients and the healthcare system . Thus, a tool to help clinicians select patients most likely to have a negative SLNB who may consider safely foregoing the procedure could significantly reduce the number of unnecessary surgical procedures, improving patient care and decreasing healthcare costs. The 31-gene expression profile (31-GEP) molecular risk stratification test for cutaneous melanoma is validated to provide a risk of tumor recurrence and the likelihood of having a positive SLNB as low (Class 1 A), intermediate (Class 1B/2A), or high (Class 2B) [ – ]. Vetto et al. demonstrated that the 31-GEP identified a group of patients with < 5% risk of SLN positivity who could forego the procedure (Class 1 A, T1-T2, ≥ 55 years old) , which was recently validated in the prospective study by Yamamoto et al. To refine SLNB prediction further, Whitman et al. used a neural network algorithm to integrate the 31-GEP continuous score with Breslow thickness, ulceration, mitotic rate, and age to provide a more precise and accurate likelihood of having a positive SLN (i31-GEP for SLNB), and was validated in an independent cohort of 1,674 patients from 30 sites . In the cohort by Whitman et al., the i31-GEP for SLNB had a high NPV (97.4%) and sensitivity (89.8%) in T1-T2 tumors . Importantly, however, because most patients in the cohort were tested with the 31-GEP before 2019, when the SLNB utility of the 31-GEP test had not been demonstrated, likely, this cohort did not use the 31-GEP for SLNB decision-making at that time. In this prospective, multicenter study, we assessed the accuracy of the i31-GEP for SLNB in predicting SLN positivity among patients with T1–T2 tumors, for whom SLNB guidance would be most impactful.
Patients enrolled in the prospective, multicenter DecisionDx-Melanoma Impact on Sentinel Lymph Node Biopsy Decisions and Clinical Outcomes (DECIDE) study with T1-T2 tumors who were being considered for an SLNB and had all necessary information to analyze using the i31-GEP for SLNB (i.e., 31-GEP continuous score, Breslow thickness, mitotic rate, age, and ulceration) were included in this report ( n = 322; enrolled March 2021–March 2023). The DECIDE study design and an analysis of the 31-GEP Class results have been previously reported . Briefly, patients diagnosed with T1a–T2b tumors for whom SLNB was being considered and who had the 31-GEP test ordered clinically were included. At visit one, patients who met inclusion criteria provided informed consent and were enrolled in the study. After reviewing all clinical data with the patient, including the i31-GEP test results, the decision to perform or avoid an SLNB was made with the patient. Post-treatment, the clinician recorded whether an SLNB was performed and which factors influenced the SLNB decision. Institutional review board approval was obtained from WCG-IRB and additionally at each participating institution where required by the institution . We performed a 1:1 propensity score match using the nearest neighbor glm method (R, MatchIt package, version 4.5.4). Matching compared patients in the DECIDE study, for whom the 31-GEP was considered in SLNB decision-making, with a separate cohort of patients for whom 31-GEP results were not included in SLNB decisions . Patients in the current study ( n = 322) were matched to a non-overlapping cohort of patients included in Whitman et al., treated at primarily surgical centers, for whom the 31-GEP was not used as part of the clinical SLNB decision-making process ( n = 322 for 1:1 matching out of 1,239 in the total Whitman cohort), making it an ideal comparison cohort for the current study . Matching variables included T-stage (Breslow thickness and ulceration status), age, and mitotic rate.
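The matching itself was performed in R with the MatchIt package, as stated above; for illustration only, the following Python sketch shows the same idea of greedy 1:1 nearest-neighbor matching on a logistic-regression propensity score. The column names (a binary treatment indicator and numeric covariates) are placeholders, not the study's variable names.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_1to1(df: pd.DataFrame, treat_col: str, covariates: list):
    """Greedy 1:1 nearest-neighbor matching on a logistic propensity score,
    without replacement. Returns (treated_index, control_index) pairs."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treat_col] == 1]
    controls = df[df[treat_col] == 0].copy()
    pairs = []
    for idx, row in treated.iterrows():
        if controls.empty:
            break
        j = (controls["pscore"] - row["pscore"]).abs().idxmin()
        pairs.append((idx, j))
        controls = controls.drop(index=j)   # each control used at most once
    return pairs

# Hypothetical usage:
# match_1to1(cohort_df, "gep_guided", ["breslow", "ulceration", "age", "mitotic_rate"])
```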
Patient demographics are reported in Table . One hundred fifty-eight patients were female (49.1%), and the median age was 63 (range 20–89). Most tumors were T1 ( n = 262, 81.4%), and the remaining were T2 ( n = 60, 18.6%). The median Breslow thickness was 0.8 mm (range 0.2–2.0 mm). SLNB was performed in 140 patients (43.5%), with a positivity rate of 6.4% (9/140). Propensity matching demonstrated a significant 18.5% reduction of SLNBs performed (43.7% vs. 62.2%, p < 0.001) in the current study compared to the comparison cohort for whom the 31-GEP was not used as part of the SLNB decision-making process (Table ; Fig. ). Thirty-five patients (25.0%) with known SLN status were predicted to have a < 5% risk of SLN positivity by the i31-GEP for SLNB. Of these patients, 0% (0/35) had a positive SLN (T1a, 0/11; T1b, 0/19; T2a, 0/4; and T2b, 0/1). SLNB performance rates could have been reduced by 32.4% (11/34) in T1a tumors, 28.4% (19/67) in T1b tumors, 12.9% (4/31) in T2a tumors, and 12.5% (1/8) in T2b tumors, without missing a positive sentinel lymph node.
The present study expands on the initial DECIDE study results, reporting data from 322 prospectively enrolled patients evaluated using the integrated 31-GEP (i31-GEP) for SLNB. Notably, there were no positive SLNB results among patients predicted to have < 5% risk of positivity by the i31-GEP for SLNB, indicating that if these patients had foregone the SLNB procedure, they would not have been harmed. Indeed, a previous study found that the i31-GEP had a better true negative to false negative SLNB ratio (30:1) than using the standard NCCN risk threshold of 5%, which assumes a 19:1 true negative to false negative ratio (i.e., 1/20 positive SLNs would be missed if 20 SLNBs were avoided) . An additional study of outcomes in patients reported by Yamamoto et al. found no recurrences among those with a Class 1 A 31-GEP result who did not undergo SLNB . Moreover, incorporating the 31-GEP into clinical decision-making resulted in a significant reduction in the SLNB performance rate compared with a propensity-matched comparison cohort of patients from the Whitman et al. i31-GEP validation study for whom the 31-GEP was not used to guide SLNB decisions . Recent studies have not found statistically significant differences in melanoma-specific survival between SLNB versus observation in thin and intermediate-thickness tumors . The primary use of SLNB is as a staging procedure and to select patients for adjuvant therapy. However, in the current era of immunotherapy, patients with thick, ulcerated tumors who have a negative SLNB (stage IIB–IIC) are eligible to receive adjuvant treatment, and SLNB may play a lesser role in these patients . Meanwhile, studies have found that certain patients who are SLNB positive (stage IIIA) have similar MSS rates as those with a negative SLNB, making it less clear if these patients derive benefit from adjuvant treatments, which come at significant cost and carry risks of adverse events, a portion of which can be permanent and severe . Further, the SLNB procedure has an 11.3% complication rate, including seroma (5.1%) and infection (2.9%) , and studies have found that the use of SLNB among patients with T1b tumors increases the cost of care up to 10-fold, with costs of the procedure reaching more than $25,000 . In contrast, the 31-GEP test is performed on tumor tissue from the initial biopsy; therefore, there are no additional procedures or associated risks. Further, health economic modeling suggests that incorporating 31-GEP guidance into the SLNB decision provides savings to healthcare payors . Thus, there remains a critical need for methods beyond current staging that identify patients who may or may not benefit from SLNB, and by integrating the 31-GEP with clinicopathologic factors, patients who may not benefit from SLNB can be identified to forego the procedure safely. In addition to SLNB decisions, the 31-GEP has been demonstrated in prospective, long-term follow-up studies to provide accurate prognostic information about the risk of recurrence, metastasis, and mortality for patients with stage I–III cutaneous melanoma [ , , ]. Critically, studies have shown that incorporating the 31-GEP into clinical use can aid clinicians in finding tumor recurrence earlier while at a lower tumor burden, ultimately improving patient outcomes, and that patients tested with the 31-GEP had a lower risk of melanoma-specific and overall mortality relative to patients without 31-GEP testing . 
Integrating the 31-GEP risk score with clinicopathologic features to stratify patients by their individual risk of recurrence (ROR), metastasis, or death (i31-GEP for ROR) may offer an additional, comprehensive tool to guide shared decision-making by the clinician and patient. Online nomograms to assess SLN positivity risk have been developed, but none incorporate molecular tumor information, and their utility is limited by a lack of necessary information and confidence intervals that can fall outside of clinical decision ranges. A previous study by Freeman et al. found that just 24% of patients could be analyzed by the Melanoma Institute Australia nomogram in their study using the National Cancer Database due to missing information, which is consistent with our data ( data not shown ) . Multiple recent studies have demonstrated that using nomograms to select patients for SLNB does not add net benefit and may be doing net harm [ – ]. In one study, among patients considered to have < 5% risk by the Melanoma Institute Australia nomogram, the actual positivity rate was 13.7%, demonstrating an underestimation of risk using clinicopathologic variables that could result in patient harm by missing patients who should undergo SLNB . The present study has some limitations, including the number of patients with SLNB results available; however, this was expected, given that the study’s primary objective is to incorporate molecular, clinical, and pathological information into SLNB decision-making to reduce unnecessary SLNBs. All patients in the study were tested with the i31-GEP; thus, there was not a separate prospectively enrolled cohort for comparing SLNB procedure rates. Additionally, the study allowed physicians and patients to choose whether patients received an SLNB, with patient preference as the greatest influence, potentially introducing variability into clinical decision-making. However, this mirrors real-world SLNB decision-making across multiple US centers, where multiple factors and shared decisions between clinician and patient are integrated into clinical decision-making. In contrast, the study’s prospective nature minimizes potential bias and is a major strength. In summary, the current study confirms the performance and clinical utility of the i31-GEP for predicting SLN positivity and that no patients predicted to be at low risk of SLN positivity by the i31-GEP (< 5% risk) had a positive node, further evidence that 31-GEP-guided SLNB decisions do not harm patients with T1-T2 cutaneous melanoma. These results indicate the i31-GEP can improve risk-aligned patient care and demonstrate a significantly reduced SLNB performance rate when the 31-GEP is incorporated into clinical decision-making.
|
Reduced graphene oxide loaded with tetrahedral framework nucleic acids for combating orthodontically induced root resorption | bb827d99-99c9-4723-abd0-c72dbbf2f647 | 11559230 | Dentistry[mh] | Tooth movement (TM) is a mechanics-guided remodeling process in biology, characterized by a local aseptic inflammatory response involving the tooth root, alveolar bone, and periodontal ligaments (PDLs). Currently, with the improvement of public aesthetic awareness, numerous people undergo orthodontic treatment to improve occlusal function and coordinate facial aesthetics. However, patients are prone to orthodontically induced root resorption (OIRR) during and even after treatment because of the complexity of oral and systemic conditions, the ambiguity of PDL remodeling, and the inefficiency of TM. As one of the common complications of orthodontic force, OIRR is mainly characterized by tooth tissue destruction, second only to enamel leukoplakia. It not only restricts tooth function but also compromises the stability of orthodontic outcomes, thereby severely deteriorating patients' quality of life. Previous studies have shown that approximately 90% of patients undergoing orthodontic treatment experience a certain degree of external apical root resorption, with the incidence of severe OIRR (exceeding 4 mm) reaching up to 14%. However, owing to the complexity and variability of orthodontic biomechanical systems and high individual differences in PDLs, there is hardly any curative strategy to impede OIRR. Therefore, effective therapies to alleviate or even reverse the pathological changes of OIRR, and thus to achieve efficient, stable, and healthy orthodontic treatment, urgently need to be explored. Recently, a remarkable breakthrough has been made in advancing nanoscale microstructures from two dimensions (2D) to three dimensions (3D). Research on new nanostructured biomaterials for various diseases has also attracted much attention. Typically, tetrahedral framework nucleic acids (tFNAs) are strategically positioned, owing to their low cytotoxicity, biocompatibility, and membrane penetration, in the advancement of cutting-edge fields such as immunotherapy and nano-regenerative medicine. Growing evidence suggests that highly complex bone remodeling plays an indispensable role in the regulation of root resorption. Conversely, dynamic changes in bone are inextricably accompanied by root resorption. Bsp −/− mice exhibit impaired bone healing and extensive tooth root resorption. The skeletal defects observed in Bsp −/− mice were associated with a lack of acellular cementum, which played a central role in periodontal breakdown and exacerbated pathological changes in periodontal tissue. Root resorption caused by heavy force with severe alveolar bone resorption was closely related to increased levels of Th17 cells and IL-6. Notably, the tFNA/Pue complex (TPC) restored osteogenic dysfunction and attenuated the apoptosis of BMSCs in a high-glucocorticoid microenvironment via the Akt/Bcl-2 and hedgehog pathways. Furthermore, some studies reported that tFNAs facilitated the proliferation of odontogenic stem cells such as dental pulp stem cells, and possessed anti-inflammatory and anti-oxidation properties via the inhibition of MAPK signaling in periodontitis. In particular, they ameliorated alveolar bone resorption. Advances have been made in applying nanoscale tFNAs to bone regeneration; however, there are few reports on whether tFNAs affect OIRR.
To date, tFNAs have mostly been applied directly to lesions, exerting therapeutic effects through diffusion. Nevertheless, the complex microenvironment around the tooth root, frequent multi-directional mechanical intervention during tooth movement, and continuous turnover of gingival crevicular fluid increase the risk of tFNAs being cleared, which becomes a bottleneck for their application. Recently, the combination of nucleic acids with nanomaterials, such as graphene-based nanomaterials, has advanced substantially, achieving remarkable functionality and showcasing attractive potential applications. Zhang et al. constructed a highly sequence-specific biosensor based on graphene oxide nanosheet nanocomposites, which was used to detect specific nucleic acid sequences of SARS-CoV-2 via aggregation-induced emission luminogen (AIEgen)-labeled oligonucleotide probes. As a 2D nanomaterial composed of sp2-hybridized carbon atoms, graphene oxide (GO) shows increasingly broad application prospects in diverse domains of biomedicine. Reduced graphene oxide (rGO) is an important nano-sized derivative of GO. Compared with GO, the surface of rGO has fewer oxygen-containing functional groups, resulting in a significant improvement in solubility and an exceptionally active surface. Previous studies have shown that rGO is easy to functionalize or modify. On the one hand, rGO itself serves as a highly promising biomaterial for disease screening and treatment; on the other hand, it can easily accommodate bioactive molecules such as nucleic acid fragments to improve delivery efficiency. Hao et al. fabricated a gelatin-alginate rGO hydrogel combined with platelet-derived extracellular vesicles and investigated its possible use as a diabetic wound dressing; it modulated immune responses and angiogenesis in the wounds of diabetic rats and accelerated the healing of chronic wounds. Although extensive studies have begun to clarify the biological regulation and therapeutic mechanisms related to rGO, whether the rich functional groups of nano-sized rGO can carry tFNAs to reduce their risk of early clearance, achieve sustained release of tFNAs, and ultimately improve OIRR remains to be elucidated. The process of mechanical force-driven root resorption is accompanied by periapical bone remodeling, which relies on the organized and orderly coordination of various cells in the PDL, such as periodontal ligament cells (PDLCs), osteoclasts, and immune cells. PDLCs have been identified as a mesenchymal stem cell reservoir with superior potential to differentiate into osteoblasts in diverse physiological and pathological activities around the roots, including OIRR. Besides, as mechanically sensitive effector cells, they directly sense mechanical stimuli in the compressed PDL. Growing evidence indicates that PDLCs facilitate the process of OIRR by producing IL-1β to maintain the inflammatory microenvironment and to upregulate osteoclastic activity. However, to date, little is known regarding whether PDLCs are involved in the ability of rGO delivery complexes carrying tFNAs to alleviate OIRR. Meanwhile, collateral damage caused by phagocytic clearance of local necrotic tissue often occurs during the continuous root resorption process. Macrophages are also mechanically sensitive immune cells, and it has been suggested that their status is a vital link in the occurrence of OIRR.
Under force stimulation, the activation of NLRP3 inflammasomes in M1 macrophages by PDLCs aggravates root resorption. He et al. revealed that the balance of macrophages affected OIRR. Further studies have shown that the CXCL12/CXCR4 signaling axis mediated OIRR via upregulation of the M1/M2 ratio in the PDL. A multifunctional tFNAs system enhanced the crosstalk between Schwann cells (SCs) and macrophages, amplified the ability of SCs to recruit macrophages, and facilitated the formation of the pro-healing M2 phenotype for nerve regeneration. In Wang's study, a pioneering DNA nano-drug (TSOs) was developed to knock down the expression of SPAK and OSR1; it effectively inhibited the switch of macrophages from the M0 to the M1 phenotype and prevented the onset of hydrocephalus. So far, the functions of macrophages in the improvement of OIRR by the tFNAs-rGO delivery complex and the underlying molecular regulation are poorly understood. Thus, in this study, to clarify whether and how the nanoscale tFNAs-rGO delivery complex affects root resorption, we used an OIRR model to explore the dynamic changes of root morphology and periapical tissue, osteoclastic activity, and macrophage phenotype in the PDL. Furthermore, we established differentiation-induction and polarization-induction microenvironments in vitro to mechanistically validate the enhancement of PDLC osteogenesis and the facilitation of M2 macrophage polarization in ameliorating OIRR. Our findings provide further insight into the roles of PDLC and macrophage populations in the regulation of root and periodontium reconstruction and offer a promising strategy for the dual-functional nano tFNAs-rGO delivery complex in root resorption therapy.
The synthesis of tFNAs was performed as described in previous studies. The four well-defined single-stranded DNAs (S1, S2, S3, S4) (1 OD DNA) in four tubes were centrifuged so that they deposited at the tube bottom and were then dissolved. The four DNA strands were mixed in equimolar quantities (500 nM, 2 µL) in TM buffer (10 mmol/L Tris-HCl, 50 mmol/L MgCl 2 , pH = 8.0), and tFNAs were synthesized by complementary base pairing: the mixture was denatured at 95 °C for 10 min and rapidly cooled to 4 °C for 20 min. All DNA sequences are listed in Table . The successful formation of tFNAs was confirmed by agarose gel electrophoresis. The surface properties of tFNAs were characterized by atomic force microscopy (AFM) using an SPM-9700 instrument (Shimadzu, Kyoto, Japan). Transmission electron microscopy (TEM) was used to observe the morphological structure of tFNAs. Additionally, the zeta potential of the sample was evaluated by dynamic light scattering (DLS) (Nano ZS, Malvern, England).
Following Kang's research, GO was reduced to rGO using vitamin C (VC) as the reducing agent. Briefly, graphene oxide (GO) nanosheets (XFNANO, China) were uniformly dispersed in distilled water under ultrasonication, and 10 mg of VC was mixed into 10 mL of the GO nanosheet suspension (0.1 mg/mL). The reduction process was carried out at 95 °C. Residual impurities were removed by rinsing, and the rGO was vacuum-dried at 60 °C and collected. Using self-assembly technology, 100 µL of tFNAs (500 nM) was mixed with 100 µL of rGO (0.1 mg/mL) in phosphate-buffered saline (PBS) and reacted on a fully automatic rotating mixer at 4 °C for 48 h to synthesize the tFNAs-rGO delivery complex. Scanning electron microscopy (SEM) was used to observe the morphological characteristics of rGO and tFNAs-rGO. Zeta potential was evaluated by DLS. Elemental composition, molecular properties, and crystal structure were analyzed by X-ray photoelectron spectroscopy (XPS, Thermo Kalpha) and X-ray diffraction (XRD, X'Pert PRO MPD).
Periodontal ligament cells were isolated from human periodontal membrane tissue as described previously. The cells were routinely cultured in α-MEM medium (Gibco) supplemented with 10% fetal bovine serum (FBS, BioInd, Kibbutz, Israel) and 1% penicillin-streptomycin at 37 °C in a humidified incubator with 5% CO 2 and 95% humidity. When the cells grew to 70% confluence, they were passaged with 0.25% trypsin-EDTA (Gibco). PDLCs at passages 3–6 were used. RAW264.7 cells (Shanghai Institutes for Biological Sciences of the Chinese Academy of Sciences, China) were routinely cultured in high-glucose DMEM medium (Gibco) containing 10% FBS and 1% penicillin-streptomycin at 37 °C in a humidified incubator with 5% CO 2 and 95% humidity. PDLCs and RAW264.7 cells were plated in 96-well plates at 3500 cells per well and cultured with medium containing 1 nM, 10 nM, 50 nM, or 100 nM rGO or tFNAs-rGO for 48 h. After the treatments, the PDLCs and RAW264.7 cells were incubated in 100 µL of culture medium supplemented with 10 µL of CCK-8 reagent (MCE) at 37 °C for 2.5 h to test cell viability. The absorbance was measured at 450 nm using a microplate reader (Synergy H1; BioTek).
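The text reports absorbance at 450 nm but does not spell out the normalization; the following sketch applies a commonly used blank-subtracted viability formula, which is an assumption here, with hypothetical optical density values.

```python
from statistics import mean

def relative_viability_pct(sample_od, control_od, blank_od):
    """Viability of treated wells relative to untreated controls (percent)."""
    return 100 * (mean(sample_od) - mean(blank_od)) / (mean(control_od) - mean(blank_od))

# Hypothetical 450 nm optical densities for one treatment concentration
print(relative_viability_pct(sample_od=[0.91, 0.89, 0.93],
                             control_od=[0.95, 0.92, 0.94],
                             blank_od=[0.10, 0.11, 0.10]))  # ~97%
```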
All procedures were approved by the ethics committee of Fujian Medical University under document number IACUC FJMU 2024-Y-0609. C57BL/6 mice (8 weeks old, male, weighing an average of 20 g) were purchased from Wushi Experimental Animal Limited Company (Fuzhou, China). All animals were bred and maintained under specific-pathogen-free (SPF) conditions on a 12-/12-h light/dark cycle, with free access to food and water. Animals were anaesthetized with pentobarbital sodium (100 mg/kg, injected intraperitoneally). According to previous research, a 40 g force was applied in the orthodontically induced root resorption (OIRR) model induced by excessive stress. One end of a nickel-titanium coil spring (wire size, 0.2 mm; diameter, 1 mm; Smart Technology) was ligated to the neck of the maxillary first molar with a 0.1 mm stainless steel wire, and the other end was ligated to the incisor neck (Fig. ). The nickel-titanium coil spring provides approximately 40 g of tension between the maxillary right first molar and the incisors. Soft food was fed for 2 days after the operation, and the loading devices were checked daily; if a device had fallen off, it was promptly reloaded. For experiments assessing the effect of rGO or tFNAs-rGO on OIRR, animals were randomly allocated to four groups: Sham, OIRR, OIRR + rGO, and OIRR + tFNAs-rGO. After local cleaning and drying, periodontal ligament injections were administered to the maxillary first molar using a microinjector. Mice in the rGO or tFNAs-rGO group received a periodontal ligament injection of 20 µL of rGO or tFNAs-rGO dissolved in PBS every other day. Animals in the Sham and OIRR groups were treated with an equal volume of PBS. Animals were sacrificed according to the schematic diagram after the different interventions for analyses (Fig. A).
Animals were sacrificed by anesthetic overdose, and the maxillae were separated, fixed in 4% paraformaldehyde solution for 48 h, and stored in 70% ethanol at 4 °C before being processed. The morphology of the maxillary molars and the distance of tooth movement were measured under a stereomicroscope (Nikon, SMZ18).
Micro-CT scanning of the maxillae of mice was performed with a µCT 80 system (filter Al 0.2 mm, 70 kV, 114 µA; SCANCO Medical AG, Switzerland) at a resolution of 5 μm to detect changes in the alveolar bone and tooth root. Three-dimensional (3D) images were reconstructed with Mimics and Geomagic Wrap 2017 for morphological assessment. Root resorption areas were represented in red. The built-in imaging software of the Micro-CT system was used for the analysis of resorption lacunae. With reference to the control side, resorption areas were filled in to analyze the resorption volume.
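For illustration, a resorption volume can be obtained from a filled-lacuna voxel mask at the stated 5 μm isotropic resolution as sketched below; the segmentation itself would come from the scanner's built-in software, and the demo mask is a stand-in.

```python
import numpy as np

VOXEL_UM = 5.0  # isotropic Micro-CT resolution stated in the text

def resorption_volume_mm3(lacuna_mask: np.ndarray, voxel_um: float = VOXEL_UM) -> float:
    """Volume (mm^3) of a boolean mask of filled resorption-lacuna voxels."""
    voxel_mm3 = (voxel_um / 1000.0) ** 3
    return float(lacuna_mask.sum()) * voxel_mm3

# Stand-in mask: a 10 x 10 x 10 voxel lacuna -> 1000 voxels -> 1.25e-4 mm^3
demo = np.zeros((50, 50, 50), dtype=bool)
demo[10:20, 10:20, 10:20] = True
print(resorption_volume_mm3(demo))
```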
The maxillae of mice were decalcified using 0.5 M EDTA for 3 weeks, dehydrated, embedded in paraffin, and sectioned at 5 μm for staining. According to the manufacturer's recommendations, the sections were stained with hematoxylin-eosin (HE) or tartrate-resistant acid phosphatase (TRAP). TRAP + cells on the root surface with three or more nuclei per cell were identified as odontoclasts. Root and alveolar bone resorption, as well as osteoclast activity on the root surface, were observed under the microscope. For immunofluorescence analysis, after antigen retrieval and blocking, frozen sections and fixed RAW264.7 cells were incubated at 4 °C overnight with primary antibodies against CD86 (ABclonal, 1:200), iNOS (ABclonal, 1:200), CD206 (ABclonal, 1:200), and CD68 (Santa Cruz, 1:50). The secondary antibodies included donkey anti-mouse Alexa Fluor 488/555/647 and donkey anti-rabbit Alexa Fluor 488/555/647 (all from Thermo Fisher Scientific). The nuclei were counterstained with DAPI (Sigma, D9542), and images of the samples were observed by confocal microscopy and analyzed with ImageJ software. Sections were blinded and scored by two experienced researchers, and the average scores were used in statistical analyses. For alkaline phosphatase (ALP) staining, PDLCs were cultured in osteogenesis-inducing medium with rGO or tFNAs-rGO; after 7 and 14 days of osteogenic induction, the cells were fixed and ALP staining was performed according to the instructions. For alizarin red staining, PDLCs were seeded in 6-well plates in osteogenesis-inducing medium with rGO or tFNAs-rGO. After 21 days of osteogenic induction, the cells were fixed. According to the instructions, alizarin red staining was performed (Beyotime Biotechnology), and the stained plates were photographed and inspected with a light microscope.
After the treatments, total RNA was extracted with Trizol (Takara, Tokyo, Japan). First-strand cDNA was prepared using HiScript II Q RT SuperMix for qPCR (Vazyme Biotech Co., China). qRT-PCR was performed using SYBR Premix Ex Taq II (Vazyme Biotech Co., China) in a CFX96 Real-Time System (Bio-Rad). GAPDH was used as the housekeeping control. The primer sequences are listed in Table . The relative levels of gene expression were normalized to GAPDH and calculated using the 2 −ΔΔCt method.
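A minimal sketch of the 2 −ΔΔCt calculation named above, with GAPDH as the housekeeping gene as in the text; the Ct values are hypothetical.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (reference gene = GAPDH here)."""
    dd_ct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target drops from 26.0 to 24.5 while GAPDH stays at 18.0
print(fold_change_ddct(24.5, 18.0, 26.0, 18.0))  # ~2.8-fold up-regulation
```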
PDLC or RAW264.7 lysates were extracted for 30 min at 4 °C using RIPA buffer (Beyotime, P0013B) containing phenylmethanesulfonyl fluoride (PMSF, Beyotime, ST505). Protein concentrations were determined with a BCA protein assay kit (Beyotime, #P0012). The samples were heated at 100 °C for 5 min in sample buffer, separated on 10% SDS-polyacrylamide gels, and transferred to PVDF membranes (Bio-Rad). The membranes were blocked with 5% BSA and incubated with primary antibody at 4 °C overnight. The membranes were washed in TBST solution and incubated with HRP-conjugated secondary antibodies. The antibody-antigen complexes were visualized with ECL reagents (Millipore, WBKLS0100). The following primary antibodies were applied: ALP (Cell Signaling Technology, 1:1000), BSP (Cell Signaling Technology, 1:1000), RUNX-2 (Cell Signaling Technology, 1:1000), OCN (Cell Signaling Technology, 1:1000), COL-1 (Cell Signaling Technology, 1:1000), CD86 (ABclonal, 1:1000), iNOS (ABclonal, 1:1000), CD206 (ABclonal, 1:1000), and GAPDH (ABclonal, 1:5000).
The effect of rGO or tFNAs-rGO on the migration of RAW264.7 was evaluated using transwell chambers with 8.0 μm pore size (Corning, USA). The lower chamber contained 500 µL of DMEM containing 0.1% FBS with rGO or tFNAs-rGO, and the upper chamber contained RAW264.7 in 200 µL of DMEM containing 0.1% FBS. The chambers were incubated for 48 h (37℃). The RAW264.7 that migrated to the lower chamber were fixed, stained with crystal violet, and counted in three randomly selected microscopic fields per chamber in a blinded manner.
Data are presented as the mean ± standard error of at least three independent experiments. Statistically significant differences were evaluated using two-tailed Student's t tests for comparisons between two groups or by one-way analysis of variance followed by Tukey's test for multiple comparisons. All statistical analyses were conducted using GraphPad Prism 8. P < 0.05 was considered statistically significant.
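The analyses were run in GraphPad Prism 8 as stated; the following Python sketch is only an illustrative analogue of the named tests (two-tailed t test, and one-way ANOVA followed by Tukey's test) on hypothetical three-group data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical normalized measurements for three groups
con = [1.00, 1.05, 0.96, 1.02]
rgo = [1.31, 1.24, 1.38, 1.29]
tfna_rgo = [1.62, 1.70, 1.58, 1.66]

print(stats.ttest_ind(con, rgo))           # two-tailed t test between two groups
print(stats.f_oneway(con, rgo, tfna_rgo))  # one-way ANOVA across the three groups

values = np.concatenate([con, rgo, tfna_rgo])
labels = ["Con"] * 4 + ["rGO"] * 4 + ["tFNAs-rGO"] * 4
print(pairwise_tukeyhsd(values, labels))   # Tukey multiple-comparison follow-up
```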
The successful synthesis of nano tFNAs is the foundation for building the delivery complex of reduced graphene oxide (rGO) carrying tetrahedral framework nucleic acids (tFNAs). First, we synthesized the four specifically designed, equal-length single-stranded DNAs (ssDNAs), which were easily assembled into stable tFNAs through an annealing process. To prove that we had successfully obtained tFNAs, some of their main characteristics, such as morphology and average potential, were examined. As illustrated by the polyacrylamide gel electrophoresis (PAGE) shown in Fig. A, the gel demonstrated the formation of S1, S2, S3, S4, and tFNAs, respectively. The migration rate of tFNAs was clearly slower than that of the other ssDNA strands. Additionally, tFNAs were effectively formed and displayed a clear, distinct band. The morphology of tFNAs was characterized by atomic force microscopy (AFM) and transmission electron microscopy (TEM). The nanoparticle-like framework morphology of tFNAs was about 10–20 nm in the dry state (Fig. B). In TEM imaging, the shape of the tFNAs could be recognized as circular or approximately triangular, well-dispersed particles (Fig. C). These results aligned with those of previous studies. Dynamic light scattering (DLS) tests were conducted to determine the zeta potential of tFNAs. Similarly, Fig. D showed that tFNAs were negatively charged, with an average zeta potential of approximately − 1.49 mV. Altogether, these results indicated that tFNAs were efficiently and successfully assembled. Second, we employed self-assembly technology to attach the synthesized tFNAs to reduced graphene oxide (rGO) to build the delivery complex under co-incubation at 4 °C. The morphology of the nano-sized tFNAs-rGO was analyzed by scanning electron microscopy (SEM) (Fig. A). The images revealed that rGO appeared as layered flakes with a smooth, flowing surface and a thickness of approximately 1–3 nm, and that tFNA nanoparticles were evenly distributed on the surface of the flake-like rGO structure in the delivery complex. Subsequent DLS analyses were performed to determine the zeta potentials of rGO and tFNAs-rGO. As shown in Fig. B, both were negatively charged, with values of − 3.62 mV and − 2.21 mV, respectively. To further verify the successful synthesis of tFNAs-rGO, X-ray photoelectron spectroscopy (XPS) was carried out to detect the composition and the chemical and electronic states of various elements in rGO and tFNAs-rGO (Fig. C). XPS analysis indicated differences in the chemical composition of rGO and tFNAs-rGO. Full spectra and peak fittings of the samples are shown in Fig. C. Compared with the rGO group, C–N bonds were present in the tFNAs-rGO group (Fig. C-a and C-b). Meanwhile, P–O and C = N bonds were also detected in the tFNAs-rGO group (Fig. C-c and C-d). Besides, according to the quantitative atomic analysis, the proportion of phosphorus in the tFNAs-rGO group (3.35%) was higher than that in the rGO group (0.71%), partly suggesting the effective introduction of tFNAs.
The loading of tFNAs on the surface of the rGO nanosheets was confirmed by powder XRD, as shown in Fig. D. The diffraction pattern of rGO was centered at 28.68° (0 0 2). In addition to the peak of rGO, another peak appeared at 42.5°, which indicated that tFNAs were successfully loaded onto the rGO nanosheets during the self-assembly process. In addition, CCK-8 experiments showed that rGO and tFNAs-rGO had no effects on the proliferation or survival of PDLCs and RAW264.7 cells at concentrations ranging from 1 to 100 nM (Fig. E), indicating that the tFNAs-rGO delivery complex was biocompatible and had low toxicity. Taken together, these findings indicated that tFNAs were loaded onto rGO and that the nanoscale tFNAs-rGO delivery complex was successfully built.
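As a worked check of the XRD result, Bragg's law converts the (0 0 2) reflection at 2θ = 28.68° into an interlayer spacing; Cu Kα radiation (λ = 1.5406 Å) is assumed here because the text does not state the X-ray source.

```python
from math import radians, sin

CU_K_ALPHA_A = 1.5406  # assumed wavelength (angstrom); the X-ray source is not stated

def d_spacing_angstrom(two_theta_deg, wavelength=CU_K_ALPHA_A, n=1):
    """Bragg's law: d = n * lambda / (2 * sin(theta))."""
    return n * wavelength / (2 * sin(radians(two_theta_deg / 2)))

print(f"d(002) = {d_spacing_angstrom(28.68):.2f} angstrom")  # ~3.11 A interlayer spacing
```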
To explore the effect of the tFNAs-rGO delivery complex on mechanical force-induced root resorption (Fig. A), we performed 3D reconstruction of the tooth roots using Micro-CT and examined the dynamic changes in the root resorption area upon force application and after applying tFNAs-rGO. The Micro-CT analysis on the 14th day showed that remarkable root resorption appeared after force application, which was consistent with previous research. However, after treatment with rGO and tFNAs-rGO, the root resorption volume decreased significantly (Fig. B, S2). Notably, compared with rGO, tFNAs-rGO exhibited enhanced therapeutic efficacy against OIRR. Moreover, we applied HE staining to evaluate the area of root resorption at the histological level. Consistently, as Fig. C shows, the root resorption area in the OIRR group was prominently increased, whereas the level of root resorption in the groups exposed to rGO or tFNAs-rGO was sharply reduced compared with that of the OIRR group. As expected, the root resorption area in the tFNAs-rGO group was lower than that in the rGO group. The osteoclast is a central player in bone remodeling, and knowledge of the mechanisms regulating osteoclast activity is the basis for understanding the large and complex network of root-bone remodeling. Therefore, the osteoclastogenic activity around the tooth roots was measured using TRAP staining. As seen in Fig. D, osteoclast activity was markedly upregulated and the number of osteoclasts gradually increased in the periapical tissue of OIRR mice. Nevertheless, rGO and tFNAs-rGO abrogated the OIRR-induced enhancement of osteoclast activity, and on both Day 7 and Day 14 the tFNAs-rGO group had the lowest number of osteoclasts among the three modeled groups. Collectively, these findings suggested that the tFNAs-rGO delivery complex relieved root resorption and downregulated periapical osteoclast activity.
To further elucidate the precise mechanism by which the tFNAs-rGO delivery complex ameliorates OIRR, we first examined the effect of tFNAs-rGO on the osteogenic differentiation ability of PDLCs. On the one hand, the effects of tFNAs-rGO were assessed by morphological staining. The ALP staining results showed that obvious blue precipitates appeared in the cytoplasm of PDLCs treated with rGO and tFNAs-rGO, and among the three groups, the number of positively blue-stained PDLCs was highest in the tFNAs-rGO group (Fig. A). In line with the ALP staining data, alizarin red staining assays also showed that, after 21 days of osteogenic induction, both the rGO group and the tFNAs-rGO group exhibited abundant dark red deposits compared with the Con group. Consistently, there were more red mineralized nodules with deeper staining in the tFNAs-rGO group than in the rGO group (Fig. B). On the other hand, we validated the enhancement of PDLC osteogenic differentiation by tFNAs-rGO at the molecular level using quantitative real-time PCR (qRT-PCR) and western blotting. As shown in Fig. C, after tFNAs-rGO was introduced into PDLCs, the mRNA levels of osteogenesis-related genes, such as Ocn , Opn , Osterix , and Runx-2 , were sharply elevated, and the Alp mRNA level was significantly higher in the tFNAs-rGO group than in the other two groups at 21 days ( P < 0.05 ). Moreover, compared with the rGO group, the tFNAs-rGO group showed a more pronounced improvement in the expression of osteogenic genes. According to the western blot results, tFNAs-rGO led to remarkable induction of osteogenic differentiation, as the protein expression of ALP, BSP, RUNX-2, and OCN was notably enhanced following the addition of tFNAs-rGO. Furthermore, compared with the rGO group, the protein levels of ALP, BSP, RUNX-2, and OCN were markedly upregulated in the tFNAs-rGO group (Fig. D). These data demonstrated that tFNAs-rGO boosted the osteogenic ability of PDLCs.
Existing studies have shown that macrophages are also implicated as important effector cells in OIRR. Mechanical force increased the M1/M2 macrophage ratio in the PDL, which participated in the occurrence and progression of OIRR. To further clarify whether the tFNAs-rGO delivery complex had an impact on the biological status of macrophages, we performed immunofluorescence staining to detect and quantify changes in macrophage polarization status in the periodontal ligament of mice after intervention. As shown in Fig. A and B, CD68 + iNOS + macrophages drastically accumulated, coupled with a decrease in the number of CD68 + CD163 + macrophages, after force application. However, these changes were blunted by in situ injection of rGO or tFNAs-rGO (Fig. A, B). Consequently, the results implied that the number of M1 pro-inflammatory macrophages was upregulated in the periodontal ligament of mice, while M2 anti-inflammatory macrophages were reduced during mechanically induced root resorption. It is worth noting that after treatment with rGO or tFNAs-rGO, the number of CD68 + iNOS + M1 macrophages decreased. Meanwhile, the number of CD68 + CD163 + M2 macrophages in the rGO or tFNAs-rGO group was significantly elevated compared with that in OIRR mice during force application. As expected, the blockade of M1 macrophage accumulation and the promotion of M2 macrophage recruitment were more distinct in the tFNAs-rGO group than in the rGO group (Fig. A, B). Next, we quantitatively analyzed the proportions of M1 and M2 macrophages in the periodontal ligament. OIRR led to a pronounced increase in the M1/M2 macrophage ratio, but treatment with tFNAs-rGO reversed the OIRR-triggered rise in the M1/M2 macrophage ratio (Fig. C). These results revealed part of the mechanism by which tFNAs-rGO alleviated OIRR, namely by reversing the OIRR-enhanced M1/M2 macrophage ratio in mice.
To further clarify the mechanism by which the tFNAs-rGO delivery complex affected the polarization state of macrophages, we assessed the expression of M1 and M2 markers in RAW264.7-derived macrophages after rGO or tFNAs-rGO stimulation using western blot and qRT-PCR analyses in vitro. Specifically, in line with the in vivo data, the in vitro assays also showed that RAW264.7 cells stimulated with LPS differentiated into M1-like pro-inflammatory macrophages, as evidenced by increased mRNA levels of Tnf-α and Inos (Fig. A). However, the raised levels of Tnf-α and Inos were rescued by rGO or tFNAs-rGO application (Fig. A). Additionally, rGO and tFNAs-rGO still obviously downregulated the expression of Tnf-α , Inos , and Il-1β when LPS was not introduced into the culture system (Fig. A). These results indicated that tFNAs-rGO could effectively reduce the expression of inflammatory factors in macrophages stimulated by LPS. Compared with the Con group and the IL-4 group, the expression of M2-like anti-inflammatory macrophage-related genes in RAW264.7 cells, such as Arg-1 , Tgf-β , and Cd206 , sharply increased in the IL-4 + tFNAs-rGO group. Besides, there were higher mRNA levels of Tgf-β and Cd206 in RAW264.7 cells of the IL-4 + tFNAs-rGO group compared with the IL-4 + rGO group (Fig. B). Meanwhile, compared with the Con group, the expression of M2 macrophage polarization-related genes, such as Cd206 and Tgf-β but not Arg-1 , strikingly increased in the tFNAs-rGO group (Fig. B). Western blotting was used to detect changes in RAW264.7 cells at the protein level. As presented in Fig. D, the upregulated expression of iNOS and CD68 in LPS-treated RAW264.7 cells was markedly reversed after rGO or tFNAs-rGO administration in vitro. Conversely, CD163 and CD206 expression in the tFNAs-rGO group was significantly higher than that in the IL-4 group ( P < 0.05 ). Moreover, there was a greater increase in the protein levels of CD163 and CD206, markers of M2 macrophages, in the tFNAs-rGO group than in the rGO group (Fig. D). To further verify the postulation that tFNAs-rGO prevented M1 macrophage differentiation and induced M2 macrophage polarization, immunofluorescence staining was utilized. Consistently, tFNAs-rGO treatment was observed to significantly decrease the number of iNOS + M1 macrophages in the LPS + tFNAs-rGO group compared with that in the LPS group (Fig. E). Most notably, compared with rGO, tFNAs-rGO exhibited a more prominent inhibition of the formation of iNOS + M1 macrophages. Furthermore, tFNAs-rGO boosted the number of CD163 + M2 macrophages induced by IL-4, whereas there was no significant enhancement of the CD163 + M2 macrophage number in the rGO group compared with the IL-4 group (Fig. F). The administration of tFNAs-rGO induced a reduction of iNOS + or CD86 + M1 macrophages but a rise of CD163 + or CD206 + M2 macrophages (Fig. C, D). We performed transwell migration assays to test the chemotactic ability of rGO or tFNAs-rGO to attract macrophages in vitro. Interestingly, tFNAs-rGO or rGO significantly increased the number of RAW264.7 cells in the lower chamber. In addition, the promoting effect on the migration of RAW264.7 cells was more pronounced in the tFNAs-rGO group than in the rGO group (Fig. E). Collectively, these findings identified that tFNAs-rGO affected macrophage status via the suppression of M1 and the promotion of M2 polarization, contributing to the remission of orthodontically induced root resorption.
To investigate whether the tFNAs-rGO delivery complex had toxic effects on animals during application (Fig. A, S1), we employed HE staining to evaluate changes in the cellular morphology and tissue structure of organs such as the liver, kidney, and heart. HE staining showed that the hepatic lobular structures could be clearly seen in all four groups, with liver cells arranged radially around the central vein. In addition, hepatocytes were intact with abundant cytoplasm, and the nuclei were round, centrally located, and not significantly enlarged (Fig. A). As shown in Fig. B, the size and morphology of the glomeruli were normal with clear boundaries in the OIRR + rGO and OIRR + tFNAs-rGO groups. Similarly, the brush border inside the renal tubules was intact without significant expansion after rGO or tFNAs-rGO treatment. There were large numbers of proximal tubules composed of cuboidal epithelial cells with basally located nuclei, as well as regular distal tubules. Compared with the Sham group, there was no obvious infiltration of inflammatory cells in the renal interstitium in the OIRR + rGO and OIRR + tFNAs-rGO groups. In all groups, the muscle fibers were slender and cylindrical, connected to each other to form a network in the longitudinal section, and short cylindrical or polygonal with similar sizes in the transverse section. There was a small amount of connective tissue and capillaries between the muscle fibers. These data imply that rGO and tFNAs-rGO had no significant organ toxicity in OIRR mice. To further clarify the effects of the rGO and tFNAs-rGO delivery complexes on the efficiency of tooth movement induced by orthodontic force, we measured and analyzed the distance of tooth movement in the different groups using a stereomicroscope and Micro-CT (Fig. B). Compared with the Sham group, the OIRR group showed a significant increase in tooth movement distance. Notably, the distance of tooth movement did not change significantly with the application of rGO or tFNAs-rGO on the 7th and 14th days. These data indicate that tFNAs-rGO had no effect on the efficiency of orthodontic tooth movement in vivo.
The advantages of nanoscale systems in disease therapy lie in their desirable characteristics, including nanoscale size, excellent biocompatibility, targeted applicability, and controlled release. By using an in vivo mechanical force-induced root resorption model and an in vitro osteogenic induction microenvironment or cell polarization system, our study uncovers the pivotal roles and mechanisms of the dual-functional tFNAs-rGO delivery complex at the nanoscale in alleviating orthodontically induced root resorption (OIRR) and in modulating PDLC and macrophage function during root tissue reconstruction (Fig. ). Root resorption is the pathological loss of dental hard tissue and is triggered by odontoclastic action. It may appear on the outer aspect of the root or within the root canal; however, regardless of its location, it is irreversible and may result in a decline in chewing function for patients and, in severe cases, may even cause tooth loss . Clinically, root resorption is a long-standing issue because of the ambiguity and complexity of its etiology. Previous studies demonstrated that canal filling techniques and regenerative endodontic procedures were able to arrest the resorption, reduce the size of the resorptive area, and ultimately improve the longevity and restorability of teeth with internal root resorption . However, it is frustrating to note that there are almost no effective treatment strategies for external surface root resorption. External surface root resorption is usually an adverse outcome of orthodontic tooth movement . It has been reported that the incidence of orthodontically induced root resorption (OIRR) is extremely high in orthodontic patients . To date, root resorption can only be slowed by appropriately eliminating etiological factors, for example by suspending orthodontic treatment. Therefore, there is an urgent need to explore new strategies for treating OIRR. In recent years, nanoscale materials have been applied in the diagnosis and treatment of various diseases, including in stomatology . A mitochondrial complex biomimetic nanozyme (MCBN), with biological properties similar to natural complexes and nanoscale size, demonstrated an anti-inflammatory effect and relieved alveolar bone loss by targeting mitochondrial electron transport chain homeostasis in macrophages to inhibit activation of the NLRP3 inflammasome and the NF-κB pathway . Recent evidence has shown that nanoscale topographic modification of titanium substrates plays a critical role in osseointegration . Data from Chauhan's study also suggested faster osseointegration with a strong bone–implant interface on simvastatin-loaded titanium nanotube implant surfaces . These data thus provide a hint concerning the 'nanoscale-based' working pattern of biomaterials in alleviating OIRR. Here, we successfully built a delivery complex, tFNAs-rGO, consisting of a 2D reduced graphene oxide (rGO) nanosheet loaded with 3D nanoscale tetrahedral framework nucleic acids (tFNAs) (Fig. ). The synthesis of tFNAs-rGO using self-assembly technology is simpler and highly reproducible, with a yield of approximately 90%, compared with other materials . Most notably, the complex is relatively stable because the DNA tetrahedral structure is not easily degraded.
Moreover, compared with other delivery platforms, rGO nanosheets not only have a greater tFNAs-carrying capacity owing to their active surface properties, but their superior mechanical properties also make the delivery complex less susceptible to damage or removal under mechanical force during tooth movement . In Table S3, we compared and summarized the characteristics of the tFNAs-rGO complex and previously published biomaterials. First, we validated the successful synthesis of negatively charged tetrahedral tFNAs through experiments such as AGE, AFM, and TEM, and tFNAs were uniformly loaded onto rGO nanosheets. Cell viability experiments in vitro then confirmed that the tFNAs-rGO delivery complex had no significant biological toxicity. Moreover, the results from HE staining implied that the tFNAs-rGO delivery complex did not influence the cellular morphology and tissue structure of organs. The tFNAs-rGO delivery complex significantly reduced the region of root resorption and downregulated osteoclast activity in the peri-root tissue. Interestingly, we observed that the complex did not decrease the efficiency of tooth movement under stereomicroscopy and 3D reconstruction of Micro-CT. These results suggest that the tFNAs-rGO delivery complex can alleviate OIRR without affecting tooth movement. This means that, after translation and application of the tFNAs-rGO delivery complex, it may well solve problems faced by orthodontists, such as having to suspend orthodontic treatment when root resorption occurs . Certainly, further work is needed to verify the feasibility and fully reveal the mechanism by which the tFNAs-rGO delivery complex may ameliorate OIRR. Previous studies demonstrated that root resorption such as orthodontically induced root resorption (OIRR) is a complex and orchestrated pathological process underpinned by non-physiological cellular activation and close crosstalk among numerous cell types . So far, numerous in vitro and in vivo studies have delved into the molecular mechanisms of OIRR. Abnormalities of clastic cell adhesion, fusion, and activation exacerbate root resorption through specific pathways such as RANKL/RANK/OPG signaling . Cementum has a reparative capacity for mineralized tissue and is commonly regarded as an anti-resorptive barrier because of its deficient mineral-remodeling process . Using proteomics, Salman found that superoxide dismutase 3 (SOD3), a "defense factor" against oxidative stress in cementum maintenance, is strongly expressed in cementocytes and cementoblasts . Notably, SOD3 was immunolocalized around cementocytes in the apical third and in cervical root cementoblasts, regions that are sensitive to root resorption. Based on recent reports, root resorption is unlikely to be driven by a single molecular pathway; nor is it dependent on a single cell population. Several effector cells participate in the root resorption process, specifically PDLCs, odontoclasts, osteoclasts, and macrophages . A recent study identified that enhanced Wnt5a-Ror2 signaling in PDLCs promoted the release of RANKL and partly led to upregulation of odontoclast activity and final root resorption . Chen's lab reported that orthodontic force-induced BMAL1 in PDLCs was closely involved in controlling osteoclastic activities in alveolar bone remodeling .
Here we showed that the tFNAs-rGO delivery complex boosted the osteogenic capacity of PDLCs under in vitro osteogenic induction conditions. Furthermore, compared with rGO, the tFNAs-rGO delivery complex exhibited a stronger promoting effect on osteogenic differentiation of PDLCs, as illustrated by its more pronounced promotion of mineralized nodule formation and upregulation of osteogenesis-related molecules. This is consistent with Elhakim's results, in which icariin demonstrated potent pro-osteogenic properties that promoted functional healing and reduced resorption of denuded roots in a rat molar replantation model . Therefore, we speculate that the tFNAs-rGO delivery complex may abate OIRR partly by promoting osteogenic differentiation of PDLCs around the roots. The critical influence of macrophages in root resorption has been widely recognized . Macrophages are mainly divided into M1 and M2 types. M1-type macrophages are pro-inflammatory, release various inflammatory mediators, and drive osteoclastogenesis, whereas M2-type macrophages are an anti-inflammatory subpopulation that supports bone deposition and mineralization . Notably, our data also revealed that the tFNAs-rGO delivery complex reversed the OIRR-enhanced M1/M2 macrophage ratio in mice and suppressed M1 and promoted M2 polarization in vitro. Meanwhile, our study showed that tFNAs-rGO markedly promoted the migration of RAW264.7 cells in the transwell migration assay. Fang et al. proposed that inflammatory monocytes home in a CXCL12/CXCR4 axis-dependent manner and undergo M1 polarization in orthodontic root resorption, processes that were blunted by intraperitoneal injection of the CXCR4 antagonist AMD3100 . Based on these results, we hypothesized that the tFNAs-rGO delivery complex may facilitate the migration of macrophages to the site of root resorption, where they then polarize toward the protective M2 phenotype. Concurrently, tFNAs-rGO could also inhibit the polarization of macrophages in the root resorption area towards the pro-inflammatory M1 type, ultimately alleviating OIRR. Altogether, we outlined a scheme for PDLC- and macrophage-mediated remission of orthodontically induced root resorption by the dual-functional tFNAs-rGO delivery complex. There are several recognized limitations to our study. First, we have demonstrated that the tFNAs-rGO delivery complex exerted dual functions on the osteogenic differentiation of PDLCs and on M2 phenotype polarization. However, we have not yet fully revealed the specific mechanism by which the tFNAs-rGO delivery complex affects these two cell types, or whether there is crosstalk between PDLCs and macrophages during OIRR treatment. Single-cell profiling will help to uncover the coordinated cell–cell interactions and the underlying regulatory pathways; this will be a vital direction of future study. Second, there is a great unmet medical need for a nanoscale biological reagent to combat OIRR. Although we have preliminarily confirmed the feasibility of applying the tFNAs-rGO delivery complex to treat OIRR, this is just the first step toward the translation of our study into clinical application. Further investigation of the appropriate tFNAs-rGO dosage and administration is needed using large-animal models.
In conclusion, we successfully synthesized a promising nanoscale delivery complex, the dual-functional tFNAs-rGO, which was applied to alleviate orthodontically induced root resorption by regulating PDLCs and macrophages (Fig. ). The prepared tFNAs were well distributed on the surface of the rGO nanosheets. The dual-functional tFNAs-rGO delivery complex promoted the osteogenic differentiation of PDLCs and switched macrophage polarity to the anti-inflammatory M2 phenotype. This study is the first to introduce nanosheets loaded with 3D nucleic acids for OIRR therapy. Our findings provide further insight into a new tFNAs upgrading strategy for maintaining biological function and suggest a potential therapeutic candidate for the treatment of OIRR.
Supplementary Material 1
Evaluation of immunohistochemical expression of stem cell markers (NANOG and CD133) in normal, hyperplastic, and malignant endometrium | 352b98a4-682f-4265-9114-7cfc8bf83770 | 8852636 | Anatomy[mh] | Endometrial carcinoma is the most frequent malignancy of the female genital tract in developed western countries . Worldwide, it is the second leading cause of gynecological malignancy . Previous studies suggested the presence of tumor stem-like cells to influence the major processes of tumor progression . These cells can induce and self-renew the tumor and express several genes with pluripotent features . Therefore, these cells were named cancer stem cells identified in many different organ cancers and regarded as a potential cause for recurrence, metastasis, and resistance to different therapeutic modalities like hormonal, radiotherapy, and chemotherapy . Several markers have been identified in cancer stem cells like CD133, CD 166, CD44, CD40, and NANOG . NANOG is an important stem cell transcription factor that participates in normal cell development and tumorigenesis . NANOG regulates embryonic and fetal development and has a crucial role in the preimplantation development phase, with a progressive decrease during embryonic stem cell differentiation. After birth, a limited number of human tissues show a low level of expression in some cells in organs like the testis, ovary, and uterine glands, but most of the tissue is undetectable . Re-expression of NANOG has been detected during carcinogenesis. Many studies identified that NANOG expression is already present in precancerous lesions, with rising levels in high-grade dysplasia. Therefore, it can be used as a diagnostic marker, distinguishing between true dysplasia and reactive lesions . NANOG enables cancer cells to acquire stem-cell-like properties like self-renewal and immortality, leading to growth expansion, tumor maintenance, metastasis formation, and tumor relapse. Cancer showing high NANOG expression is usually associated with high grade, advanced stage, worse overall survival, and resistance to treatment . CD133 is a 97 kDa pentaspan transmembrane glycoprotein. Its function in normal tissue and its role in carcinogenesis remains elusive. Its localization in microvilli and membrane protrusion suggests its role in membrane organization. Subcellular localization of CD133 allows it to connect with lipid rafts involved in the signaling cascade . Studies on both normal cells and cancer stem cells demonstrated that CD133 expression is dependent on the cell cycle . Recently, CD133 has been used as a marker for stem cell identification in several tissues like haematopoetic, brain, and prostate. It has been suggested that CD133 can be used to identify cancer stem cells in variable solid tumors like tumors of the prostate, brain, ovary, colon, liver, and ovary. CSCs with high expression of CD133 show high ability in self-renewing and have a high potential to proliferate to make tumors histologically similar to solid parent tumors after transplanting them in immune-deficient mice . This study aimed ( ) to investigate the immunohistochemical expression of cancer stem cell markers NANOG and CD133 in endometrial hyperplasia and endometrial carcinoma and their correlation with different clinicopathological parameters and ( ) to compare the expression of these markers in normal endometrial tissue and apparently normal tissue around the tumor.
This retrospective study was conducted on tissue samples from 93 female patients admitted to the obstetrics and gynecology department who had undergone hysterectomy for the treatment of endometrial cancer or leiomyoma, or dilation and curettage (D&C) for abnormal uterine bleeding. Samples Immunohistochemistry Scoring of immunohistochemical staining Statistical analysis Data analysis was performed using SPSS (version 24). Variables were described using percentages, means, and ranges. Qualitative variables were statistically analyzed using the Pearson Chi-square test and the Fisher exact test to assess the significance of the relationship between clinicopathological characteristics and NANOG and CD133 expression. A p-value of less than 0.05 was considered statistically significant.
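As an illustration of the kind of contingency-table testing described here, the following minimal sketch reproduces the same style of analysis in Python rather than SPSS; the 2×2 counts are hypothetical and are not data from this study.

```python
# Illustrative only: the study used SPSS v24; this shows equivalent tests in Python.
# The 2x2 table below is hypothetical (rows: high vs low/absent marker expression;
# columns: e.g., deep vs superficial myometrial invasion).
from scipy.stats import chi2_contingency, fisher_exact

table = [[15, 5],
         [5, 18]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # Fisher exact test, useful when expected counts are small

print(f"Pearson chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")
print("Significant at the 0.05 level" if p_chi2 < 0.05 else "Not significant at the 0.05 level")
```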
A total of 93 archival formalin-fixed paraffin-embedded tissue blocks were retrieved from AL-Yarmouk hospital from January 2018 until June 2020. The clinical information was collected from the available records. All hematoxylin and eosin-stained slides were reviewed to confirm the initial diagnosis and determine the tumor type, grade, depth of myometrial invasion, and lymphovascular invasion (LVI). All cases showed endometrioid histology. The control group consisted of 20 normal proliferative phase endometrium tissue samples collected from patients who underwent hysterectomy for leiomyoma. The tissue specimens were divided into three groups. The normal endometrium group included 20 patients aged 25–47 years. The second group, endometrial hyperplasia, included 30 patients aged 30–52 years diagnosed with abnormal uterine bleeding; 18 samples showed simple hyperplasia without atypia, while 12 cases demonstrated complex hyperplasia with atypia. The third group, endometrial cancer, included 43 patients aged 44–57 years: 23 were grade I, 13 were grade II, and 7 were grade III. The mass size was less than or equal to 4 cm in 25 cases and more than 4 cm in 18 cases. Regarding the depth of myometrial invasion, 23 cases were superficial (less than 50%), and deep invasion (50% or more) was found in 20 cases.
For each case, 3 consecutive sections were obtained at a thickness of 4 μm. One slide was stained with H&E to check and confirm the diagnosis, while the other 2 sections were placed on positively charged glass slides and stained according to the standard staining protocol. The primary antibodies were an anti-CD133 antibody (Abnova, Entrez GeneID 8842, Code PAB12663, rabbit antihuman polyclonal antibody) and an anti-NANOG antibody (Abnova, clone 60CT77.1.1, Catalog no. MAB12279, mouse antihuman monoclonal antibody). The dilution was 1:200 for CD133 and 1:50 for NANOG using antibody diluent solution (Abcam ® USA, code ab64211). Secondary detection kits for CD133 (Abcam ® USA, code ab64261, rabbit-specific HRP/DAB) and NANOG (Abcam ® USA, code ab80436, mouse-specific HRP/DAB) were then used, based on the labeled streptavidin–biotin technique.
Anti-CD133 expression appeared as membranous and/or cytoplasmic brown staining, while anti-NANOG expression showed distinct nuclear brown staining. Marker immunostaining was scored using the extent of staining (proportion or percentage of stained cells) and the staining intensity. CD133 scoring: We calculated the extent of positively stained epithelial cells and classified it on a 4-point scale as follows: 0 = no staining, 1 = 1–10%, 2 = 11–25%, 3 = 26–50%, and 4 = 51–100%. Staining intensity was categorized into three groups: weak (+1), moderate (+2), and strong (+3). The final score was calculated by multiplying the extent score by the staining intensity. The combined immunohistochemical score ranged from 0 to 12. A cut-off point of 10 was used to segregate cases with weak to moderate expression from cases with strong immunoexpression. The final categorization divided cases into 3 groups: 0 (absent), immunohistochemical score (IHS) ≤10, and IHS >10 (strong) . Regarding NANOG, which shows predominantly nuclear expression, the staining intensity of positive cells was classified into 4 scores: 0 (no staining), 1 (weak staining), 2 (moderate staining), 3 (strong staining). The percentage of positive tumor cells was also scored: 0 = none of the tumor cells, 1 = 1–50% positive tumor cells, 2 = 51–100% positive tumor cells. The total score was then obtained with the formula (percentage score × staining intensity) and ranged from 0 to 6. The results were classified into low (0–3) and high (4–6) .
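To show how these scoring rules combine arithmetically, here is a minimal, hypothetical sketch in Python; the function names and example inputs are ours for illustration and are not part of the study protocol.

```python
# Minimal sketch of the scoring rules described above (illustrative only).

def cd133_category(extent_score: int, intensity: int) -> str:
    """extent_score: 0-4 (percentage bands); intensity: 0-3 (none to strong)."""
    ihs = extent_score * intensity            # combined immunohistochemical score, 0-12
    if ihs == 0:
        return "absent"
    return "strong (IHS > 10)" if ihs > 10 else "weak to moderate (IHS <= 10)"

def nanog_category(percent_score: int, intensity: int) -> str:
    """percent_score: 0-2 (0%, 1-50%, 51-100% positive cells); intensity: 0-3."""
    total = percent_score * intensity         # total score, 0-6
    return "high (4-6)" if total >= 4 else "low (0-3)"

# Example: 26-50% stained cells (extent 3) at strong intensity (3) -> IHS 9 -> weak to moderate
print(cd133_category(3, 3), "|", nanog_category(2, 3))
```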
NANOG expression was detected in 3 out of 20 (15%) samples of normal proliferative endometrium, all with low expression (as shown in ). This expression increased in cases with hyperplasia; NANOG was positive in 18 out of 30 (60%). In hyperplasia without atypia, 8 out of 18 (44.44%) were positive (6 with low expression and only 2 cases with high expression), while in hyperplasia with atypia 10 out of 12 were positive (2 low and 8 high), as shown in . NANOG expression was higher in hyperplastic conditions with atypia than in those without atypia, and the difference was statistically significant (P-value 0.005). NANOG expression was detected in 38 out of 43 (88.37%) endometrial carcinoma cases (as shown in ). summarizes the results. High NANOG expression was significantly correlated with high grade, deep myometrial invasion, lymph node metastasis, and high stage (p values 0.009, 0.005, 0.014, and 0.003, respectively) ( ). Regarding CD133, only one case (5%) of normal endometrium was positive for CD133, with low expression. Normal endometrium showed less expression of CD133 than hyperplasia and endometrial carcinoma, with a highly statistically significant difference (p less than 0.0001). Hyperplastic cases with atypia expressed CD133 more often than those without atypia (6 out of 12 versus 3 out of 18); the staining in atypical endometrial hyperplasia is demonstrated in . However, this difference was not statistically significant (p-value 0.111) ( ). CD133 was positive in 33 out of 43 (76.74%) endometrial carcinoma cases, as shown in and . It showed a significant correlation with deep myometrial invasion, positive lymph nodes, positive lymphovascular invasion, and high stage (p-values 0.003, 0.001, 0.003, and 0.013, respectively) ( ).
Although derived from a single clone, tumors consist of a heterogeneous cell population . Recently, cancer stem cell (CSC) theory has postulated that a small subset of cells possesses unique characteristics such as self-renewal, tumor initiation and maintenance, and multi-lineage capacity. Some of these cells are responsible for resistance to different cancer treatment modalities such as chemotherapy and radiotherapy . Accumulating evidence indicates that these CSCs can be detected in different tumors using a large number of markers such as CD44, CD117, CD55, CD133, and NANOG . The present work examined the expression of two CSC markers, NANOG and CD133, in 43 endometrial carcinoma cases and compared them with 20 normal and 30 hyperplastic samples. The expression of these 2 markers was analyzed to identify the impact of NANOG and CD133 on the behavior of endometrial tumors and their carcinogenesis. Our study revealed that NANOG was expressed in 88.37% of endometrial carcinoma cases; this expression was higher than that in the normal and hyperplastic groups (15% and 60%, respectively). Moreover, expression in hyperplasia with atypia (83.33%) was higher than in hyperplasia without atypia (44.44%). The difference was statistically highly significant (p-value less than 0.0001). Interestingly, the expression of NANOG in apparently normal tissue around the tumor was higher than in normal proliferative endometrium, although the difference was not statistically significant. Other studies have also shown higher expression of NANOG in tumor samples than in normal tissue, for example in oral squamous cell carcinoma, salivary gland (mucoepidermoid) carcinoma, and glioma . This result agrees with other studies demonstrating that NANOG is expressed in various precancerous lesions such as laryngeal dysplasia, oral dysplasia, cervical intraepithelial neoplasia, gastric dysplasia, and colonic adenoma . These results suggest the possibility of using NANOG expression as a diagnostic marker to distinguish dysplasia from reactive atypia. In our study, NANOG expression significantly correlated with bad prognostic signs such as high grade, deep myometrial invasion, positive lymph nodes, and high stage. This implies that NANOG affects endometrial carcinoma oncogenesis, especially in well-differentiated tumors at early stages, and its overexpression may facilitate earlier diagnosis of endometrial carcinoma. Similar results were obtained in studies of different tumors such as breast ductal carcinoma , colorectal carcinoma , and other solid tumors of the ovary, lung, kidney, esophagus, stomach, pancreas, and liver . These studies have linked high expression of NANOG with poorly differentiated tumors, advanced stage, and poor survival. Furthermore, other studies correlate NANOG expression with resistance to chemotherapy and radiotherapy. This is supported by experimental studies showing that inhibition of NANOG leads to inhibition of tumor initiation, suggesting a role of NANOG in cancer development . NANOG was found to be expressed in endometrial carcinoma. Transcription factors such as OCT4, transcription factor 3 (TCF3), and SOX2, which regulate NANOG expression, have been found in endometrial CSCs and are related to the potential ability for self-renewal . CD133 was expressed in 76.74% of endometrial carcinomas compared with only 5% of normal tissue, 10 out of 18 cases of hyperplasia without atypia, and 10 out of 12 cases of hyperplasia with atypia. Shorky et al.
found that CD133 expression in normal and hyperplastic endometrium was nearly the same percentage as in our study . High expression of CD133 was positively correlated with bad prognostic signs such as deep myometrial invasion, positive lymph node metastasis, lymphovascular invasion, and high stage. Our study agrees with other studies which found that CD133 expression is highly associated with lymph node involvement and directly associated with tumor grade and tumor depth in gastric adenocarcinoma . This study also agrees with another study showing that CD133 is directly associated with tumor stage, lymph node involvement, and distant metastasis in colorectal carcinoma . Moreover, CD133 expression was correlated with bad prognostic factors such as capsular invasion, lymph node involvement, and high stage in medullary thyroid carcinoma . These results agree with Maeda et al. , who found that high CD133 expression in pancreatic tumors was associated with lymph node metastasis .
The cancer stem cell markers NANOG and CD133 are expressed in a high percentage of endometrial carcinomas compared with normal and hyperplastic endometrium, and their expression is positively correlated with aggressive tumor behavior. Furthermore, high expression of these two markers was noted in apparently normal tissue around the tumor and in hyperplastic conditions with atypia, suggesting the possibility of using these markers as diagnostic markers to distinguish dysplasia from reactive atypia. Therefore, inhibition of these markers could be a promising approach to halt the progression of early cancers.
Conflict of interest The authors declare no conflict of interest. Ethical approval Consent to participate Written informed consent was obtained from all participants in the study. Personal thanks Authorship MMA collected the samples and the archival material, performed data acquisition, and diagnosed the slides. KN participated in the immunohistochemistry staining. A-JAR performed the statistical analysis. All authors contributed to the study design, collection of references, slide scoring, writing of the original draft, critical revision and editing of the manuscript, and technical and financial support.
The study was approved by the Ethical Committee of Mustansiriyah University in cooperation with Al-Yarmouk teaching Hospital (approval number 150/17 from 20-10-2017).
The authors would like to thank Mustansiriyah University (www.uomustansiriyah.edu.iq) Baghdad, Iraq, for support in publishing the present work.
Conceptualization of patient‐centered care in Latin America: A scoping review | 0f39b8e3-1a39-4592-a0ff-ba4d8d8cff54 | 10485332 | Patient-Centered Care[mh] | INTRODUCTION In an effort to improve population health, the global community has worked towards the development and advancement of health care systems around the world. Health outcomes have globally improved, leading to an aging population over the past decades. An aging population brings about novel challenges to health care systems, for example, increasing prevalence of chronic noncommunicable diseases. These developments were complemented by a retraction from the paternalistic approach to health care and the emergence of alternative concepts as patient‐centered care (PCC). In a paternalistic health care setting, the health care professional (HCP) is an authority who applies objective criteria to determine the treatment plan and informs the patient about the chosen intervention. PCC proposes a shift towards balanced power in the relationship between HCP and patient, towards patient empowerment, active participation of the patient in the health care process, as well as a focus on individual patient needs, values, and preferences. , Arguments in favor of PCC are of ethical, moral, and scientific nature. To treat all patients equally, respectfully, and recognize their autonomy are standards of medical ethics and promoted by PCC. It is emphasized that essentials to health care are, among others, cultural appropriateness, provision of information, recognition of individual circumstances and needs, and access to care without discrimination. These standards are supposed to decrease inequalities in access to health care. Research suggests an association between aspects of PCC and positive patient outcomes, for example, health status, treatment adherence, costs, health behavior, social support, quality of medical decisions, and self‐rated health. Thus, diverse lines of argumentation suggest PCC to be a desirable process and outcome in health care. The increasing number of scientific publications on PCC has brought about diverse definitions of PCC in the international literature. Scholl et al. saw a need for a coherent conceptualization of PCC which would provide common ground for future scientific and health policy work on PCC. To address this need, Scholl et al. developed the integrative model of patient‐centeredness (henceforth “integrative model”) by a systematic synthesis of diverse definitions of PCC described in the international literature, mainly from North America and Europe, but none from Latin America. The model proposes 15 dimensions of PCC (Supporting Information: Appendix ) and has since been used in research on PCC, for example, in the development of a patient‐reported experience measure of PCC and came to close a gap in the international conceptualization of PCC. , , Research on and implementation of PCC have not been uniform around the world. PCC has been widely described and investigated in the global north. In contrast, in regions where accessibility to health care and social inequalities remain an issue, as in Latin America, there has been comparably little research on PCC. The socioeconomic, political, and economic structures of Latin American countries are diverse. After the end of colonialization, military dictatorships undermining human rights were implemented in many countries, which lead to socioeconomic and health inequalities in Latin America. 
Social movements achieved the restatement of civilian rule in some countries. These political changes as well as economic growth were precursors for health system reforms that have been implemented in Latin American countries to achieve universal health coverage and decrease poverty over the past decades. For example, in Chile, health system reforms have led to a health coverage of about 95%. However, health systems in many Latin American countries constitute a mixture of the public and a private sector, which promotes health inequalities and could enhance the continuation of a paternalistic style in health care. In 2018, a survey conducted by the Organization for Economic Co‐operation and Development (OECD) indicated that the spread and degree of health care coverage are less uniform in 21 Latin American countries in comparison to other OECD countries. With regard to PCC, in 2003, the Pan American Health Organization declared strategies to implement the principles of “equity, solidarity, and the right to the highest possible standard of health” in Latin American health care systems. In line with this, access to care has successfully been improved in Mexico by the introduction of a program, which provides affordable health care to uninsured individuals. Another example is Chile, where PCC has been declared as one of the fundamental principles of the health system in 2006. Thus, health policymakers in Latin America have recognized the need for PCC and claimed the intention to establish PCC in routine care. , , In 2016, the OECD implemented a Latin America and the Caribbean Network of Health Systems to “identify effective policies to ensure the financial sustainability of health systems” (OECD‐LAC Regional Policy Networks). Latin American research on PCC shows little coherence in the conceptualization of PCC. For example, Guanais et al. conducted a secondary analysis of a public opinion survey on the health care system which had been conducted in six Latin American countries. They chose the following variables as being related to PCC for analysis: contact with primary care clinic (access), time spent with HCP, patient‐HCP communication, technical quality and problem solving, and health care coordination. In contrast, in another analysis of patient‐reported experience with health care in four Latin American countries, variables that were considered to be associated with PCC were easy access, coordinated care, good HCP–patient communication, provision of health‐related information and education, and emotional support. The difference between variables considered to be associated with PCC in the two studies represents variations in the conceptualization of PCC in Latin American research. Moreover, it is unclear how the concept of PCC has evolved in Latin America. As Scholl et al. have recognized before, a clear concept describing PCC is necessary to compare research results and to implement PCC. Despite the advances in health care and research on PCC, researchers from Chile have shown that thorough implementation of PCC is still missing. Patients reported a lack of opportunities for active participation in medical decision‐making in primary care and a disbalance in the distribution of power between HCPs and patients. Moreover, patient satisfaction with public health care significantly decreased from 2010 to 2015. In a survey carried out in six Latin American countries, more than 80% of participants indicated that their health care system required substantial changes. 
One main issue recorded by these surveys was access to care, which is an aspect of PCC. One reason for the lack of implementation of PCC in practice could be that clear guidelines on how to put patients at the center of care and let them participate in decision‐making are missing. In line with that, Bravo et al. suggest that a clear operationalization of PCC in the Latin American context is needed. Thus, the aim of this scoping review is to analyze how PCC is conceptualized in Latin America. To date, there is no coherent definition of PCC in Latin America. Therefore, the declared aim to implement PCC in Latin America can hardly be achieved. The research question of this review is: How does the conceptualization of PCC in Latin America differ from the integrative model? The integrative model will be used as a point of reference because it is internationally established and based on international literature except in Latin America. It is thus suited for comparison and potential extension by the results of the scoping review. The comparison fosters the development of one joint conceptualization of PCC in Latin America and internationally. This enables comparability and therefore also communication and collaboration in research as well as in implementation. The result of this review can thus support the declared aim to implement PCC in Latin America.
METHODS To address the research question, a scoping review was conducted following the framework of Peters et al. Search strategy We developed a protocol following Peters et al. and defined the population as the general population in Latin America, the concept as PCC, and the context as health care in general. The protocol can be obtained from the authors upon request. Two reviewers (A. K. and A. M.) conducted the electronic literature searches in MEDLINE, EMBASE, PsycINFO, CINAHL, Scopus, Scielo, and Web of Science between April and May 2021. Articles were included if they were published between January 2006 and December 2021. We limited the search to 15 years, considering milestones in the implementation of PCC in Latin American countries (e.g., the health reform in Chile in 2006). Articles were included if published in the region's official languages: English, Spanish, French, and Portuguese. We carried out a secondary literature search by asking Latin American experts in PCC for relevant references. Finally, a gray literature search was conducted on the webpages of the ministries of health of each country in Latin America.
Eligibility criteria In the initial search, we included articles that contained one of the following terms in the title and abstract: patient‐centered, person‐centered, family‐centered (each with four spelling variations) and patient‐focused (with two spelling variations). In addition, titles and abstracts of the records had to contain either the term Latin America or the name of one of the 27 Latin American countries. In addition to scientific articles, opinion articles, discussion articles, editorials, letters to the editor, statements, and books were included. There was no exclusion criterion regarding the study design or setting. During the title‐abstract screening, studies and other records were excluded if they had not been carried out in Latin America or did not discuss their major content in the context of a Latin American country. Records were excluded if they did not discuss the key term, upon which they had been included in the initial search, in the context of health care. In the full‐text screening, records were only maintained, if they contained a definition of the key term.
Study selection process The identified records were imported into Endnote X9 and duplicates (1465) were removed. Two reviewers (A. K. and A. M.) conducted the title and abstract screening, and three reviewers (A. K., A. M., and C. T.) did the full‐text screening and data extraction. We randomly distributed the articles among the reviewers. Spanish articles were read only by two (A. M. and A. K.), and Portuguese articles were read by A. M. and reviewed by P. B. Each article was double‐screened and compared among the respective reviewers. The number of articles was balanced out between reviewers. Finally, A. M. and A. K. reviewed all the extracted information and codes and discussed discrepancies to reach an agreement. Doubts about terms and concepts were discussed by the team (A. K., A. M., I. S., and P. B.).
Data extraction The following data were extracted using a data extraction sheet: country of publication, the description of the main concept, study design, data acquisition, sample characteristics, health setting, and the conclusion drawn by the respective paper regarding the main concept. As we suggest that one joint model of PCC based on international research is desirable, we used the integrative model by Scholl et al. for the analysis of conceptualizations of PCC in Latin America. For every paper, we extracted whether each of the 15 dimensions of the integrative model was mentioned; this was done by deductive coding. Aspects related to PCC that were mentioned in the selected literature but not covered by the integrative model were extracted separately. A. K. and E. C. discussed whether these were new dimensions or could be subsumed into one of the 15 dimensions of the model. The references provided for the definitions of PCC and family-centered care (FCC) were analyzed regarding repetition in the sample and their origin.
Synthesis and analysis To answer the research question of how the conceptualization of PCC in Latin America differs from the integrative model, the following descriptive information was analyzed for the selected literature: frequency of publication type, distribution of publication years per the central concept, frequency of publications per country in Latin America, and repetition of authors who published the included literature. The extracted main concepts were grouped based on content, and the resulting division was considered in all further analyses. To understand the origin of conceptualizations of PCC in the selected literature, the references provided for the definitions of PCC were analyzed with respect to their origin and repetition between articles. The results of the deductive coding regarding the 15 dimensions of the integrative model were analyzed by A. K. with respect to the occurrence and frequency of each dimension. We carried out a content analysis of the conceptual definitions of PCC and FCC, following these steps: (1) development of the research question; (2) selection of the categories of analysis; (3) collection of data in a predetermined coding agenda; (4) revision of categories and coding agenda into meaningful clusters (principles, activities, results); (5) final interpretation of the results. All analyses were done in Microsoft Excel.
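For readers who prefer a scripted equivalent of this tallying step, the following is a minimal sketch; the published analysis was done in Microsoft Excel, and the coding values shown here are hypothetical.

```python
# Illustrative sketch of tallying dimension coverage by main concept (hypothetical data).
import pandas as pd

coding = pd.DataFrame({
    "main_concept": ["PCC", "PCC", "FCC"],        # one row per included article
    "patient_information": [True, True, True],    # True = dimension covered in that article
    "physical_support": [False, True, False],
})

for concept, group in coding.groupby("main_concept"):
    counts = group.drop(columns="main_concept").sum()
    percent = (counts / len(group) * 100).round(1)
    summary = pd.DataFrame({"n_articles": counts, "percent": percent})
    print(f"\n{concept}\n{summary}")
```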
RESULTS 3.1 3.2 Main concepts In the selected literature, PCC was discussed using diverse terms. These terms were grouped into PCC and FCC categories, which will be referred to as main concepts in the following. In most articles ( n = 22) the main concept was PCC. Twenty‐four different terms were used to refer to this main concept. One article described PCC in the context of the Biomedical Model of Care. FCC was the main concept of 10 articles and within these, four different terms were used to refer to FCC. For an overview of all the terms used to refer to the main concepts in the selected literature, see Table . 3.2.1 3.2.2 3.2.3 3.2.4 3.2.5 3.2.6 escriptive information The initial electronic literature search identified 3430 articles (1465 duplicates). Based on the secondary search and gray literature search, 18 articles were added. After the full‐text screening, 32 articles were included in the analysis. For the PRISMA 2020 flow chart, see Figure . The reasons for excluding articles during the title and abstract screening and full‐text screening were that articles turned out to be from outside Latin America. For example, studies were included based on the search term “Mexico,” but were later identified as studies from New Mexico, USA. Similarly, studies on PCC for Latin American immigrants in the United States of America, published by authors from the United States of America, were excluded. Another article was excluded because the study was completely conducted in Spain, even though one coauthor was affiliated with an institution in Latin America. At least 18 articles were excluded for missing a definition of the main concept the article was discussing (e.g., PCC). Papers focusing on person‐centered research methods instead of health care were also excluded. Finally, articles on the person‐centered therapy developed by Carl Rogers were excluded because the articles took a therapeutic perspective on PCC, instead of a system‐based perspective, which is of interest for this study. The selected literature comprises 29 research papers, , , , , , , , , , , , , , , , , , , , , , , , two health policy documents, , and one conference abstract. Most studies ( n = 27) were published between 2013 and 2021. For an overview of the publication year and the main concept of the included articles, see Figure . Almost half of the articles ( n = 15) were published by authors from Brazil, , , , , , , , , , , , , , , six were published by authors from Chile (two health policies, , four research articles , , ), and five by authors from Mexico (one conference abstract, four research articles , , ). Four papers were published by authors from several Latin American countries, , , , and one article was published by authors from Colombia and Honduras each. In the included literature on PCC, there were authors who repeatedly occurred, either as first‐, co‐authors, or last‐authors: Doubova, S. V. (5), Bravo, P. (3), Dois, A. (3), Martinez‐Vega, I.P. (2), Ministerio de Salud Chile (2). In the literature focused on FCC, each article was published by different authors.
Dimensions of the integrative model Each dimension of the integrative model was covered in the selected literature. For an overview of the dimensions of the integrative model, see Supporting Information: Appendix . Patient information was the dimension that was covered most often, with 31 articles mentioning it. The dimension covered the least was physical support , with four articles mentioning it. For an overview of the frequency with which the dimensions were covered in the selected literature, see Table . In eight articles on FCC, the health context was either neonatal or pediatric care. In five of these articles, the covered dimensions of the integrative model were described as referring to the family, not only the patient. For example, in a study on FCC at neonatal intensive care units, the clinician–patient relationship naturally included the clinician–family relationship. Similarly, the family was included in the other dimensions. In the articles on PCC, the patient's family was referred to in a separate dimension, namely the involvement of family and friends , as done in the integrative model. The dimensions patient information and involvement of family and friends were covered by all articles on FCC. However, the dimension physical support was not covered by any article on FCC. The dimensions patient information, essential characteristics of the clinician, and patient involvement in care were covered by over 80% of the articles on PCC. For a detailed overview of the frequency with which the dimensions were covered in articles on either PCC or FCC, see Table . For detailed overviews of the included literature and the coding of the dimensions of the integrative model, see Supporting Information: Appendices and .
Novel aspects of patient‐centeredness In the literature on PCC, the following aspects were mentioned that are not explicitly covered by any dimension of the integrative model proposed by Scholl et al. : “involvement of the local community,”(2) “patient as a multidisciplinary health care team member,” “acknowledgment of the family's potential.” In the literature on FCC, the following aspects were mentioned that are not covered by any dimension of the integrative model: “family as a care unit,”(9) “infrastructure to accommodate family members and to encourage their stay,”(2) “frequent reassessment of preferences as they may change over time.” These novel aspects could be used to extend distinct dimensions of the integrative model. However, we refrain from considering them aspects of PCC specific to Latin America.
Concept analysis We grouped the definitions of PCC and FCC into the following meaningful clusters: principles, activities, and results. Principles comprised autonomy, respect, collaboration, participation, and the form of care (coordinated and continuous). Activities included how PCC and FCC are implemented, for example, reviewing patient preferences, planning, evaluating, sharing information, and listening to the patient. As a result of the implementation of PCC and FCC, the impact on individuals and families stood out. For a complete overview of the concept analysis and the included definitions, see Supporting Information: Appendix .
Patient‐centered care The principles used in the definitions of PCC were dignity, respect, and participation. Autonomy and (co‐)responsibility were repeatedly named as well. PCC includes the following dimensions: biopsychosocial perspective; patient as a unique person; consideration of patient's values and beliefs; power and shared responsibility in care; therapeutic alliance to improve communication and participation in medical decision making; and the professional as a unique person . The named activities for implementation were observing the patient's preferences, needs and values, sharing information, and improving the communication for the continuity of care. The suggested results of these activities were that patients and their families feel encouraged to make joint decisions about their care, as well as increased patient satisfaction and self‐management.
Family‐centered care The principles standing out in the definitions of FCC were dignity, respect and participation of the patient and family, and collaboration with them. The central assumptions of FCC are dignity and respect, in which professionals should be able to listen to patients and their families, have respect for the knowledge and beliefs of the patient and his/her family, because these assumptions are included in care, shared information, active participation and collaboration . In addition, the family appeared as a subject of care and an essential source of support to the health care provider. The activity suggested for implementing FCC is sharing information with the family and the expected results are reduced anxiety and stress among the family members (Table ).
References given for the definitions of PCC and FCC In the selected literature, 89 different references were used to define PCC and FCC. The references were published by authors from 18 countries, and one was published by the World Health Organization. Of the 60 references that were provided for the definition of PCC, 37 were international (25 from the United States of America) and 23 were from Latin America. A total of 15 of the 23 Latin American references were from Brazil, and almost all of these (14) were cited by authors from Brazil. A total of 4 of the 60 references were cited twice. The analysis of the references provided for the definition of FCC showed that 29 references were international (eight from the United States of America) and 10 were from Latin America, more specifically from Brazil. Three references were cited twice. Also, one reference was cited once in a definition of PCC and once in a definition of FCC.
DISCUSSION The aim of this study was to identify how PCC is conceptualized in Latin American countries. The analysis showed that two closely related but distinguishable concepts are discussed in the literature: PCC and FCC. Even though diverse terms were used to refer to PCC and FCC, an overlap between the provided definitions and the integrative model, and thus with international literature, was found. In most papers, international literature was cited to define PCC/FCC. There was little overlap in these citations, thus, no specific model of PCC was repeatedly used. Novel aspects not covered by the integrative model emerged as well. Most frequently mentioned was the identification of the family members as units of care. Most dimensions of the integrative model were covered by at least two‐thirds of the included literature. Despite the reported differences to international literature, this shows that the conceptualization of PCC in Latin America considerably overlaps with the conceptualization in the global north. Regarding the dimensions described in the integrative model, we found that sharing information and patient involvement are most often mentioned in the literature. Physical support , teamwork and teambuilding , and integration of medical and nonmedical care were mentioned least. These results might reflect priorities but also the needs of the current health care systems in Latin America, as patients might continue to be placed at a passive role in their care. Novel aspects not explicitly mentioned in the integrative model emerged from Latin American literature. The importance of infrastructure and the possibility for accommodation of family members were named. This is supposed to reduce anxiety and stress of family members. Latin America is a diverse region with different health care systems. The health care systems are built on postdictatorship neoliberal economic–political models, which explains that access and infrastructure are still not fully guaranteed in all the states of the region. Thus, in line with previous research, our results imply that health care infrastructure is one problem in Latin America that needs to be addressed to guarantee universal access to care and to enable a cascade of PCC activities. The involvement of the local community has emerged as another novel aspect to the integrative model. In contrast to the involvement of the family, this aspect has not been explicitly stated in the international literature the integrative model is based on. Potential explanations are that community involvement is, dependent on the region, difficult to implement and thus less intuitive than involvement of families for example. Involvement of the local community has been proposed with the aim to make use of all given resources to improve the health care of individual patients. Another reason for the emergence of the local community as an aspect of PCC might be of historical nature. In Brazil, the Unified Health System was promoted by a health reform because there was a regionalized and decentralized network of health services, focusing on community participation. Other reasons might be a cultural imprint towards collectivism or a lack of resources. The content analysis showed that the primary principles identified in PCC and FCC in Latin America are dignity, respect, and participation. These findings are in line with the conceptualization of PCC and FCC outside Latin America. 
, The overlap can be explained by the fact that the references used to define PCC and FCC were primarily from non‐Latin American countries, mainly from the United States of America. On the one hand, this is in line with the idea of a standard model of PCC. On the other hand, these results show that there is a scarceness of research groups specialized in PCC in Latin America, who have worked on proposing a conceptualization relevant for their own context, which is a contrast to North America, Europe, or Australia. An international scoping review suggested FCC to be a part of PCC with a stronger focus on patient and family values, preferences, and needs. In contrast, our analysis showed differences between the concepts. Firstly, in the case of PCC, the focus was the patients themselves while the patient's family was referred to separately. The emphasis was placed on the co‐responsibility of the patient, excluding other significant actors such as relatives. In the definition of FCC, the focus was on the family and a collaboration established between the family and health professionals. This can be explained by the fact that the literature associates FCC with caring for children, elderly, or ailing individuals (not able to consent), thus, with the need for collaboration between family and HCPs. This focus can also be observed in the Latin American context. Secondly, we found differences in the activities and results of the two concepts. In PCC, the focus is on the encouragement of patients to take part in the decisions of their care, and the patients' satisfaction and self‐management. Contrary to PCC, for FCC the analysis showed that sharing information with the family is one of the most important activities aiming at the reduction of anxiety and stress of the family members, without necessarily enhancing an active involvement of the family members in the decision‐making process. This article has some limitations. Firstly, following the recommendations of Peters, no quality appraisal of the included literature was conducted. However, there is literature arguing in favor of an assessment of quality in scoping reviews similarly to systematic reviews. Secondly, the gray literature search involved asking Latin American experts in PCC for relevant references. The experts were identified by the Latin American coauthors. This acquisition of experts might not have been exhaustive. In future studies, multiple independent researchers could be asked to achieve an exhaustive search of experts and thus of gray literature. Thirdly, numerous articles discussed PCC but failed to provide any meaningful definition of the concept. As we required an explanation of the concept for our concept analysis, we excluded these articles. Similarly, we excluded articles that only contained a keyword in the abstract but not in the title. Even though that was a considered decision, it may have caused a loss of information about the conceptualization of PCC in Latin America. Despite the limitation to articles with keywords in the title, the full‐text screening resulted in numerous articles missing a definition of PCC. Therefore, we propose that the scoping review provides a justifiably complete overview of the conceptualization of PCC in Latin American research and health policies. Aside from the limitations, the scoping review offers distinctive strengths. The coauthors involved in the scoping review are experts in the field of PCC in Latin America and in Germany. 
The team jointly developed a search strategy aimed at identifying as many sources on PCC as possible, even though the terminology in Latin America is diverse. Another strength is the identification of the conceptualization through the use of a scoping review methodology. The method offers a broad overview of terminology and definitions and the opportunity to draw connections between existing studies. Thus, this scoping review adds to previous research that did not cover studies from all over Latin America. Our study shows that research on PCC is limited to a few Latin American countries. A strategy to support research on PCC in multiple countries in Latin America could be transnational studies on PCC, involving researchers and data from more than one country. The results also imply that future studies should clearly define the concept they aim to investigate. These strategies can foster the development of a common conceptualization of PCC. Future research can expand the present findings by assessing the needs of Latin American health care systems regarding PCC and the barriers to its implementation. Novel aspects of PCC emerged from the present study. An integration of these novel aspects into the integrative model, either as new dimensions or as elements of existing dimensions, should be investigated in future empirical studies. The results also showed that few health ministries in Latin America have published documents discussing PCC, even though PCC is a declared aim. Thus, the concept should be defined and specific aims regarding PCC should be described in health care policies. The definition of PCC should be based on empirical research.
CONCLUSION This scoping review synthesized and compared the conceptualization of PCC in 32 selected articles from Latin America published between 2006 and 2021. The analyses demonstrated a strong overlap between the integrative model and the definitions of PCC given in the literature. A conceptual distinction between PCC and FCC has been found. However, the results indicate a lack of standardization of the concept PCC in Latin America. The results will be used to develop a mixed‐methods study to understand the needs, barriers, and facilitators regarding PCC in Latin America. Based on the outcomes, the integrative model will be adapted to the Latin American context. The aim is to introduce a standard model for PCC that enables comparability of research, a transfer of outcomes between countries, and increasingly efficient communication on PCC in research, health policy, and clinical practice.
All authors contributed to the conception and design of the study. Anne Klimesch and Alejandra Martinez‐Pereira carried out the searches and the interpretation of the results. Anne Klimesch, Alejandra Martinez‐Pereira, and Cheyenne Topf were involved in the data extraction. All authors contributed to the writing of the manuscript and approved the submitted version.
Anne Klimesch, Alejandra Martinez‐Pereira, and Cheyenne Topf declare that there are no conflicts of interest. Martin Härter, Isabelle Scholl, and Paulina Bravo declare that they currently are (Martin Härter, Paulina Bravo) or have been (Isabelle Scholl) members of the executive board of the International Shared Decision‐Making Society, which has the mission to foster the implementation of shared decision‐making and patient‐centered care. Paulina Bravo, Martin Härter, and Isabelle Scholl have no further conflicts of interest.
|
Latest Developments in “Adaptive Enrichment” Clinical Trial Designs in Oncology | 0d8a026e-18ff-40e8-a111-162885b7f276 | 11530510 | Internal Medicine[mh] | As cancer has become better understood on the molecular level with the evolution of gene sequencing techniques, considerations for individualized therapy using predictive biomarkers (those associated with a treatment’s effect) have shifted to a new level. Traditional randomized trial designs tend to either oversimplify or overlook differences in patients’ genetic and molecular profiles, either by fully enriching eligibility to a marker subgroup or enrolling all-comers without prospective use of potentially predictive biomarkers. In the former case of marker enrichment, one cannot learn about a marker’s true predictive ability from the trial’s conduct (as marker-negative patients are excluded); in the latter case ignoring the biomarker, the end result may be a “washing out” of the treatment effect when a predictive marker truly does exist within the sampled patient population. In the last decade or so, randomized “adaptive enrichment” clinical trials have become increasingly utilized to strike a balance between enrolling all patients with a given tumor type, versus enrolling only a subpopulation whose tumors are defined by a potential predictive biomarker related to the mechanism of action of the experimental therapy (see for example ). On a high level, adaptive enrichment designs take the form of a clinical trial that begins by randomizing participants to a targeted versus a control therapy regardless of marker value, then adapts through a series of one or more interim analyses to potentially limit subsequent trial recruitment to a marker-defined patient subpopulation that is showing early signals of enhanced treatment benefit. In this review article, we first discuss the “traditional” presentation of both enrichment and adaptive enrichment designs and their decision rules and describe statistical or practical challenges associated with each. Next, we introduce innovative design extensions and adaptations to adaptive enrichment designs proposed during the last few years in the clinical trial methodology literature, both from Bayesian and frequentist perspectives. Finally, we review articles in which different designs within this class are directly compared or features are examined, and we conclude with some comments on future research directions. Enrichment Trial DesignsAdaptive Enrichment Trial DesignsTo motivate discussion of adaptive enrichment designs and why they are useful, it is helpful to first understand enrichment trial designs , or designs that focus only on a subset of the patient population from the beginning. Design Details: In the setting of targeted therapies with strong prior evidence or clinical rationale supporting efficacy only within a biomarker-selected subgroup, “marker-enriched” or enrichment trial designs are used to confirm signal or efficacy only in that selected subgroup. In these types of trials, patients are screened and classified into prespecified marker positive and negative subgroups at or prior to enrollment, with only marker positive patients eligible to remain on study and receive protocol-directed targeted therapy. This usually takes the form of a small, single-arm phase II study without a randomized comparator, but in some settings, comparisons against a randomized non-targeted standard of care therapy might be made (see Fig. ). 
Example: An example of a clinical trial with an enrichment design is the Herceptin Adjuvant (HERA) trial. The HERA trial is a phase III, randomized, three-arm trial that studied the efficacy of 1 year versus 2 years of adjuvant trastuzumab versus control (no additional treatment) in women with human epidermal growth factor receptor 2 (HER2)-positive early breast cancer after completion of locoregional therapy and chemotherapy . HER2 is overexpressed in 15–25% of breast cancer and trastuzumab, a monoclonal antibody, binds the HER2 extracellular receptor . The primary outcome was disease-free survival and using an intention-to-treat analysis, significant treatment benefit was demonstrated for 1 year of trastuzumab compared to the control arm. Limitations: One important limitation of enrichment designs is that a marker’s predictive ability to select patients for treatment is assumed to already be known and cannot be validated from the trial itself. It is theoretically possible that a pre-defined marker-negative subgroup might also benefit from the targeted treatment, but that knowledge won’t be updated with an enrichment design. For example, a pre-clinical study found that trastuzumab can decrease cancer cell proliferation in HER2 negative and HER2 phosphorylation at tyrosine Y877 positive breast cancer cell lines, which is comparable to the drug effect in HER2 positive breast cancer cell lines, showing that the HER2 negative subpopulation may also benefit from trastuzumab . Around the same time, however, the randomized study B-47 conducted by the National Surgical Adjuvant Breast and Bowel Project (NSABP) group showed no effect of trastuzumab in HER2-low patients . Another limitation of enrichment trial designs is the necessity of establishing predefined subgroups during the study planning phase, which becomes complicated when dealing with biomarkers that are measured on a continuous scale, like expression levels or laboratory values. Determining an appropriate threshold to divide patients into “positive” and “negative” groups is not always straightforward, validated, or effective in distinguishing the effect of the targeted treatment. Selecting an incorrect threshold during trial design can result in an ineffective or underpowered study, and revising the decision once the trial has begun accrual is not advisable. Adaptive enrichment trial designs, on the other hand, are an attractive solution to the inherent weaknesses of a fully enriched trial design. Design Details: An adaptive enrichment trial design initially enrolls patients with any marker value(s) and randomizes them to experimental targeted versus standard (non-targeted) therapy. As the trial progresses, accrual may be subsequently refined or restricted to patients with certain marker values according to those showing initial efficacy on the basis of one or more interim analyses. This design is randomized out of necessity, so that treatment-by-marker interactions may be computed, and adaptations based on differential treatment effects by marker subgroups can be facilitated. At the interim analyses, according to pre-specified decision rules, a trial may stop early for futility or efficacy, either overall, or within a marker-defined subgroup. If the biomarker of interest is not naturally dichotomous, the same interim analyses may also be used to select or revise marker cutpoints (see Fig. ). 
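To make the interim logic concrete, the sketch below encodes the kind of decision rule described above: stop early for efficacy or futility, continue with all-comers, or restrict further accrual to the marker-positive subgroup. It is a generic illustration rather than the algorithm of any specific published design, and the z-statistic inputs and fixed thresholds are assumptions chosen purely for demonstration.

```python
from dataclasses import dataclass

@dataclass
class InterimResult:
    z_overall: float     # test statistic for the treatment effect, full population
    z_marker_pos: float  # test statistic, marker-positive subgroup
    z_marker_neg: float  # test statistic, marker-negative subgroup

def interim_decision(res: InterimResult,
                     z_efficacy: float = 2.8,   # illustrative early-efficacy boundary
                     z_futility: float = 0.0):  # illustrative futility boundary
    """Schematic interim rule for a two-stage adaptive enrichment trial."""
    # Early efficacy in the full population: stop the whole trial.
    if res.z_overall >= z_efficacy:
        return "stop_for_efficacy"
    # No signal overall or in the marker-positive subgroup: stop for futility.
    if res.z_overall <= z_futility and res.z_marker_pos <= z_futility:
        return "stop_for_futility"
    # Benefit concentrated in marker-positive patients, none in marker-negative
    # patients: enrich, i.e., restrict second-stage accrual to marker-positives.
    if res.z_marker_pos > z_futility and res.z_marker_neg <= z_futility:
        return "continue_marker_positive_only"
    # Otherwise continue enrolling all-comers to the second stage.
    return "continue_full_population"

# Example: a subgroup signal with no apparent benefit in marker-negative patients.
print(interim_decision(InterimResult(z_overall=1.1, z_marker_pos=2.0, z_marker_neg=-0.3)))
```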
Example: One real-world example of an adaptive enrichment design is the Morphotek Investigation in Colorectal Cancer: Research of MORAb-004 (MICRO), which is an adaptive, two-stage, phase II study assessing the effect of ontuxizumab versus placebo in patients with advanced metastatic colorectal cancer . Ontuxizumab, a monoclonal antibody treatment targeting endosialin function, was expected to be more effective in patients with endosialin-related biomarkers. Since the biomarkers were continuous in nature and the optimal cutoffs were unknown, the study included an assessment for determining the best cutoffs at an interim analysis, where progression-free survival (PFS) served as the primary endpoint. Initially, the goal was to demonstrate the treatment effect of ontuxizumab either overall or within subgroups defined by biomarkers. However, the interim analysis revealed that none of the biomarkers had a predictive relationship with treatment outcome. Consequently, the design shifted to a non-marker-driven comparison. Additionally, the interim analysis showed early futility for ontuxizumab compared to placebo overall, terminating the trial early due to lack of efficacy. In summary, this adaptive enrichment design concluded both the biomarker assessment and the evaluation of the therapy early, and additional resources and patients were spared. However, it is worth noting that it may have been underpowered to identify modestly-sized interaction effects, had they been present. Limitations: Adaptive enrichment trial designs do have some statistical challenges, including limitations faced in the design of the MICRO trial. These include estimation of subgroup-specific treatment effects, particularly when the marker prevalence is low, as a sufficiently large sample size is required to have enough patient-level information at interim analysis for informative subgroup selection. As a practical consideration, the primary endpoint must be quickly observed relative to the pace of accrual, to allow time for impactful adaptations based on observed outcomes relatively early in the trial. Another challenge is how exactly one should select cutpoints for adaptation of accrual. In the MICRO trial, at the interim analysis, a series of Cox proportional hazards models were fit over a grid of possible cutpoints, and the significance of a marker by treatment interaction term was evaluated. A pre-specified level of statistical significance for the interaction, along with a clinically meaningful effect in the marker “positive” group defined by the interaction, would warrant potential accrual restriction; however, this approach treated truly continuous biomarkers as binary in its implementation, which results (at least theoretically) in a loss of information and potential loss of power. Several groups have attempted to extend or modify the standard adaptive enrichment trial design in various ways to address statistical shortcomings or tailor the strategy to various applications. The remainder of this paper provides an overview of some of these recent developments. While we admit such designations are rather arbitrary, we present this work separately by Bayesian and frequentist approaches, so that structural similarities among them may be readily described and compared. Bayesian ApproachesFrequentist ApproachesXu et al. 
proposed an adaptive enrichment randomized two-arm design that combines exploration of treatment benefit subgroups and estimation of subgroup-specific effects in the context of a multilevel target product profile, where both minimal and targeted treatment effect thresholds are investigated . This adaptive subgroup-identification enrichment design (ASIED) opens for all-comers first, and subgroups identified as having enhanced treatment effects are selected at an interim analysis, where pre-set minimum and targeted treatment effects are evaluated against a set of decision criteria for futility or efficacy stopping for all-comers or possible subgroups. A Bayesian random partition (BayRP) model for subgroup-identification is incorporated into ASIED, based on models proposed by Xu et al. and Guo et al. . Due to the flexibility of the BayRP model, biomarkers can be continuous, binary, categorical, or ordinal, and the primary endpoint types can be binary, categorical, or continuous. Per the authors, extensions to count or survival outcomes are also possible. BayRP was implemented due to its robustness, but other Bayesian subgroup identification methods could be used as well, like Bayesian additive regression tree (BART) or random forests for larger sample sizes . A tree-type random partition of biomarkers is used as a prior and an equally spaced k-dimensional grid constructed from k biomarkers is used to represent possible biomarker profiles. The operating characteristics of ASIED as a trial design was evaluated by simulations with 4 continuous biomarkers, a total sample size of 180, an interim analysis after 100 patients were enrolled, a minimum desired treatment effect of 2.37 and target treatment effect of 3.08 on a continuous score scale. ASIED’s recommendations were close to the expected results. However, the number of simulated trials was only 100, which could yield lower precision of the estimated operating characteristics. Another limitation is that the partition of the biomarker profile was limited to at most four biomarker subgroups due to the small sample size in each partition. Another Bayesian randomized group-sequential adaptive enrichment two-arm design incorporating multiple baseline biomarkers was proposed by Park et al. . The design’s primary endpoint is time-to-event, while a binary early response acts as a surrogate endpoint assisting with biomarker pruning and enrichment to a sensitive population at each interim analysis. Initially, the study is open for all-comers and the baseline biomarkers can be binary, continuous, or categorical. The first step at each interim analysis is to jointly select covariates based on both the surrogate and final endpoints by checking each treatment by covariate interaction. The second step is to recalculate the personalized benefit index (PBI), which is a weighted average posterior probability indicating patients with selected biomarkers who benefit more from the experimental treatment. The refitted regression from the variable selection step will redefine the treatment-sensitive patients, and only patients with PBI values larger than some pre-specified cutoff continue to be enrolled to the trial. The third step is to test for futility and efficacy stopping by a Bayesian group sequential test procedure for the previously identified treatment-sensitive subgroups. In simulations, AED was compared with group sequential enriched designs called InterAdapt and GSED, an adaptive enrichment design and all- comers group sequential design . 
The maximum sample size considered was 400, and patients were accrued by a Poisson process with 100 patients per year. Two interim analyses took place after 200 and 300 patients enrolled, and 10 baseline biomarkers were considered. Across each of the seven scenarios, prevalence of the treatment-sensitive group was set to be 0.65, 0.50, or 0.35. While nearly all the designs controlled the nominal Type I error to 0.05, AED had higher probabilities of identifying the sensitive subgroup and correctly concluding efficacy than other designs. Also, 1000 future patients were simulated and treated by each design’s suggested treatment, and AED had the longest median survival time overall. One stated limitation of this work was its inability to handle high dimensional baseline biomarker covariates, as the authors suggest considering no more than 50 baseline covariates in total. Also, biomarkers in this design are assumed to be independent, though selection adjustment for correlated predictors is mentioned. It is worth noting that early response (as used by this design) has not been validated as a good surrogate for longer-term clinical endpoints. To address the scenario of a single continuous predictive biomarker where the marker-treatment relationship is continuous instead of a step function, Ohwada and Morita proposed a Bayesian adaptive patient enrollment restriction (BAPER) design that can restrict the subsequent enrollment of treatment insensitive biomarker-based subgroups based on interim analyses . The primary endpoint is assumed to be time-to-event, and the relationship between the biomarker and treatment effect is assumed to increase monotonically and is modeled via a four-parameter change-point model within a proportional hazard model. Parameters are assumed to follow non-informative priors, and the posterior distributions are calculated using the partial likelihood of the Cox proportional hazard model. At each interim analysis, decisions can be made for a subgroup or the overall cohort. In addition, treatment-sensitive patients can be selected based on a biomarker cutoff value, which is determined by searching over the range of biomarker values and picking the one with the highest conditional posterior probability of achieving the target treatment effect. Simulations were conducted to compare the proposed method against both a similar method without enrichment and a design using a step-function to model marker-treatment interaction effects without enrichment. The maximum sample size considered was 240 with two interim analyses, and the assumed target hazard ratio was 0.6. The results show that the proposed BAPER method decreases the average number of enrolled patients who will not experience the targeted treatment effect, compared to designs without patient selection. Also, BAPER has a higher probability of correctly identifying the cutoff point that achieves the target hazard ratio. However, BAPER has certain restrictions: the biomarker cannot be prognostic, as the main effect for the biomarker is excluded from the proportional hazard model. Also, the design does not consider the distribution of the biomarker values themselves, so a larger sample size is required when the prevalence of the treatment sensitive (or insensitive) population is small. 
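The cutoff search at the heart of this approach can be illustrated with a simplified sketch: loop over candidate biomarker cutoffs, fit an ordinary Cox model to the patients at or above each cutoff, and use a normal approximation to the partial-likelihood estimate as a stand-in for the posterior of the log hazard ratio. This does not reproduce BAPER's change-point model or its priors, and the column names, target hazard ratio, and minimum event count are assumptions made only for the example.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm
from lifelines import CoxPHFitter

def select_cutoff(df: pd.DataFrame, cutoffs, target_hr: float = 0.6):
    """Pick the biomarker cutoff with the highest approximate posterior
    probability that the treatment hazard ratio is below `target_hr`
    among patients at or above that cutoff.

    `df` is assumed to contain the columns 'time', 'event', 'treatment'
    and 'biomarker'.
    """
    best = None
    for c in cutoffs:
        sub = df[df["biomarker"] >= c]
        if sub["event"].sum() < 20:   # skip cutoffs leaving too few events
            continue
        cph = CoxPHFitter()
        cph.fit(sub[["time", "event", "treatment"]],
                duration_col="time", event_col="event")
        log_hr = cph.params_["treatment"]
        se = cph.standard_errors_["treatment"]
        # Normal approximation to the posterior of the log hazard ratio under
        # a vague prior: probability that the true HR beats the target.
        prob = norm.cdf((np.log(target_hr) - log_hr) / se)
        if best is None or prob > best[1]:
            best = (c, prob)
    return best  # (selected cutoff, approximate posterior probability)
```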
Focusing on an optimal decision threshold for a binary biomarker which is either potentially predictive or both prognostic and predictive, Krisam and Kieser proposed a new class of interim decision rules for a two-stage, two-arm adaptive enrichment design . This approach is an extension of Jenkins et al.’s design but with a binary endpoint instead of a time-to-event outcome . Initially, their trial randomizes all patients from two distinct subgroups (i.e., a binary biomarker), assuming one subgroup will have greater benefit, and the sample size is fixed per stages by treatment group. At the first interim analysis, the trial might stop early for futility, continue enrolling to only the marker-positive group, or continue enrolling the full population, while using Hochberg multiplicity- corrected p-values for these decisions. When the full population proceeds to the second stage, it remains possible that efficacy testing will be performed both overall and in the treatment-sensitive subgroup if the biomarker is found to be predictive or prognostic, or only within the total population if the biomarker is not predictive. The critical boundaries for subgroup decisions minimize the Bayes risk of a quadratic loss function by setting the roots of partial derivatives as optimal thresholds, assuming the estimated treatment effects follow bivariate normal distributions with design parameters from uniform prior distributions. A relevance threshold for the effect size, which serves as the minimal clinical meaningful effect, also needs to be prespecified. Optimal decision threshold tables are presented for a biomarker that is predictive, both predictive and prognostic, or non-informative, with sample sizes ranging from 20 to 400 and subgroup prevalence values of 0.1, 0.25 and 0.5 considered. In their simulations, the sample size is 200 per group per stage (for a total trial sample size of 800), the treatment effect (response rate) in one of the subgroups is 0.15, and the biomarker is both predictive and prognostic. Optimal decision rules with three different assumptions for the biomarkers (predictive, predictive and prognostic, non-informative) and subgroup prevalence are compared with a rule just based on relevance thresholds. Power is increased under the proposed decision rules when the correct biomarker assumption is made. Since the decision thresholds incorporate sample size and subgroup prevalence information, one major limitation is that knowledge about the biomarkers must be strong enough pre-trial to prespecify the required parameters. Nesting frequentist testing procedures within a Bayesian framework, Simon and Simon proposed a group-sequential randomized adaptive enrichment trial design that uses frequentist hypothesis tests for controlling Type I error but Bayesian modeling to select treatment-sensitive subgroups and estimate effect size . The primary endpoint in their models is binary, and multiple continuous biomarkers are allowed, comprising a vector of covariates for each patient. Patients are sequentially enrolled in a total of K blocks, and enrollment criteria for the next block are refined by a decision function, which is built on the block adaptive enrichment design by Simon and Simon . The final analysis is based on inverse normal combination test statistics using data from the entire trial. A prior for the response rate in each arm needs to be prespecified, which is based on both the biomarker covariates and a utility function. 
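Because several of the designs discussed in this section rely on the same combination-test machinery for their final analysis, it is worth writing out the generic weighted inverse normal combination of stage-wise p-values; the specific weights and hypotheses differ between the reviewed proposals, so the expression below is the textbook form rather than the exact specification used in any one of them.

```latex
Z \;=\; \frac{w_1\,\Phi^{-1}(1-p_1) \;+\; w_2\,\Phi^{-1}(1-p_2)}{\sqrt{w_1^{2}+w_2^{2}}},
\qquad \text{reject } H_0 \text{ if } Z \ge z_{1-\alpha}
```

Here \(p_1\) and \(p_2\) are the one-sided p-values computed from the first- and second-stage data, \(\Phi^{-1}\) is the standard normal quantile function, and the prespecified weights \(w_1\) and \(w_2\) are typically chosen in proportion to the planned stage sample sizes or expected event counts. Because the weights are fixed in advance, the Type I error rate is preserved even when the second-stage population or sample size is modified at the interim analysis.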
Different utility functions can be applied according to the trial’s goal, and the one adopted here is the expected future patient outcome penalized by accrual time. Using the conditional posterior for the previous block’s information, simulations are conducted to find the optimal enrollment criteria based on the utility function. The expected treatment effect given covariates can be estimated by the posterior predictive distribution for the response rate at the end of trial. In the presented simulation study, there are two continuous biomarkers and 300 patients accrued in two or three enrollment blocks, with three logistic and three cutpoint models for the biomarker-response relationships. An unenriched design and an adaptive enrichment strategy with prespecified fixed cutpoints are compared with the proposed design. The two adaptive enrichment designs have higher power than the unenriched design to detect a treatment sensitive subgroup, and the enrichment designs have higher power when there are three versus two enrollment blocks. Compared with the fixed cutpoint enrichment method, the proposed design generally correctly identifies the treatment-sensitive subgroup while avoiding non-ideal pre-determined cutoff points for the following enrollment criteria. Though the effect size estimation is biased under the proposed design, the bias is more severe under the unenriched design. Graf et al. proposed to optimize design decisions using utility functions from the sponsor and public health points of view in the context of a two-stage adaptive enrichment design with a continuous biomarker . Similar to Simon and Simon’s method, the proposed design’s decisions are based on frequentist hypothesis tests, while the utility functions are evaluated under the Bayesian approach. In this design, patients are classified into marker positive and marker negative groups at enrollment, and decisions can be made with respect to the full population or the marker positive subgroup only. Closed testing procedures along with Hochberg tests are used to control the family wise type I error rate. Parameters called “gain”, which quantify the benefit rendered by the trial to the sponsor and society, need to be pre-specified. The utility function under the sponsor view is the sum of the gain multiplied by the probability of claiming treatment efficacy in the full population or a marker-positive group, respectively. In addition to gain and success probabilities, the public health utility function also considers the true effect sizes in subgroups, and safety risk as a penalization parameter. Prior distributions are used to model treatment effects in each subgroup to account for uncertainty, but the authors assume that only the marker negative group can be ineffective, and only point priors are used, which leads to a single probability that the treatment is effective in just the marker positive subgroup or the full population. This optimized adaptive design is compared with a non-adaptive design when the total sample sizes are the same. The adaptive design provides larger expected utility in both utility functions only when the values are intermediate in gain from treatment efficacy and the prior point probability. One limitation is that those utility functions can only compare designs with the same total sample size and the cost of running a trial is not included. Serving as an extension of Graf et al.’s work by incorporating a term for the trial cost in utility functions, Ondra et al. 
derived an adaptive two-stage partial enrichment design for a normally distributed outcome with subgroup selection and optimization of the second stage sample size . In a partial enrichment design, the proportion of the marker-positive subjects enrolled does not need to be aligned with the true prevalence. At interim analysis, the trial can be stopped for futility, or continued in only the marker-positive population or the full population. The final analysis is based on the weighted inverse normal function with Bonferroni correction. Utility functions used for optimization are from societal or sponsor perspectives. Expected utility is calculated by numerical integration on the joint sampling distribution of two stage-wise test statistics, with the prior distributions for the treatment effect in each subgroup. The optimal sample size for the second stage maximizes the conditional expected utility given the first stage test statistics and sample size used, and the optimal first stage sample size maximizes the utility using the solved optimal number for the second stage. The optimization function is solved recursively by dynamic programming, and the optimal design in terms of the sample size is obtained. The optimized adaptive enrichment design is compared with an optimized single- stage design for subgroup prevalence ranging from 10 to 90%, with both weak and strong predictive biomarker priors considered. Expected utilities are higher in both sponsor and societal views in the adaptive design. Also, even if the prior distribution for the effect size used in the design differs from the true distribution, the proposed adaptive design is robust in terms of expected utilities when the biomarker’s prevalence is high enough. One limitation is that the endpoint needs to be observed immediately, which might be addressed by a short-term surrogate endpoint—though to date, validated short-term endpoints are rare in oncology. Fisher et al. proposed an adaptive multi-stage enrichment design that allows sub-group selection at an interim analysis with continuous or binary outcomes . Two subpopulations are predefined, and the goal is to claim treatment efficacy in one of the subpopulations or the full population. The cumulative test statistics for the subgroups and the full population are calculated at each interim analysis and compared against efficacy and non-binding futility boundaries. To control the family-wise Type I error rate (FWER), two methods for constructing efficacy boundaries are presented. One is proposed by Rosenblum et al. that spends alpha based on the covariance matrix of test statistics by populations (two subpopulations and the full population) and by interim stages . Another is the alpha reallocation approach . The design parameters, including sample size per stage, futility boundaries, etc., are optimized to minimize the expected number enrolled or expected trial duration using simulated annealing, with constraints on power and Type I error. If the resulting design does not meet the power requirement, the total sample size will be increased until the power requirement is met. The optimized adaptive design is compared with a single-stage design, optimized single-stage design, and a multi-stage group sequential design with O’Brien-Fleming or Pocock boundaries using actual trial data from MISTIE and ADNI . 
For the MISTIE trail, the proposed designs are optimized by the expected number enrolled, which is lower than for the optimized single-stage design and group-sequential design, but the maximum number enrolled is still lower in the simple single-stage design. In the ADNI trial, when the expected trial duration is optimized, the proposed design has a slightly shorter expected duration but a longer maximum duration than the optimized single-stage design. Similar to the aforementioned Bayesian approaches without predefined sub-populations, Zhang et al. proposed a two-stage adaptive enrichment design that does not require predefined subgroups . The primary outcome is binary, and a collection of baseline covariates, including biomarkers and demographics, is used to define a treatment-sensitive subgroup. The selection criteria are based on a prespecified function modeling the treatment effect and marker by treatment interaction using first stage data. The final treatment effect estimate is a weighted average of estimates in each stage. To minimize the resubstitution bias from using first stage data in subsequent subgroup selection and inference, four methods for estimating the treatment effect and variance for the first stage are discussed: naive approach, cross-validation, nonparametric bootstrap, and parametric bootstrap. To compare those estimation methods, ECHO and THRIVE trial data are used for the simulation with a total sample size of 1000. The first stage has 250, 500 or 750 subjects, and the function used to simulate outcomes is the logistic regression model. The results show that the bootstrap method is more favorable than both the naive estimate (which has a large empirical bias) and the cross-validation method (which is overly conservative). The weight for each stage and first stage sample size need to be selected carefully to reach a small root mean squared error (RMSE) and close-to-nominal one-sided coverage. Though a trial can stop due to inability to recruit to a subset resulting from restricted enrollment, the proposed method does not include an early stopping rule for futility or efficacy. In order to reduce sample size while assessing the treatment effect in the full population, Matsui and Crowley proposed a two-stage subgroup-focused sequential design for time-to-event outcomes, which could extend to multiple stages . In this design, patients are classified into two subgroups by a dichotomized predictive marker, with the assumption that the experimental treatment is more efficacious in the marker-positive subgroup. The trial can proceed to the second stage with one of the subgroups, or the full population, but treatment efficacy is only tested in the marker-positive group or the full population at the final analysis. Choices of testing procedures are fixed-sequence and split-alpha. At the interim analysis, a superiority boundary for the marker-positive subgroup and a futility boundary for the marker-negative subgroup are constructed. The superiority boundary is calculated to control the study-wide alpha level, while the futility boundary is based on a Bayesian posterior probability of efficacy with a non-informative prior. The required sample sizes for each subgroup are calculated separately, and the hazard ratio for the marker-positive subgroup is recommended to be 0.05–0.70 under this application. 
The proposed design is compared with a traditional all-comers design, an enriched design with only marker-positive subjects, a two-stage enriched design, and a traditional marker-stratified design. Different scenarios are considered including those with no treatment effect, constant treatment effect in both groups with hazard ratio (HR) = 0.75, a nearly qualitative interaction with HRs = 0.65 and 1, and a quantitative interaction with HRs = 0.7 and 0.8. The marker prevalence is set to 0.4, and the accrual rate is 200 patients per year. When using the split-alpha test, the proposed design has greater than 80% power to reject any null hypothesis in the alternative cases, but the traditional marker-stratified design also provides enough power under all cases. The number screened and the number randomized are reduced for the proposed design compared to the traditional marker stratified design, but the reduction is only moderate. To determine whether the full population or only the biomarker-positive subgroup benefit more from the experimental treatment, Uozumi and Hamada proposed a two-stage adaptive population selection design for a time-to-event outcome, an extension of methods from Brannath et al. and Jenkins et al. . The main extension is that the decision-making strategy at the interim analysis incorporates both progression-free survival (PSF) and overall survival (OS) information. Also, OS is decomposed into time-to-progression (TTP) and post-progression survival (PPS) when tumor progression has occurred, to account for the correlation between OS and PFS. The combination test approach is used for the final analysis based on Simes’ procedure . The hypothesis rejection rule for each population is a weighted inverse normal combination function with prespecified weights based on the expected number of OS events in each stage. At the interim analysis, a statistical model from Fleischer et al. under the semi-competing risks framework is applied to account for the correlation between OS and PFS . The interim decision rule uses the predictive power approach in each population, extending Brannath et al.’s method from single endpoint to multiple endpoints with a higher weight on PFS data due to its rapid observation. In the simulation, a dichotomized biomarker is used with a 50% prevalence. Four scenarios are considered, where hazard ratios in the marker-positive subgroup are always 0.5 and are higher in the marker-negative subgroup. For simplicity, the HR is the same for TTP, PPS, and death. FWER is controlled for all cases, but it is a little too conservative when the treatment is effective. The proposed design has a higher probability of identifying the treatment-sensitive population at the interim analysis, particularly when the PPS effect is large, those probabilities are similar between using OS or PFS alone or the combined endpoints when the PFS effect is small. One limitation of this design is that sample size calculations are not considered. Instead of a single primary endpoint, Sinha et al. suggested a two-stage Phase III design with population enrichment for two binary co-primary endpoints, which is an extension of Magnusson and Turnbull’s work with co-primary endpoints . The two binary endpoints are assumed to be independent, and the efficacy goal should be reached in both endpoints. With two distinct predefined subgroups, a set of decision rules stops the non-responsive subgroups using efficient score statistics. 
The futility and efficacy boundary values, which do not depend on the marker prevalence, are the same for both endpoints due to independence. The lower and upper stopping boundaries are calculated by alpha spending functions, and FWER is strongly controlled. Simulations were conducted assuming biomarker prevalences of 0.25 or 0.75 and weighted subgroup effect sizes of 0, 1, and 2 as the means of efficient score statistics under normal distribution. The results show that the proposed design can reduce false-negative results for heterogeneous treatment effects between subgroups. The authors state the possibility of extending the design to a bivariate continuous outcome, while an extension to bivariate survival would be more challenging. Kimani, Todd, and Stallard derived a uniformly minimum variance unbiased point estimator (UMVUE) of treatment effect in adaptive two-arm, two-stage enrichment design with a binary biomarker . Based on the Rao-Blackwell theorem, UMVUE for the treatment effect conditional on the selected subgroup is derived with and without prior information on maker prevalence. The proposed estimator is compared with the naive estimator, which is biased but with a lower mean squared error (MSE) when prevalence is known. The estimator is robust, with and without prior information on marker prevalence. Kimani et al. developed estimators for a two-stage adaptive enrichment design with a normally distributed outcome . A predictive continuous biomarker is used to partition the full population into a prespecified number of subgroups, and the cutoff values are determined at the interim analyses based on stage I observations. To estimate the treatment effect after enrichment for the selected subgroup, a naive estimator, uniformly minimum variance conditional unbiased estimator (UMVCUE), unbiased estimator, single- iteration and multiple-iteration biased-adjusted estimators, and two shrinkage estimators are derived and compared. Though no estimator is superior in terms of bias and MSE in all scenarios, UMVUE is recommended by the authors due to its mean unbiasedness. Tang et al. evaluated several proposed adaptive enrichment designs with a binary biomarker against the traditional group sequential design (GSD) for a time-to-event outcome . Type I error is controlled, and the subpopulation is selected by Bayesian predictive power. Adaptive design A selects the subgroup after considering futility and efficacy stopping decision. Design B selects the subgroup when the targeted number of events are observed in full population, which can be earlier than the interim analysis. Design C selects the subgroup only after the full population has reached a futility rule. Design D proceeds with the subgroup or full population by checking the treatment effect in the complementary subgroup, proposed by Wang et al. . When an enhanced treatment effect exists in the subpopulation, all of these adaptive designs could improve study power compared to GSD. Furthermore, Design C generally provides higher power across all scenarios among all the adaptive designs. Benner and Kieser explored how the timing of interim analyses would affect power in adaptive enrichment designs with a fixed total sample size for a continuous outcome and binary marker . Two subgroup selection rules are considered: the estimated treatment effect, or the estimated treatment effect difference between the subgroup and the full population (as opposed to the complement of the subgroup). 
When using the first selection rule, early timing increases power when the marker prevalence and marker cutoff values are low. However, the interim analysis timing’s impact on power is small when marker prevalence is high. If absolute treatment effect is used instead, earlier timing leads to power loss in general. Power depends more on the marker threshold, prevalence, and treatment effect size when interim timing is later than when half of the total sample size have observed outcomes. Kunzmann et al. investigated the performance of six different estimators besides maximum likelihood estimator (MLE) for a two-stage adaptive enrichment design for a continuous outcome . Those estimators are empirical Bayes estimator (EBE) , parametric bootstrap estimator , conditional moment estimator (CME) , and UMVCUE with MLE and CME as two hybrid estimators . The hybrid UMVCUE and CME estimator could reduce the bias across all considered scenarios, which the authors recommend, though with the cost of larger RMSE. In this review article, we have given an overview of traditional enrichment and adaptive enrichment designs, outlined their limitations, and described recent extensions and modifications to adaptive enrichment design strategies. Both Bayesian and frequentist perspectives in handling statistical issues of these designs were discussed in detail, along with important considerations for design parameters. Although the adaptive enrichment designs we have reviewed contain theoretical benefits such as early subgroup identification and early decision-making resulting in sample size reduction, we caution that selection and implementation of any of these designs requires acceptance of substantial additional trial complexity, and special consideration of the disease setting, endpoints, and markers at hand. For any of these trial designs to possibly have advantages over a simple randomized design followed by retrospective biomarker-focused analyses, the following should be true: the primary endpoint should be quickly observable relative to the pace of accrual; a sufficiently large sample size to detect moderately-sized subgroup effects of clinical interest must be achievable in a reasonable time frame, and the experimental treatment under study must have sufficiently strong preliminary evidence (e.g., from earlier phase studies) of a mechanism of action related to the candidate biomarker(s). If any of these criteria are not met, one runs the serious risk of conducting a study that is far less efficient than a standard design that is not biomarker-driven. In considering use of any design considered here, a trial biostatistician should meet with trial investigators and stakeholders to discuss the assumptions and requirements of different design options. The statistician should also prospectively understand and quantify the impact of any potential deviations from these assumptions while still in the trial planning stage (e.g., by using simulation studies). Each of the designs we discussed also have associated pros and cons, and are more suitable for application in different settings. To guide selection of a particular design for a particular context, we summarize design attributes (e.g., applicable primary endpoint types, number of biomarkers, decision rules, and other structural differences) as well as pros and cons in Table . For example, if there is no predefined biomarker subgroup and predictive biomarker discovery is required, Xu et al. and Zhang et al.’s proposed designs could be considered . 
Where Bayesian methods for estimation and interim decision-making using utility functions are desired but where final frequentist hypothesis testing is necessary, e.g., for regulatory purposes, the designs by Simon and Simon, Graf et al., or Ondra et al. may be appropriate . Where strong control of Type I error rate is required (e.g., in a later-phase application), designs by Matsui and Crowley, Fisher et al., and Uozumi and Hamada may be referenced . Overall, adaptive enrichment trial designs tend to increase study efficiency while minimizing subsequent study participation among patients showing a low likelihood of benefit based on early trial results . Biomarker-driven designs that reliably identify or validate predictive biomarker relationships and their thresholds with sufficient power to achieve phase II or III objectives continue to be of interest and warrant further development. Designs that make better use of truly continuous (versus dichotomous) marker-efficacy relationships are essential for future research. |
A novel Y-shaper and chopper for small pupil management in cataract surgery: The initial experience | 526acfec-2f6b-4200-9e16-c1086581ea64 | 10229943 | Ophthalmology[mh] | Description of the instrumentClinical applicationsThe instrument is made of stainless steel; at one end is a straight rod, and the other end has a curved chopper with a tip dimension of 0.8 mm which allows for easy entry through the side ports. In scenarios wherein we have moderately dilated pupil (5 mm) with any amount of iris billowing or miosis, it allows for simultaneous stretching of the floppy iris, thus enabling the phaco probe to hold a nuclear fragment and emulsifying the nucleus, whereas the curved chopper allows for easy nucleus chopping and allowing us to manage the small pupil and nucleus emulsification without the use of any pupil-expanding devices. Intra-operative miosis because of FIS: Intra-operative FIS is classified upon the presence of floppy iris stroma, which billows and ripples responding to phaco fluidics with progressive intra-operative miosis, independent of the use of mydriatic agents, and the iris stroma’s tendency to prolapse through the incisions. The Y-shaped chopper allows stretching the floppy iris to the periphery and preventing iris plugging in the phaco probe during aspiration and concomitantly performing nuclear fragmentation without risking anterior capsular tears or PCR and thus prevents the prolapse of the iris through the side ports or main wound . Small Pupil management: The Y-shaped chopper design allows for both capsular and iris stretching to enable the phacoemulsification in eyes in small pupils (<5 mm), allowing easy sculpting, nuclear fragmentation, and chopping of the nucleus fragments into smaller pieces. The rotation of the nucleus and epinuclear plate can also be achieved with ease, thus allowing the surgeon to sail through the cataract surgery without the use of pupil-expanding devices which can cause sphincter damage and post-operative photophobia to the patient . Successful surgical outcomes have been achieved with both mechanical iris dilation and iris retention devices, and these devices add to overall surgical cost and generally require more time in the operating theater than a mechanical pupillary stretch. One of the most important inventions in the history of mechanical pupil expansion was introduction of the iris hooks. Since the very first reports, the technique gained wide popularity all over the world. The advantages of this technique include ease of manipulations and wide availability of the hooks manufactured in different sizes, materials, and designs. However, there are chances of iris sphincter tears and risk of bleeding. It is generally recommended not to extend the pupil over 5.0 mm in size to decrease the chances of iris tissue over-stretching and in turn producing irregular and atonic pupils post-operatively, which can lead to post-operative photophobia and Haloes . Pupil stretching till date has been performed with the help of two instruments (spatulas, Kuglen hook, or similar) introduced through paracentesis incisions located contralateral to each other. However, pupillary stretching maneuvers are more traumatic to the iris and also possibly to the corneal endothelium, but this does not appear to detract from the surgical outcomes. The major drawback is that intra-operatively, the iris can get pulled into the phaco probe and in the corneal wounds, which makes it difficult to perform nuclear fragmentation and simultaneously salvage the iris. 
This problem is alleviated by our Y-shaper and chopper, which allows us to manage FIS and perform nuclear emulsification. In our initial experience, we have not observed any loss of iris tone or permanent iris damage, nor has any patient complained of post-operative photophobia. Significant variations in ocular and systemic co-morbidities require the whole spectrum of pharmacological and surgical strategies to be in the armamentarium of the modern cataract surgeon. The ease of manipulation and the final results vary significantly with different devices. Iris hooks and the Malyugin Ring are the current standard of care for intra-operative mechanical pupil expansion in patients not responding to pharmacological protocols. Some of these methods are associated with bleeding, loss of iris sphincter function, and an abnormal pupil shape post-operatively. The author's initial experience suggests that the Y-shaper and chopper minimizes the risk of intra-operative complications while enabling surgeons to aim at similar surgical outcomes. Further studies are needed to compare the various available techniques and assess the learning curve associated with this instrument. Financial support and sponsorship: Nil. There are no conflicts of interest.
Full vaccination coverage for children aged 12–23 months in Madagascar: Analysis of the 2021 Demographic and Health Survey | 099d1cfe-62db-4dfc-8555-403cd5cfb72a | 11846278 | Vaccination[mh] | Vaccination is among the most impactful and cost-effective public health interventions for combating childhood diseases . Annually, vaccination averts 3.5-5 million deaths caused by vaccine-preventable diseases such as diphtheria, hepatitis B, measles, mumps, pertussis, pneumonia, polio, rotavirus diarrhoea, rubella, influenza, and tetanus . In 2018, approximately 700,000 children died of vaccine-preventable diseases, with nearly 99% of these fatalities occurring in low-and middle-income countries (LMIC) . An estimated 20 million children globally did not receive one or more doses of the diphtheria-pertussis-tetanus (DPT) vaccine in 2022 , even though around 20 million infants (89%) worldwide had received the third dose in the same year . The disruptions caused by the COVID-19 pandemic strained health systems, exacerbating the situation and leading to 22 million children missing their routine first dose of the measles vaccine in 2022, a significant increase from 19 million in 2019 . Notably, the under-five mortality rate in Madagascar remains alarmingly high at 66.3 deaths per 1000 live births , significantly surpassing the global target of 37 per 1000 live births set for 2020 . Madagascar, an island country in sub-Saharan Africa, is inhabited by roughly 30 million people . The Expanded Programme on Immunization (EPI) officially began in the country in 1976, following the World Health Organization (WHO) guidelines to establish a vaccine schedule . Its primary objective has been to confer immunity to children against tuberculosis (Bacillus Calmette–Guérin), DPT, poliomyelitis, and measles. Over the past two decades, additional vaccines have been introduced, including yellow fever (introduced in 1998 for high-risk countries but adopted locally in 1998), hepatitis B – HepB (in 2007), Haemophilus influenzae type B – Hib (in 2008), pneumococcal conjugate vaccine – PCV (in 2012), rotavirus vaccine – RV (in 2014), inactivated polio vaccine – IPV (in 2015), measles (in 1976), and a second dose of measles-containing vaccine second – MCV2 (in 2012) . All vaccines, except the second dose of the MCV, should be administered to children before they reach one year of age . Madagascar’s vaccination program provides fixed and routine service delivery, augmented by two annual maternal and child health weeks introduced in 2006 . Despite these sustained efforts, recent evidence indicates that vaccine coverage in Madagascar has been notably low for all recommended vaccines . According to the WHO/UNICEF Estimates of National Immunization Coverage report, the coverage for the third dose of DTP/HepB and the second dose of MCV was 57% and 32%, respectively, in 2022 , placing the country among those with the greatest disparities in immunisation rates globally . Suboptimal vaccination coverage may result in larger-than-usual outbreaks, a phenomenon referred to as “post-honeymoon” epidemics . Following a prolonged period of minimal measles incidence, insufficient measles vaccination coverage (> 80% by 2017) resulted in an outbreak that affected all 22 regions of Madagascar in September 2018, with more than 100,000 reported cases resulting in approximately 1,000 deaths . 
Also, between 2014 and 2015, 12 cases of Vaccine-Derived Polio Virus (VDPV) were reported across seven regions of the country, confirming weak routine immunisation coverage . LMICs typically exhibit lower vaccination coverage rates than other countries . The low vaccine uptake rate in LMICs is linked to various factors, including insufficient political support for vaccination programs, limited access to healthcare facilities, diminished public awareness, and inadequate education and awareness about vaccines among healthcare workers and caregivers, particularly mothers [ – ]. Vaccine rate disparities also arise due to socioeconomic variations in geographical location, educational attainment, rural-urban residence, sex, cultural beliefs, misconceptions, and vaccine hesitancy that may affect parental decision-making regarding vaccination, among others [ , , , ]. In Madagascar, Clouston et al. found that insufficient infrastructure, the country’s fragmented nature owing to an underdeveloped road network, staffing shortages, lack of energy sources, and limited supply of vaccines appear to contribute to reduced immunisation coverage as well as parental education and wealth status. Limited research has been undertaken in Madagascar regarding the correlation between socio-demographic factors and full vaccination coverage. One of these studies included other countries , and another was conducted before the COVID-19 pandemic . The more recent study by Ramaroson et al. focused on structural, relational, and cultural constraints influencing vaccination coverage. Given the persistently low vaccination rates observed in previous years, falling below the WHO recommendation of at least 90% of all vaccine coverage , and the exacerbated decline in vaccination rates due to the COVID-19 pandemic, conducting a more specific post-COVID-19 analysis becomes essential to bolster vaccination rates. Therefore, this study aimed to identify factors associated with full vaccination coverage among children aged 12–23 months in Madagascar.
Data source, design, and sampling procedure
We used data obtained from the 2021 Madagascar Demographic and Health Survey (MDHS). The MDHS used a cross-sectional study design and was carried out by the National Institute of Statistics in conjunction with the Madagascar Ministry of Health in the 22 regions of the country. The MDHS is a nationally representative survey that collects data on fundamental health indicators, such as mortality, morbidity, family planning service utilisation, fertility, and maternal and child health services such as vaccinations. Data were derived from the MEASURE DHS program ( https://dhsprogram.com/data/dataset/Madagascar_Standard-DHS_2021.cfm?flag=1 ). The country's survey comprises a variety of datasets, including data on men, women, children, births, and households. This study used the children's record dataset (KR file). It comprises a women's questionnaire that measures socio-demographic characteristics of the mothers, information on reproductive health and service use behaviours, as well as information specific to childbirths in the past five years for women between the ages of 15 and 49. The MDHS used a two-stage stratified sampling technique to select respondents for the study. Enumeration Areas (EAs) were randomly selected in the first stage, and households were identified in the second stage. This study included a weighted sample of 2,250 children aged 12–23 months in the final analysis. Figure shows the inclusion and exclusion criteria of the study sample.
Variables
Outcome variable
The outcome variable was the full childhood vaccination status of children aged 12–23 months. According to the WHO recommendation, a fully vaccinated child is one who has received the following vaccines: BCG vaccination against tuberculosis; three doses of DPT vaccine to prevent diphtheria, pertussis, and tetanus; at least three doses of polio vaccine; and one dose of measles vaccine. Data on vaccination coverage were collected through vaccination cards or verbal reports from mothers. Mothers were asked to recall their child's vaccinations if a vaccination card was unavailable or incomplete. The outcome variable had five response categories: no, vaccination date on the card, reported by mother, vaccination marked on the card, and don't know. These were recoded into binary values: mothers who responded "no" were recorded as "0" and labelled "not received", whereas the responses "vaccination date on the card", "reported by mother", and "vaccination marked on the card" were recorded together as "1" and labelled "received the vaccine". Children who received no or partial vaccination were labelled "no", while those who received all vaccines were labelled "yes".
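In code, the recoding described above might look like the following sketch (Python/pandas); the dataframe and column names are illustrative placeholders rather than the actual DHS variable codes, and the handling of "don't know" responses, which is not specified above, is made explicit only as an assumption in the comments.

    import pandas as pd

    kr = pd.read_csv("mdhs2021_kr.csv")  # hypothetical export of the KR file

    # Responses that count as "received the vaccine".
    positive = {"vaccination date on card", "reported by mother", "vaccination marked on card"}
    doses = ["bcg", "dpt1", "dpt2", "dpt3", "polio1", "polio2", "polio3", "measles"]

    def received(response):
        # 1 if the dose was documented on the card or reported by the mother, 0 otherwise.
        # "no" and "don't know" both fall to 0 here; the text does not specify
        # how "don't know" was actually handled.
        return int(response in positive)

    for col in doses:
        kr[col + "_bin"] = kr[col].map(received)

    # A child is fully vaccinated only if all eight doses were received.
    kr["full_vacc"] = (kr[[c + "_bin" for c in doses]].sum(axis=1) == len(doses)).astype(int)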
Explanatory variables
We included sixteen explanatory variables. The explanatory variables were selected from the MDHS dataset based on prior knowledge and published literature [ – , – ]. The variables included child sex (male and female), birth order (1, 2–3, 4–5 and 6 and above), mother's age (15–24, 25–34, and 35–49 years), mother's occupation (not working and working), mother's education (no formal education, primary education and secondary or higher), father's education (no formal education, primary education and secondary or higher), number of children under the age of 5 years (0–1, 2 and 3 or more), wealth index (poorest, poorer, middle, richer and richest), number of living children (1 and 2+), number of antenatal care visits during pregnancy (no visits, less than eight visits (1–7 visits), and eight or more visits), place of delivery (home or other and health facility), postnatal care visits (no and yes), and marital status (never married, married, cohabiting, widowed and divorced). Other variables included the place of residence (urban and rural), distance to health facility (big problem and not a big problem), and access to media—television, radio, and newspaper (no access and access). Table shows the coding scheme of the study variables.
Statistical analysis
Stata software version 13 was used for all the analyses. Descriptive statistics, including percentages, bar charts, and frequency tables, were used to describe the study respondents and to determine the proportion of full vaccination coverage by socio-demographic characteristics. Bivariate analysis was used to show the association between socio-demographic characteristics and full vaccination coverage. Variables with a p-value < 0.25 in the bivariate analysis were retained, and a few variables above this threshold that had shown significant associations in other studies were also considered for adjustment in the multivariable logistic regression. Adjusted odds ratios (aOR) and 95% confidence intervals (CI) were used to assess the strength of the association between full vaccination coverage and the explanatory variables. The statistical significance threshold was set at p < 0.05. Sample weights were applied to compensate for the unequal probability of selection between the geographically defined strata, as well as for non-response. Details of the weighting procedure can be found in the MDHS 2021 report (28). The "svy" command was used to weight the survey data and to account for the complex sampling design of the DHS.
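Outside Stata, the same weighted model can be approximated as in the sketch below (Python, statsmodels); the column names are illustrative stand-ins for the recoded MDHS variables, and clustering on the primary sampling unit is only a rough approximation of Stata's svy variance estimation, not the authors' actual code.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("mdhs2021_children_recoded.csv")   # hypothetical recoded extract
    # DHS sampling weights are stored scaled by 1e6; rescale before use.
    df["weight"] = df["raw_weight"] / 1_000_000

    # Survey-weighted logistic regression approximating `svy: logistic`.
    fit = smf.glm(
        "full_vacc ~ C(child_sex) + C(mother_age) + C(mother_edu) + C(wealth)"
        " + C(place_delivery) + C(anc_group) + C(media_access)",
        data=df,
        family=sm.families.Binomial(),
        freq_weights=df["weight"],
    ).fit(cov_type="cluster", cov_kwds={"groups": df["psu"]})

    aor = np.exp(fit.params)      # adjusted odds ratios
    ci = np.exp(fit.conf_int())   # 95% confidence intervals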
Ethical approval and consent to participate
Ethical clearance was not sought for this study due to the public availability of the 2021 MDHS data. We submitted a project proposal to the DHS program. Afterwards, permission was obtained to download and use the children's (KR) dataset. No names of individuals or household addresses are in the data files.
Factors associated with full childhood vaccination coverage of children aged 12–23 months
In Table , the multivariable analysis showed that male children were more likely to be fully immunised compared to females (aOR: 1.24; 95% CI: 1.01–1.53). Mothers aged 35–49 (aOR: 1.69; 95% CI: 1.08–2.64) were more likely to have their children fully immunised compared to women between 15 and 24 years. Children born to mothers with secondary or higher education were more likely to receive full vaccination than those whose mothers had no formal education. Mothers within the middle wealth index category (aOR: 1.48; 95% CI: 1.04–2.12) were more likely to have their children fully vaccinated compared to the poorest category. Mothers who were working (aOR: 1.45; 95% CI: 1.06–1.98) were more likely to fully vaccinate their children compared to those who were not. Compared to mothers who delivered their babies at home or other places, mothers who delivered their babies at a health facility were more likely to vaccinate their children fully (aOR: 1.57; 95% CI: 1.22–2.02). Mothers who had fewer than eight antenatal care visits (aOR: 3.63; 95% CI: 2.30–5.72) and those with eight or more visits (aOR: 1.20; 95% CI: 1.35–6.51) were more likely to have their children fully vaccinated compared to those with no antenatal care visits. Mothers exposed to media were more likely to fully vaccinate their children than their counterparts (aOR: 1.65; 95% CI: 1.26–2.16).
Descriptive characteristics of the respondents
A total of 2,250 women with children aged 12 to 23 months were included in the analysis. As shown in Fig. , the overall prevalence of fully vaccinated children in Madagascar in 2021 was 48.9%. The vaccination coverage for BCG, DPT3, Polio 3 and measles was 78.1%, 68.4%, 58.6% and 63.9%, respectively. Table presents the baseline characteristics of the study population. More than half (52.1%) of the children were male, and more than one-third (39.6%) were of birth order 2–3. Regarding maternal and household characteristics, approximately half of respondents were between the ages of 15 and 24 years, and 45.0% of respondents had primary school education. Furthermore, almost half (44.9%) of the respondents' partners also had primary school education. Most respondents (47.1%) had between 0 and 1 child under five years, and about two in three respondents (67.8%) were in a marital union. Approximately a quarter were grouped under the poorest wealth index. Regarding maternal care, a majority (87.2%) had fewer than eight ANC contacts, and almost two-thirds (65.7%) had a postnatal check-up of their babies within two months of birth. Further, most respondents (84.2%) were living in rural residences, two-thirds indicated that distance to health facilities was not a big problem, and more than half (54.5%) had access to media.
Proportion of children 12–23 months vaccinated with essential vaccines
Figure shows the vaccination coverage for children aged 12–23 months in Madagascar. Almost half (48.9%) of the children had received full vaccination. There were variations in polio and DPT vaccination among the children. Vaccination coverage for BCG, third-dose polio, third-dose DPT, and measles was 78.1%, 58.6%, 68.4%, and 63.9%, respectively.
Bivariate analysis of full vaccination coverage among children aged 12–23 months
Similar proportions of male and female children had full vaccination coverage. Likewise, vaccination coverage was similar among mothers across the different age groups, as shown in Table . Full vaccination coverage was higher among mothers with secondary or higher education (61.9%). Partners with secondary or higher education had a greater proportion (60.2%) of fully vaccinated children. Full vaccination coverage was higher among children who lived in urban areas (58.7%) and belonged to the richest wealth index (64.1%). Higher proportions of full vaccination were recorded among children whose mothers were never married (54.3%), those whose place of delivery was a health facility (61.1%), and those with no big problem getting to the health facility (51.9%). Also, full vaccination coverage was 53.6% among women who had eight or more antenatal care contacts and 55.1% among those who attended postnatal care. Additionally, a higher proportion of full vaccination was observed among mothers who had three or more children (41.4%) below the age of five years. Full vaccination coverage was high among women who had access to media (59.7%), those working (49.3%), and those who had only one living child (56.7%).
This study examined the full vaccination coverage and its associated factors among children aged 12–23 months in Madagascar. We found that the proportion of full vaccination coverage among the children was 48.9%. Factors associated with full vaccination coverage were the child's sex, mother's age, mother's education, place of delivery, number of antenatal care visits, wealth index, mother's working status, and media access. In this study, the coverage of full childhood vaccination stood at 48.9%, indicating that most children (51.1%) did not receive all recommended childhood vaccinations. The proportion of full vaccination coverage observed in our study is similar to findings from studies conducted in Ethiopia, Uganda, and Mozambique. However, our results exceed previous studies reporting 18.8% in Guinea and 33.3% in Ethiopia. Additionally, our findings are lower than those of other studies reporting 85% in Burundi, 93.1% in China, and 85.6% in sub-Saharan Africa. Although our study did not investigate specific reasons behind the low vaccination rate among children, prior studies in Madagascar have linked this issue to vaccine stockouts, which hinder access to vaccination services, and to factors such as a shortage of adequately trained personnel to administer the vaccines and unreliable access to electricity for maintaining the cold chain. Ill-treatment of women by medical professionals, such as being hostile, demeaning, or violent, could also have resulted in a decrease in the uptake of vaccination services. Evidence shows that health providers punished women who forgot their child's vaccination card, skipped an appointment, or had a dirty or poorly dressed child. Mothers who experience this abuse from their providers may feel embarrassed and be discouraged from vaccinating their children. Such experiences may also exacerbate postpartum depression in mothers, which could have detrimental effects on their health. Nevertheless, the differences in vaccination coverage between countries could be attributable to socio-cultural differences, changes in healthcare coverage and legislation, discrepancies in sample size, and differences in vaccination access among the study settings. The study found that male children were more likely than female children to have received all recommended vaccinations, possibly due to a preference for male children in some cultures. However, contrasting findings from a systematic review in Madagascar indicated similar vaccination coverage in boys and girls. Nevertheless, three of the four articles reviewed noted slightly higher vaccination rates among girls than boys. According to Fang et al., gender-based differences in health status, and the traditional notion of "male preference", have declined or disappeared in recent times. The results may differ based on the study setting, and gender preference appears to have a mixed impact on vaccination status. There has been some shift away from son preference and a possible decrease in gender bias, particularly in households where only one child is permitted. However, we cannot ignore sex differences in vaccine access because, in some places, male preference still has a significant impact on children's health. Mothers aged 35 to 49 were more likely to fully vaccinate their children than those aged 15 to 24. This is consistent with studies reported in Nigeria and East African countries.
This phenomenon could be attributed to the increased accessibility of maternal health services as mothers age, including antenatal care visits, supervised deliveries, and postnatal care visits, which serve as platforms for introducing mothers to child vaccination . Furthermore, with advancing age, mothers tend to acquire a more comprehensive understanding of childhood diseases and the importance of vaccination in preventing them . The study revealed a significant association between full vaccination coverage and maternal educational attainment. Children of mothers with secondary or higher education had a higher likelihood of being fully vaccinated than those without formal education. This may be attributed to educated mothers being more informed about vaccination benefits, having access to relevant information, and ensuring their children adhere to the recommended vaccine schedule. Previous studies have shown that vaccination inclination increases with educational attainment . Previous research conducted in Nigeria , India , and Pakistan supports this finding. However, a study in Ghana found no association between full vaccination and the educational status of the mother . Mothers who were working demonstrated a significantly higher likelihood of fully vaccinating their children than those who were not. This finding aligns with existing literature in rural Nigeria, which suggests that employment may provide some psychological motivation to access optimal child care, including vaccination . It could also result from the fact that employment may enhance a mother’s financial capacity, increasing access to healthcare services, including vaccination programs. Employment may also promote greater exposure to health education and awareness campaigns, as workplaces can serve as channels for disseminating public health information. Amoah et al. reported in their study in sub-Saharan Africa that maternal employment, education, and decision-making capacity were positively associated with the full vaccination of children . Moreover, employed mothers might also experience social influences, such as peer encouragement, which can positively affect their health-seeking behavior, including vaccination uptake. To enhance vaccination coverage and reduce disparities, efforts should focus on addressing social determinants of health, ensuring access to quality healthcare, and removing barriers to maternal health-seeking behaviors . Our study found that middle-class mothers were more likely to fully vaccinate their children than the poor. This highlights the role of socioeconomic factors in improving vaccination uptake. Middle-class families may benefit from better access to healthcare services, education, and resources that facilitate maternal health-seeking behaviors . However, the association was not significant among the wealthiest categories, suggesting that factors like complacency, competing priorities, or misperceptions in health systems might influence their decisions . These findings emphasize the need for tailored public health strategies to address barriers across all economic groups. Similar to previous research [ , , ], our study also shows that complete childhood vaccination was more likely to be received by children born in a health facility than those born at home or in other locations. Likewise, a study in India found that home-born infants were either fully or partially unvaccinated . Also, in line with our findings, Pandey et al. 
demonstrated that approximately 75% of infants born in medical facilities had received all recommended vaccinations . Infants born in hospitals may have access to critical medical services necessary for vaccination programs . Additionally, mothers who give birth in a medical facility are more likely to receive vaccine instruction from medical professionals. As a result, their children are more likely to acquire subsequent vaccinations since they are more aware of the risks and benefits of immunisation . For example, children in a hospital have access to the first dose of the BCG vaccine, which is given shortly after delivery, and parents will be taught about subsequent vaccinations . Our study showed a significant association between the number of antenatal care visits and full childhood vaccination. In comparison to mothers who had no antenatal care visit, those who had fewer than eight contacts and eight contacts or more had a higher chance of having their children fully vaccinated. This finding is consistent with previous research conducted in Zimbabwe and Northwest Ethiopia , which also found that children whose mothers did not have antenatal care contact with medical personnel were less likely to have received recommended vaccinations. This is attributed to the fact that mothers who do not receive antenatal care are less likely to adhere to antenatal care practice guidelines, including the administration of required vaccines during pregnancy. Consequently, they exhibit reduced utilisation of health services and adherence to vaccination guidelines for their children. Conversely, children whose mothers have received antenatal care services, benefitting from their familiarity with healthcare systems and medical professionals’ support have a higher likelihood of receiving all recommended vaccinations . Another factor linked to full childhood vaccination coverage is media access. Compared to their counterparts, women exposed to the media had a higher likelihood of vaccinating their children in full. This result is consistent with research from East Africa , Zimbabwe , and Ethiopia , as well as a previous study in Madagascar . Media exposure is the most effective way to reach the population and encourage better healthcare-seeking behavior . According to Lee et al. , media exposure is essential for influencing public opinion and disseminating information regarding childhood vaccination programs. Exposure to mass media should be stressed due to its strong positive correlation with vaccination completion, as demonstrated in an earlier study . Based on the successful outcomes of preceding studies, information and sensitisation campaigns and messages should be carefully designed and distributed through radio, newspapers, and television stations .
This research has several strengths. First, the study's data came from the most recent, nationally representative, and large population-based survey sample covering every area and administrative division in Madagascar. Secondly, there is a standard design across the DHS surveys, with standard variables allowing cross-context comparisons. The results could therefore be relevant to other developing countries. It is also important to note the limitations of this study's conclusions. First, the analysis employed the potential predictor variables available in the 2021 MDHS. However, factors such as the quality of vaccination services, which are not included in the DHS dataset, are probably important drivers of children receiving all recommended vaccinations. Secondly, recall bias is possible because the data on full childhood vaccination were collected retrospectively (via vaccination cards and maternal self-report). Third, drawing inferences about causal relationships between the explanatory variables and full childhood vaccination is difficult because the analyses were based on data from a cross-sectional survey. As a result, it is essential to confirm the validity of the observed associations using data collected longitudinally over time.
The overall prevalence of full childhood vaccination was 48.9% in Madagascar. Predictors such as child’s sex, maternal age, maternal education, place of delivery, number of antenatal care visits, wealth index, mother’s working status and media access were significantly associated with full childhood vaccination. Our findings indicated that full childhood vaccination coverage falls short of the WHO’s EPI coverage objective of at least 90%. It is recommended that the Ministry of Health in Madagascar collaborate with regional health departments and local administrative levels to achieve the recommended coverage. To address systemic challenges in vaccination coverage in Madagascar, key actionable points include improving vaccine supply chains through better logistics and infrastructure, enhancing training for healthcare workers to ensure respectful care to caretakers during vaccination sessions, and increasing access to media for public health campaigns that raise awareness about vaccination importance. Strengthening antenatal care services with integrated vaccination education, promoting the use of immunization cards, and engaging community leaders to mobilize support for vaccinations are essential. Additionally, conducting regular assessments of vaccination programs can help identify barriers and tailor interventions to improve uptake and public health outcomes. They should also inform mothers about the various vaccination plans and the need to keep the vaccination cards through media channels, such as radio, television, community information systems and even phone calls.
|
Teaching of manual cataract extraction in ophthalmic surgical training programmes | 4f40b26f-7bb7-4893-844b-be211a6b4fb9 | 11449948 | Ophthalmology[mh] | Cataract, which is opacification of the crystalline lens in the eye, is the most common cause of visual impairment globally . It may eventually lead to blindness if not treated and carries a significant visual impairment burden for patients. Treatment options are exclusively surgical, involving extraction and replacement of the native lens with an intraocular lens implant. This is a permanent solution and offers restoration of vision. Cataract surgery is the most commonly performed procedure in ophthalmology and is one of the most cost-effective surgical interventions in terms of quality-of-life improvement . Extraction of a cataract can be manual, which involves removal of the cataract material without emulsification, or it may be removed by emulsification of the lens material (phacoemulsification). Over time, methods of cataract extraction have evolved in synchrony with developing technology. Surgical training in cataract surgery techniques has changed to mirror this shift. Therefore, newer trainees tend to know newer techniques exclusively. For several decades, phacoemulsification has evolved as the predominant method for cataract extraction worldwide, particularly in more economically developed countries. Phacoemulsification involves the creation of a self-sealing wound and use of an ultrasonic probe to divide, emulsify and aspirate the cataractous lens. The advantages of this method are numerous and inarguable. They include safer and faster cataract extraction, reduced risk of post-operative astigmatism , lower rates of infection and wound dehiscence and speedier visual recovery for patients. Phacoemulsification is considered the safest and preferred method of cataract surgery in the developed world. Prior to phacoemulsification, a method of manual cataract extraction called extracapsular cataract extraction (ECCE) predominated and was therefore the method taught to surgical trainees in the past. ECCE typically involves a 12-mm incision with expression of the cataractous lens, leaving an intact posterior capsule and placement of an intraocular lens. It necessitates the use of sutures to close the wound and requires more operative time when compared to phacoemulsification, and post-operative visual recovery is longer. It does not rely on ultrasonic instrumentation and expensive technology unlike phacoemulsification surgery. Another technique for manual cataract extraction is MSICS (manual small incision cataract surgery) which is a modification of ECCE. The incision is smaller, self-sealing and does not require sutures. Extraction is manual with expression of the lens through the wound, similar to ECCE. MSICS is a technically more challenging procedure and is widely used in developing countries where the removal of sutures proves challenging as patients are often remote from ophthalmic centres . Both phacoemulsification and manual cataract extraction have excellent visual outcomes . In cases of patients with very dense nuclear cataracts, corneal opacification, significant zonular loss or dialysis, or lens subluxation, a planned manual cataract extraction method may be indicated. In cases of intraoperative capsulorhexis complications, ruptured posterior capsules, and lens dislocation into the vitreous, a conversion from phacoemulsification to manual cataract extraction may be necessary. 
Successful conversion at the time of the primary surgery will avoid the necessity for a second operative procedure. With the advent of phacoemulsification, surgical training programmes largely exclude manual cataract extraction from the formal curriculum. We aimed to evaluate the current exposure, experience and opinions of trainees and trainers in Ireland regarding manual cataract extraction and its place in formal surgical training programmes. We developed a survey to assess the status of manual cataract extraction, including ECCE and MSICS exposure, amongst Irish surgical trainees. We wished to assess the extent to which ophthalmologists consider ECCE/MSICS an important skill for trainees to acquire. A survey was designed and tailored for consultants and trainees. After development, its content was approved by three consultant ophthalmologists. An electronic version was distributed to Irish ophthalmologists via the Irish College of Ophthalmologists mailing list, which included approximately 340 members of the college. Our survey primarily focused on the manual cataract extraction technique of ECCE, with a single question in either survey directed at MSICS specifically. Nineteen of 33 (57%) ophthalmic surgical trainees on programme and 29 of 55 (55%) consultant ophthalmic surgeons completed the survey. Of the trainees surveyed, 12 of 19 (63%) had seen an ECCE procedure performed; of these, 5 of 12 (42%) assisted and 7 of 12 (58%) were the primary surgeon. Of the ECCE procedures witnessed by trainees, 9 of 12 (75%) were planned, and 3 of 12 (25%) were not. Five of 19 (26%) trainees had exposure to ECCE procedures in wet lab scenarios. Fourteen of 19 (74%) stated they would not feel confident converting from phacoemulsification to ECCE independently if required. Sixteen of 19 (89%) trainees believe that ECCE procedure training should be included in their formal surgical training. MSICS exposure occurred in 3 of 19 (15%) trainees, with 17 of 19 (89%) trainees stating they believe it should be included in formal training. Of the consultants surveyed, 15 of 29 (52%) had over 15 years' experience as a consultant. Twenty-one of 29 (72%) had performed an ECCE procedure as the primary surgeon while a consultant, and 6 of 29 (20%) had done so as a trainee. In total, 2 of 29 (7%) consultants had not performed an ECCE procedure as the primary surgeon at any stage in their career. Of the remaining 27 consultants who had performed ECCE previously, 9 of 27 (32%) had last performed ECCE over 10 years ago. Eight of 27 (30%) consultants had performed an ECCE within the last 1 to 3 years. The remaining 10 of 27 (37%) consultants had performed the surgery within the last 3 to 10 years. Twenty-one of 29 (72%) stated they would feel confident converting to ECCE if required, and 27 of 29 (93%) stated the operating theatres they worked in would be equipped to convert from phacoemulsification to ECCE. Nine of 29 (31%) consultants had supervised a trainee in performing an ECCE or MSICS procedure. Six of 29 (21%) consultants had previously performed an MSICS procedure. Nineteen of 29 (65%) consultants believe some form of formal training in ECCE should be part of the surgical training programme. Our survey highlights how training has shifted away from manual cataract extraction as phacoemulsification has come to dominate cataract surgery.
While phacoemulsification has undoubtedly been a great advance in modern cataract surgery, it is important to recognize the continued relevance of manual cataract extraction techniques such as ECCE and MSICS particularly in certain surgical situations. Our results mimic training programme trends in the United States. In a similar survey of trainees, Henderson et al. showed that 91% of their responders believed there are instances where ECCE may be the preferred procedure and reported a 40% decrease in ECCEs being performed by residents from 2005 to 2010. The US residents in this study performed approximately three to four ECCE procedures throughout their training in 2010 . A more recent report of ophthalmic training programmes in the US showed that in 92% of programmes, a wet laboratory component was part of the surgical curriculum . Our survey highlights the lack of experience trainees have regarding ECCE or MSICS and reveals a potential gap in ophthalmic surgical training opportunities. Embracing a comprehensive approach to cataract surgery training should allow for tailored interventions and ensure that end-of-training graduates possess the skillset to deal with less common intraoperative scenarios at the time of cataract surgery. In practice, planned formal teaching for conversion to ECCE is impractical as the need is unpredictable and relatively rare. Therefore, there may be a potential value in wet lab simulated training for trainees. As with all surveys, this study may not accurately reflect the practice of all ophthalmologists in Ireland or all programme trainers. However, it does reflect the majority opinion that some experience of manual cataract extraction techniques retains importance in ophthalmic surgical training. |
A Comparative Analysis of the Liver Retraction with Long Surgical Gauze in Three-Port Sleeve Gastrectomy and the Four-Port Nathanson Retractor Technique | c763b5f1-c6a4-4943-a2d7-0b8e35113b7c | 11836130 | Laparoscopy[mh] | Laparoscopic sleeve gastrectomy (LSG) has become one of the most commonly used surgical methods for the treatment of obesity worldwide [ – ]. One of the key challenges in laparoscopic surgery is the safe management of surrounding organs and tissues. To achieve this, surgeons typically use an assistant or a fixed retractor to carefully retract nearby organs . In obesity surgery, retracting the left lobe of the liver is essential to provide a clear view of the diaphragm and upper stomach region, especially during dissection near the angle of His, a crucial area for reducing the risk of surgical complications. However, an enlarged liver can frequently obstruct this region, making the surgery more difficult . One commonly used method for liver retraction is the Nathanson retractor (NR) (Cook® Medical, USA). However, this technique requires an additional incision and trocar entry and can lead to complications such as liver damage, hematoma, necrosis and epigastric pain due to prolonged pressure on the liver [ – ]. This study investigates a three-port sleeve gastrectomy technique using a long surgical gauze (SurG) (50 × 5 cm) for liver retraction instead of the NR. The primary aim of this study is to develop a technique that eliminates the need for an additional incision and trocar while reducing liver-related complications and postoperative elevations in liver enzymes. We hypothesized that this technique, which requires fewer trocars and incisions, would result in less postoperative pain, earlier mobilization, and a reduced need for analgesics on the first day compared to the standard four-port Nathanson retractor method. Additionally, we expect that the use of a SurG for retraction will decrease postoperative pain levels and accelerate the recovery process for patients. In conclusion, this study aims to contribute to reducing retractor-related complications in obesity surgery and make the surgical process smoother with fewer trocars and incisions. This approach has the potential to enhance patient comfort and support faster postoperative recovery, offering surgeons a minimally invasive alternative.
Study Population and Setting
This retrospective study was conducted at Tinaztepe University Galen Hospital and Egesehir Hospital between January 2023 and December 2023; patients who underwent LSG were divided into two groups based on the method of liver retraction used. We retrospectively analyzed data collected prospectively from the patient populations in which we applied these two methods. Our clinical practice routinely used the NR for liver retraction in LSG until May 2023. However, we hypothesized that we could achieve the needed retraction with a combination of a SurG and positional changes. Since June 2023, we have been using the SurG technique for retraction of the liver. Inclusion criteria for the study were age over 18 years, complete data, and provision of informed consent. Exclusion criteria were missing data, having undergone other bariatric procedures, and age under 18 years. This study was conducted in full compliance with the ethical standards and protocols approved by the Ethics Committee of the tertiary health institution (1764.1784).
Study Groups
Patients were divided into two groups according to the method of liver retraction used during LSG. One group underwent liver retraction using the NR, while the other group utilized SurG for liver retraction. Both groups were compared for postoperative outcomes, including pain levels, liver enzyme changes, and recovery. Both groups followed Enhanced Recovery After Surgery (ERAS) protocols.
Surgical Technique
All patients were evaluated preoperatively with gastroscopy. Sleeve gastrectomy procedures were performed using a laparoscopic approach. Initially, trocars were placed in the abdominal region. In our clinical practice, for the first trocar entry, the camera port is placed 10–12 cm below the xiphoid process and to the left of the midline. After the incision, dissection is performed with a finger down to the fascia. The fascia is held with two towel clamps, and after insufflation of the abdomen with a Veress needle, a 12-mm trocar is inserted. In the three-port technique, a 10-mm trocar was inserted above the umbilicus, a 12-mm trocar in the right upper quadrant, and a 5-mm trocar in the left upper quadrant. After obtaining an adequate view of the abdominal cavity, the patient was placed in a reverse Trendelenburg and right-tilted lateral decubitus position. This position helps rotate the liver to the right, improving the visibility of the surgical field (Fig. ). Then, a long surgical gauze was shaped into a ball and placed between the liver and stomach, approximately 2 cm away from the hiatus, to retract the liver (Fig. ). The gauze is inserted through a 12-mm trocar and then shaped into a ball. In the four-port technique, an additional 5-mm trocar was inserted in the subxiphoid area, and the Nathanson retractor was used for liver retraction. In both techniques, the operative view scoring system, ranging from 1 to 5, was used to evaluate the visualization of the gastroesophageal junction (GEJ), angle of His, lesser curvature (LC), and greater curvature (GC), with 1 representing the poorest view and 5 representing optimal visibility. In both methods, the intra-abdominal manipulations proceeded similarly, and the stomach was carefully resected using staplers. Along the staple line, continuous sutures were placed using 3/0 absorbable barbed sutures to secure the staple line (Fig. ). The fascia of the trocar site and camera port, where the stomach was extracted, was closed using an endoscopic fascia closure device.
Data Collection
Data collected included demographic details, body mass index (BMI), hepatosteatosis (assessed by ultrasound), comorbidities, abdominal operation history, duration of surgery, staple line leaks, staple line bleeding, deep tissue infections, deep vein thrombosis, instances of liver laceration, trocar-induced hemorrhage, subxiphoid trocar site infections, and liver enzyme levels both preoperatively and at 24 and 48 h postoperatively. The laboratory parameters measured were aspartate transaminase (AST) with normal levels of 1–35 U/L, alanine transaminase (ALT) with normal levels of 1–34 U/L, C-reactive protein (CRP) with normal levels up to 5.0 mg/L, alkaline phosphatase (ALP) with normal levels between 40 and 130 IU/L, and gamma-glutamyl transferase (GGT) with normal levels between 5 and 45 U/L. The analysis was conducted with reference to established normal ranges for each parameter. The length of hospital stay was also assessed for both groups to determine the average hospitalization duration. Pain levels were evaluated using the Visual Analog Scale (VAS) at various time points, including preoperatively, at 6, 12, 24, and 48 h postoperatively, and on the 10th postoperative day. VAS scores, surgical hematomas, bleeding, and postoperative nausea and vomiting (PONV) are routinely evaluated independently of the surgical procedure. These parameters are systematically assessed for every surgery by clinical ward and outpatient clinic nurses and are documented in patient records.
Statistical Analysis
Baseline clinical data were analyzed using the t-test or Mann–Whitney U test for continuous variables and Fisher's exact test or the chi-square test for categorical variables. SPSS version 22.0 (SPSS Inc., Chicago, IL, USA) was used for data analysis. Descriptive statistics (mean, standard deviation, median, frequency, percentage, minimum, maximum) were applied to evaluate the data. The one-way ANOVA test was used to compare normally distributed quantitative variables between the groups. A p-value of less than 0.05 was considered statistically significant.
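The group comparisons described above can be sketched as follows (Python, SciPy); the column names are illustrative and the normality-based switch between tests mirrors the stated approach rather than the authors' actual SPSS workflow.

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("lsg_cohort.csv")          # hypothetical export of the study dataset
    nr = df[df["group"] == "NR"]
    surg = df[df["group"] == "SurG"]

    def compare_continuous(col):
        """Welch t-test when both groups look normal (Shapiro-Wilk), else Mann-Whitney U."""
        a, b = nr[col].dropna(), surg[col].dropna()
        normal = stats.shapiro(a).pvalue > 0.05 and stats.shapiro(b).pvalue > 0.05
        result = stats.ttest_ind(a, b, equal_var=False) if normal else stats.mannwhitneyu(a, b)
        return result.pvalue

    def compare_categorical(col):
        """Chi-square test; fall back to Fisher's exact for sparse 2x2 tables."""
        table = pd.crosstab(df["group"], df[col])
        chi2, p, dof, expected = stats.chi2_contingency(table)
        if table.shape == (2, 2) and (expected < 5).any():
            _, p = stats.fisher_exact(table)
        return p

    print(compare_continuous("vas_6h"))         # e.g. pain at 6 hours
    print(compare_categorical("liver_laceration"))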
Outcome Measures
Primary Outcomes
The main endpoints of this study were the reduction of liver-related complications, including ischemia and necrosis, when using the SurG versus the NR. The other key outcome was whether or not SurG reduces postoperative pain as measured by the VAS at different time points.
Secondary Outcomes
Secondary outcomes included improvement in postoperative recovery, early mobilization, and lower consumption of analgesics. Cosmetic advantages regarding trocar incisions, postoperative complications such as liver lacerations, hematomas, and infections, and the quality of the operative view were also assessed. Lastly, the duration of surgery and the duration of hospital stay were compared between the two groups.
A total of 341 patients were analyzed, but some of these patients were excluded for various reasons. A total of 302 patients who underwent LSG between January 2023 and December 2023 were included in the study. The patient selection flowchart is shown in Fig. . The mean age was 35.65 ± 11.11 years in the NR group and 36.45 ± 10.39 years in the SurG group ( p = 0.524). The sex distribution showed no significant difference ( p = 0.645) with 75% female in the NR group and 79.5% in the SurG group. The BMI range for the operated patients was between 31 and 58.8. BMI was also comparable between the two groups (45.07 ± 5.35 kg/m 2 for NR and 45.98 ± 4.73 kg/m 2 for SurG; p = 0.121). There was no significant difference in the distribution of hepatic steatosis grades between the two groups ( p = 0.996). The presence of diabetes ( p = 0.278), asthma ( p = 0.563), hyperlipidemia ( p = 0.254), fibromyalgia ( p = 0.342), anxiety disorders ( p = 0.235), and HbA1c ( p = 0.347) was statistically non-significant. However, the incidence of hypertension was significantly higher in the SurG group (46.6%) compared to the NR group (34.6%; p = 0.023). No significant difference was found in the operative time between the two groups, with the NR group averaging 71.24 ± 9.73 min and the SurG group averaging 69.50 ± 9.03 min ( p = 0.110) (Table ). Postoperative complications, such as staple line bleeding and infections, were rare and did not significantly differ between the groups. Postoperative bleeding occurred in 0.6% of the NR group and 2.1% of the SurG group ( p = 0.287). Deep tissue infection occurred in 1.9% of the NR group and 1.4% of the SurG group ( p = 0.531). Liver lacerations, however, were significantly more frequent in the NR group (3.8%) compared to none in the SurG group ( p = 0.018). Additionally, retractor-related injuries, such as epigastrica superior artery-vein injury (7.1% in NR group, 0% in SurG group, p = 0.001) and subxiphoid trocar site hematomas (6.4% in NR group, 0% in SurG group, p = 0.001), were observed exclusively in the NR group. The mean operative view score was significantly higher in the NR group (4.31 ± 0.55) compared to the SurG group (4.10 ± 0.43) ( p < 0.001). The mean operation time was slightly longer in the NR group (71.24 ± 9.73 min) compared to the SurG group (69.50 ± 9.03 min), but this difference was not statistically significant ( p = 0.110). PONV were similar between groups (30.8% in NR and 33.6% in SurG, p = 0.346). However, postoperative mobilization occurred significantly earlier in the SurG group (5.63 ± 1.21 h) compared to the NR group (6.85 ± 1.90 h; p < 0.001). Regarding postoperative analgesic use, the SurG group required significantly less tramadol (105.82 ± 68.10 mg vs. 172.12 ± 70.53 mg, p < 0.001) and pethidine hydrochloride (5.48 ± 24.29 mg vs. 26.60 ± 43.78 mg, p < 0.001) in the first 24 h postoperatively. The length of hospital stay did not significantly differ between the groups (2.78 ± 0.05 days in NR vs. 2.66 ± 0.05 days in SurG, p = 0.129). Pain levels, measured using the VAS, showed that the SurG group had significantly lower pain scores at the 6th hour (2.63 ± 1.22 vs. 4.05 ± 1.28, p < 0.001), 12th hour (2.08 ± 0.78 vs. 3.20 ± 0.92, p < 0.001), and 24th hour (1.66 ± 0.79 vs. 2.01 ± 0.96, p = 0.001) postoperatively. There were no significant differences at the 48th hour and 10th postoperative days (Table ). Postoperatively, AST, ALT, and CRP levels were significantly lower in the SurG group compared to the NR group. 
At 24 h, AST levels were 30.65 ± 11.65 U/L in the NR group and 27.36 ± 14.50 U/L in the SurG group ( p = 0.030), while at 48 h, the levels were 25.85 ± 7.85 U/L in the NR group and 22.21 ± 13.21 U/L in the SurG group ( p = 0.004). ALT levels at 24 h were 31.31 ± 5.79 U/L in the NR group and 26.57 ± 5.30 U/L in the SurG group ( p < 0.001), and at 48 h, they were 28.70 ± 5.10 U/L in the NR group and 25.67 ± 5.35 U/L in the SurG group ( p < 0.001). CRP levels were also significantly lower in the SurG group at 24 h (21.24 ± 7.17 mg/L vs. 24.57 ± 6.32 mg/L, p < 0.001) and at 48 h (13.03 ± 6.26 mg/L vs. 21.24 ± 7.17 mg/L, p = 0.015). GGT and ALP levels did not differ significantly between the groups (Table ).
This study assessed the potential benefit of using SurG for liver retraction during LSG. In these operations, ensuring a clear view of the gastroesophageal junction and the angle of His is only possible through the safe retraction of the left lobe of the liver . For this purpose, mechanical retractors such as the NR are frequently used. However, it is known that the prolonged pressure of the NR on the liver can lead to complications such as hematoma, necrosis, and segmental atrophy . In this study, we examined whether the results of the LSG technique using a SurG instead of NR could reduce the risk of these complications. In our study, it has been demonstrated that the SurG offers several advantages over the traditional NR method. Liver retraction is of critical importance in bariatric surgeries and must be performed carefully to minimize complications. The SurG has been preferred due to its ability to avoid the need for an additional trocar and the lack of prolonged pressure on the liver. This technique has shown positive outcomes, such as reduced postoperative pain, decreased analgesic use, and lower postoperative liver enzyme levels. The findings indicate that the gauze method not only enhances patient comfort but also improves surgical outcomes. Studies using the NR have reported that continuous pressure on the left lobe of the liver can lead to complications such as hematoma and necrosis . In contrast, in our study, no such stress or complications were observed with the SurG. The SurG reduces the risk of complications by applying less pressure on the liver, and the absence of a need for an additional trocar not only simplifies the surgical procedure technically but also provides better cosmetic results for patients. This demonstrates that the method is a safe alternative that enhances patient comfort. In our study, it was found that postoperative pain scores were significantly lower, especially in the first 24 h, which resulted in reduced use of analgesic medications. The use of tramadol and pethidine was significantly less in the SurG group. Similarly, previous studies have reported that the NR method increases pain scores and that additional incisions raise the pain levels during the postoperative period . In a study conducted by Shinohara et al., it was reported that an alternative liver retraction method, which could replace NR, was effective in reducing postoperative pain . Likewise, Sakaguchi et al. noted that new liver retraction techniques contributed to better postoperative pain control . In our study, VAS pain scores were also found to be lower in the SurG group at 6, 12, and 24 h. These findings support that the gauze method is a more advantageous technique both in terms of patient comfort and the recovery process. In the literature, it has been reported that the use of NR leads to an increase in postoperative liver enzyme levels, which is attributed to the continuous pressure applied to the liver . In our study, AST, ALT, and CRP levels were found to be statistically lower in the SurG group on the 1st and 2nd days compared to the NR group. This finding indicates that the gauze method reduces the pressure on the liver and causes less liver damage. Similarly, in a study by Goel et al., it was noted that the use of NR increased liver transaminase levels and was associated with a higher risk of postoperative complications . 
However, additional research with larger sample sizes and extended follow-up is necessary to assess the wider clinical effects of these findings, including whether the observed reduction in liver enzymes has any clinical significance. It was also found that postoperative mobilization occurred more quickly in the SurG group, which is more compatible with ERAS protocols. Early mobilization accelerates the recovery process and reduces potential complications. Studies using the NR have reported prolonged mobilization times, leading to delayed recovery. Thorell et al. emphasized that faster mobilization, a key component of ERAS protocols, improves postoperative outcomes and reduces hospital stay. Similarly, Demirpolat et al. highlighted the importance of early mobilization in enhancing recovery and minimizing complications in bariatric surgery. These findings suggest that the gauze method is more suitable for ERAS protocols and improves patient care in the postoperative period. The gauze method has certain limitations. First, the patient's position differs from the standard sleeve gastrectomy position; the right lateral decubitus position may require an adjustment period for the surgeon and the team. This position can narrow the surgical field, particularly during the dissection of the greater curvature and omentum. Additionally, the method may limit the field of view and could require some experience to manage effectively. In super-morbidly obese patients, anatomical factors could further restrict visibility, and the applicability of this technique may be limited. We do not yet have sufficient experience in this area. Nevertheless, while the operative view score in the SurG group was statistically lower than in the NR group, it still provided an adequate view for sleeve gastrectomy. This study did not assess long-term outcomes such as GERD or intrathoracic migration, and its relatively small sample may weaken the generalizability of the findings. Furthermore, the study was conducted by the same surgical team, which may introduce potential limitations, including selection bias. The technique may not be suitable in cases of large left lateral liver lobes, BMI > 50 kg/m², gastric bypass, or concurrent hiatal hernia repair. Although no incident of retained gauze was encountered in this study, several such cases have been reported in the literature and must be considered a potential risk. Similarly, the number of cases required to reach proficiency with this technique remains to be established.
In sleeve gastrectomy, the long surgical gauze method used in our study demonstrated a lower risk of complications and a faster recovery process compared to the Nathanson retractor. This method is particularly advantageous in reducing postoperative pain and minimizing pressure on the liver. However, further studies are needed to evaluate its effectiveness in other surgical procedures.
|
Aegrescit medendo: orthopedic disability in electrophysiology - call for fluoroscopy elimination—review and commentary | bee5edbb-bc74-4fd4-ab1b-3e6b46a621d9 | 9236987 | Physiology[mh] | Aegrescit medendo, the remedy is worse than a disease, was first described in book XII of the Aeneid . Fluoroscopy has been a necessary evil for the interventional electrophysiologist. The use of lead aprons to mitigate rare fatal cancers has created an epidemic of orthopedic disability. The rapid ascent and technological progress in the field of electrophysiology have resulted in increased diagnostic precision, improved procedural success rates, and improved patient survival. Electrophysiology (EP) researchers and industry must align in their efforts to harness that innovation and prioritize the health of ourselves and our staff, while maintaining safe and effective patient procedures. We provide a review of interventional cardiology radiation/fluoroscopy exposure and then a step-wise approach to completely eliminate fluoroscopy during electrophysiologic ablation (EPA) procedures and the implantation of new cardiac rhythm management (CRM) devices. Fluoroscopy is a continuous live x-ray imaging technique utilizing ionizing radiation that passes through the patient to visualize internal body structures. Following the first transvenous method to implant pacing devices in 1963, fluoroscopy had been the primary cardiac imaging tool to complete these procedures . Two categories of risk reduction are described for the emitted radiation during fluoroscopy. These include methods to decrease either detrimental stochastic effects (DSE, future cancers) or detrimental deterministic effects (DDE, immediate dose-dependent cellular damage) to the patient or lab personnel . One of these risk-mitigating strategies is that all laboratory personnel must wear heavy lead aprons. The consequential orthopedic injury risk from the continued use of lead garments is brought to light. We categorize this risk as detrimental orthopedic effects (DOE). The donning of lead aprons during these daily and long procedures has resulted in the rapidly progressing prevalence of severe musculoskeletal disorders among electrophysiologists. Because DOE has a much greater prevalence and hazard to EP physicians and staff (Fig. ), DOE is prioritized and appropriately placed alongside DSE and DDE. Major advances in arrhythmia mapping technology by both electroanatomic mapping (EAM) and intracardiac echo (ICE) have provided the ability to eliminate fluoroscopy completely in all forms of cardiac ablation . We call upon the EP communities, societies, training programs, and industry to reach freedom from dependency upon fluoroscopy by 2030. The ultimate aim is to eliminate fluoroscopic ionizing radiation use during ablation and implant procedures, eliminate all radiation risks to patients and staff, and thereby eliminate secondary occupational DOE risks of wearing the protective heavy lead aprons each day.
The benefits
To reduce procedural risks while treating cardiac arrhythmias, imaging methods have gradually evolved from direct surgical visualization to virtual reconstruction of 3D cardiac chambers and their conduction pathways. In 1968, the first open-heart surgery provided direct vision to sever an accessory pathway in a patient with Wolff-Parkinson-White syndrome. Severe congestive heart failure was a common comorbidity in patients suffering from medicine-refractory tachycardias, making them too high risk to undergo surgical treatment. This gave rise to closed-chest procedures, which in 1982 proved that wire catheters could deliver high-energy electrical shocks providing the same desired permanent disruption of a rapidly conducting atrioventricular node. Fluoroscopic visualization of the placement of these temporary catheters, alongside less traumatic radiofrequency energy, quickly propelled the EP field to confront almost all forms of cardiac arrhythmias. Concomitantly, rapid advances were being made in pacemakers and defibrillators, both of which required fluoroscopic imaging for the precise placement of their leads. Within 10 years, concerns surfaced about the effects of accumulating exposure to harmful radiation in both patients and physicians. General guidance was provided to limit fluoroscopic times to less than an hour and total procedure time to less than 5 h. Fluoroscopy use in the early years of EP procedures was accomplished either by being allowed shared time in a cardiac catheterization lab, often at the end of the day, or by having access to a procedure room with a portable C-arm. The construction of specialized EP labs evolved from single-plane fluoroscopy to biplane, and even rotational, arms. By 2014, the demand for hospitals to construct new and complex EP labs led to a formalized consensus statement with a focus on safety, especially from fluoroscopy.
The risks—detrimental effects
EP procedural and fluoroscopic times are a function of the complexity of the arrhythmia, the chamber location, the number of temporary catheters or permanent leads to be placed, the accuracy of the mapping system, the number of ablation lesions or lesion sets, and the method of ablation. Historically, the elimination of an accessory pathway with just a few discrete lesions could be completed within a relatively short total procedure time. Prior to the development of alternate imaging, fluoroscopy-guided elimination of an accessory pathway averaged 44 min. More complex arrhythmias, such as atrial fibrillation and ventricular tachycardia, which required more extensive and precise mapping with multiple lesion sets, often resulted in procedure durations of several hours. As with ablation, procedure times and the duration of ionizing radiation exposure increased with the complexity of CRM devices. Radiation exposure increased from single-lead to dual-lead devices, with the highest exposure recorded with cardiac resynchronization therapy (CRT) devices. Radiation exposure to patients during a CRT implant was 2–9 times greater than during any other device implant. Recent data from the RADAR study showed that the DDE radiation effects from atrial fibrillation ablation were comparable to those from CRT device implant procedures. Increased DNA damage was identified in circulating lymphocytes and monocytes as measured by the standardized comet assay. Radiation damage to these cells was seen after either an ablation procedure for atrial fibrillation or CRT device implantation, and the DNA damage in these cell populations took 3 months to recover. In one study, because of accumulating radiation exposure to the implanting physician's right hand, the authors not only recommended that the number of CRT implantations be limited per year, but also recommended avoiding implanting devices on the patient's right side. It was estimated that the DSE from performing ablation procedures at a frequency of about 360 cases a year would result in an added lifetime risk of a lethal cancer for 1 in 92 EP physicians. Applicable risk data from interventional cardiologists have shown that these physicians are at a significantly higher risk of developing radiation-induced cataracts and brain and neck tumors.
Metrics—procedural radiation reduction and the missing lead "apron-time"
Medical personnel, physicians, and staff present in procedures requiring fluoroscopic imaging are among the occupational groups with the highest radiation exposure. Lead aprons are worn to mitigate this risk. Strict federal and institutional guidelines have been established to limit occupational radiation exposure. Exposure is closely measured with dosimetry badges required to be worn by all physicians and staff. Extensive safety data are collected with each procedure, including fluoroscopic equipment use, ionizing radiation emission, procedural times, and radiation dose. Many EP labs have upgraded their x-ray equipment to program lower frame or pulse rates. In the most aggressive attempt to overhaul all methods to minimize patient and physician exposure, an ultralow-dose radiation protocol was adopted in one German hospital for all pacemaker and defibrillator implants. Through the combined use of reduced pulse width and rate of emission, increased thickness of copper filters, reduced detector entrance dose, and optimization of postprocessing image settings, the physicians were able to reduce the effective dose exposure by 59%. As technology advanced, finally giving rise to treatments for even the most complicated arrhythmia patients, procedure times often lengthened. Very little progress has been made in personal protective garments to shield the physician and staff. Lead aprons are worn to protect against the cumulative irradiation risks of cancer and all-cause mortality that have been documented among cardiovascular and electrophysiologic interventionalists. Lead aprons with thicknesses of 0.25 to 0.5 mm weigh about 12 to 25 pounds, respectively. The thicker, heavier lead aprons provide far greater prevention of radiation transmission. The aprons are worn under the sterile gowns during the entire procedure. CRT device implants commonly take 1.5 to 2.5 h, while ablation of atrial fibrillation or ventricular tachycardia (VT) may take 2.5 to 6 h. Hanging lead aprons from ceiling-mounted devices makes EP lab construction cumbersome while not protecting the EP lab support staff or the industry device representative. Hanging lead shields have taken on creative shapes, hinges, and armholes designed to decrease the primary operator's DSE, DDE, and DOE, while many—including lab staff and anesthesia personnel—are still subjected to wearing heavy lead aprons. No metric exists to track the cumulative occupational load-bearing burden of donning heavy lead aprons. Apron-time, or the amount of time that the lead apron is worn during an EP procedure, is primarily an all-or-none time parameter directly correlated with the length of the procedure. To our knowledge, no study within procedural electrophysiology, past or present, has collected data on lead apron-time, musculoskeletal disease (MSD), or DOE.
The prevalence of MSD among interventional cardiologists quadrupled in less than 20 years . The ability to map and treat even more complex arrhythmias has resulted in even more prolonged procedure times. Lengthy procedure times have catapulted the specialty of electrophysiology into being one of the highest-risk specialties for experiencing and suffering from MSD. Comparing cardiac and EP interventionalists to non-interventionalists, the interventionalists had a 10% higher risk of radiation-related illness and a > 50% higher risk of orthopedic injury . An astounding 49% reported an orthopedic injury involving the spine (cervical and lumbar), hip, knee, and ankle. In this Canadian survey of interventional electrophysiologists, the prevalence of lumbar spondylosis was 25.9% and of cervical spondylosis 20.7%, showing a markedly higher trend with years in the occupation . 1997 was the last year the CDC studied work-related MSDs . At that time, MSD was the leading cause of disability and the number one cause of absenteeism among all healthcare workers. The economic impact to the nation at that time was estimated at an annual loss of $13 to $20 billion. Specific to physicians, work-related MSD resulted in 9% of physicians requiring a leave of absence, practice restriction or modification, or early retirement . Although lead aprons may protect 1 in 92 EP physicians from a lethal cancer , they cause at least 1 in 3 EP physicians to suffer severe pain from an MSD, and most will likely experience at least temporary disability, if not require major surgery . These statistics could likely be extrapolated to EP lab staff. There are 4–5 EP lab support personnel in a typical ablation or implant procedure. These individuals share the cumulative orthopedic risk of occupational lead use leading to absence or early retirement . Utilizing a metric of apron-time would provide new and essential data to further elucidate the risks of DOE.
Ablation procedures have been the first to phase out fluoroscopy, while CRM device implantation procedures are the last to do so.

Qualitative imaging comparison
Table provides a comparison between imaging modalities to help identify step-specific deficiencies that industry partners might advance to make zero-fluoroscopy implants of all new devices a reality. Each modality of imaging has different advantages and disadvantages. However, the combination of ICE plus EAM provides a superior overall imaging benefit that could completely eliminate the need for fluoroscopy for initial implantation. Final positions of leads could be confirmed with a post-procedure x-ray, providing future comparisons in case of concern for dislodgement or other complications.
During EP procedures, the physician closely observes live fluoroscopic images as electrophysiologic wires are advanced from their vascular access points to specific positions within the cardiac chambers. For arrhythmia ablation procedures, the wires are placed in the cardiac chambers only temporarily, commonly moved throughout the chambers collecting wavefront activation measurements and applying either radiofrequency ablation or cryoablation lesions. Two U.S. Food and Drug Administration (FDA)-approved alternate forms of visualization, EAM and ICE, are well-accepted methods used daily. EAM has been the workhorse for mapping cardiac chambers and ablating most arrhythmias. The two most commonly used commercial EAM systems are EnSite (Abbott Medical, Abbott Park, IL) and Carto (Biosense Webster, Inc., Irvine, CA). The acquired 3-dimensional virtual structural image created by EAM is a fixed shell structure with precise site-specific color-coded voltage change recordings (Figs. and ). If a heart rhythm is stable, following the same path beat-to-beat, then the local time-dependent voltage changes can be processed to allow visualization of the electrical wavefront as it propagates across the myocardium. Real-time visualization of the catheter positions allows the safe repositioning of the wires within the borders of a fixed virtual image shell. Ultrasonography, and more specifically ICE, on the other hand, provides a live 2-dimensional image slice of cardiac borders of different densities with a probe that is easily deflected and rotated. In real time, one can visualize the cardiac chamber border limits, interatrial septal motion, valvular motion, and the pericardial space. It was not long after the introduction of EAM that the notion of zero-fluoroscopy use during ablation could be realized to minimize the long-term risk of EP procedural irradiation in a pediatric population. In 2002, Drago and colleagues used single EAM-guided catheters to map and ablate right-sided accessory pathways . With a 95% success rate and no complications, zero-fluoroscopy ablation studies gradually emerged to encompass all forms of arrhythmias in various patient populations. A recent literature review of EP ablation procedures analyzed studies over the last 17 years that aimed for near-zero to zero fluoroscopy in 20 trials to treat supraventricular tachycardia (SVT), 10 trials to treat atrial fibrillation (AF), and 4 trials to treat VT . In sum, 93% of 1,989 SVT patients had zero fluoroscopy during their procedure. More specifically, the AF trials included retrospective studies or consecutive enrollment trials to move from near-zero to zero fluoroscopy. From a safety perspective, even though these trials were non-randomized, none showed any significantly greater risk of complications. In a new multicenter prospective non-randomized trial, investigators compared 1020 SVT patients treated with zero-fluoroscopy guidance against 2040 SVT patients treated with conventional fluoroscopic imaging . No differences were found comparing procedure times, complication rates, or success rates. Similarly, in a multicenter consecutive enrollment trial for VT ablation with an assigned 1:2 ratio, 94% of procedures were achieved without fluoroscopy . Five patients out of 163 required fluoroscopy because coronary angiography was needed. 
The review by Canpolat and colleagues identified that the most common hindrance preventing the goal of reaching zero-fluoroscopy time was physician apprehension about performing the transseptal puncture portion of the procedure . Concomitant ICE-guided transseptal puncture has solidified physician confidence, allowing direct real-time visualization of the transseptal needle at the interatrial septum and of left atrial microbubbles as ablation energy facilitates safe passage of the transseptal needle into the left atrium. In a multicenter trial of 744 patients undergoing AF ablation, 100% success was achieved for transseptal puncture without fluoroscopy, with a 0.5% rate of complications (pericardial effusion or tamponade) . It was unknown whether these complications resulted from the puncture or the AF ablation. It is interesting to note that, due to concern for lead dislodgement, patients with newly implanted device leads (< 3 months) were excluded. There was no device interrogation evidence of dislodgement in any of the 46 patients with CRM devices who underwent AF ablation. Further obstacles to zero fluoroscopy in select patient groups have been identified in those who may require epicardial forms of VT ablation . The learning curve for adopting a fluoroless workflow in electrophysiology procedures has been studied previously. Analysis of historical data identified a general learning curve, with procedure times for fluoroless cryoablation of atrioventricular nodal reentrant tachycardia (AVNRT) decreasing from 1 year to the next . Kochar and colleagues demonstrated that, in a single-center experience, a zero-fluoroscopy workflow could be adopted safely for standard radiofrequency ablation procedures, including pulmonary vein isolation (PVI), supraventricular tachycardia, and premature ventricular contractions (PVC) . In their analysis, the steepest learning curve occurred over the first 40 cases of PVI, 20 cases for SVT, and 15 cases for PVCs. A similar single-center, retrospective analysis by Zei and colleagues demonstrated the safety and efficacy of a fluoroscopic reduction workflow for PVI . A significant downward trend in the mean fluoroscopic time was observed, suggesting a rapid learning curve. Experienced operators may likely have learning curves of less than 10 cases . The use of standardized simulation labs or formal training programs should also help accelerate the learning curve and ensure patient safety.
Studies employing a zero-fluoroscopy workflow for implantation of CRM devices lag behind ablation procedures in both technology and number. Several reasons may account for this shortfall. Unlike the ablation procedures, the implant procedure of CRM devices requires the permanent placement of pacemaker or defibrillator leads within one or more chambers at specific positions. It has already been shown that the two EAM systems are capable of interfacing with pacemaker or defibrillator leads to provide their position within the previously created virtual 3D cardiac image during the implant procedure . The Carto system, however, requires a custom-made connector to provide a workaround to see the device lead tip . ICE probes can already image a lead body, but in most platforms, the lead body is seen in a 2D cross-sectional slice. The CRM leads are constructed differently from EAM catheters, often by different companies, and require separate investigations to achieve FDA approval. To maintain a permanent, stable position, the distal tip of the CRM leads has either passive fixation tines or active fixation deployable screw tips. As will be shown below, and with some modification, both EAM and ICE could also be used directly to replace fluoroscopy in most cases for implantation of CRM devices. In addition, EAM can easily be repurposed to map the great vessels that lead to the heart as well as the coronary sinus (CS) vein and its branches (Figs. and ). EAM has been shown to significantly reduce fluoroscopy times with single , dual , and CRT devices . For traditional CRT devices, the most time-consuming and fluoroscopy-dependent step is the cannulation of the CS os with the guide sheath. This step also necessitates a contrast dye injection with simultaneous cinefluoroscopy to create a fluoroscopic map of the CS vein and branching vessels. Early data from 2012 showed that EAM could be used alongside fluoroscopy to reduce radiation exposure with the implantation of CRT devices . Fluoroscopy time decreased from an average of 16.8 min down to 4.2 min with EAM guidance. However, these implants still needed cinefluoroscopy to map the CS vein and its branches. An Italian multicenter trial expanded upon this technique, reducing average fluoroscopy times to 4.1 min with acceptable procedural success and complication rates . Of 125 patients, 122 had successful left ventricular (LV) lead placements. A total of 5 ventricular lead dislodgements occurred (2 left, 3 right), and one patient was determined to have an asymptomatic CS dissection without pericardial effusion. No significant differences were found for procedure times, the success of LV lead placement, or complications compared to historical controls of 250 patients. Procedure times remained the same with or without EAM, ranging from 1.5 to 2.5 h. Huang and colleagues published a workflow utilizing EAM, which resulted in an 86% reduction of ionizing radiation exposure . None of these investigations found any significant change in procedure times or complication rates. Individual labs have creative, innovative techniques to minimize fluoroscopic use. Despite the excellent reduction of radiation exposure and presumed risk reduction of DSE and DDE, physicians and all lab personnel still need to wear protective lead garments under their sterile gowns for the entirety of the procedure. The high DOE-related risk of developing an orthopedic disability among EP personnel remains unchanged. 
At probably an even higher detrimental risk, the procedural nurses and EP technicians wear lead aprons on average 20–30 min longer than the physician in all cases. They may stand slightly further away from the x-ray equipment, which decreases DDE and DSE compared to the physician. Still, the increased apron-time would be expected to result in a significantly greater DOE, causing greater MSD. The impact of more prolonged apron-time on the prevalence of MSD is currently unknown for any EP lab personnel. Rolling lead barriers/coats for the physician provide no benefit to the other EP lab workers. Very limited case reports and studies have been published that document the complete elimination of fluoroscopy during CRM implant procedures. The first implant utilizing EAM only was performed in 2005 on a patient undergoing AV node ablation with implantation of a single-chamber pacemaker with a passive fixation lead . An end-of-case single-shot fluoroscopic image confirmed proper positioning. The use of a passive fixation pacemaker lead and the final x-ray shot highlighted limitations of EAM, which cannot confirm active fixation screw deployment or lead body slack. This case study was followed by a series of 15 patients implanted with a single-lead VDD (ventricular paced, dual atrial and ventricular sensed, and dual atrial and ventricular response) pacemaker system. Again, passive fixation leads were used with a confirmatory single fluoroscopic shot prior to closure . Compared to historical controls, implantation of single-lead CRM devices with EAM imaging alone did not significantly increase procedure time . However, the procedure time for implantation of dual lead CRM devices ( n = 3), using only EAM imaging, increased by an average of 21 min. The procedure times in such small studies do not negate the large beneficial prospect that alternate imaging methods eliminate all three of the fluoroscopy risk categories described above. Procedure times would be expected to decrease both from the expected learning curve with alternate imaging tools and from some of the technological improvements discussed below. It is even possible that procedure times may become shorter than those of fluoroscopy-guided implants. If EP procedures continue to use fluoroscopy at any step, then the only way to decrease DOE is to severely reduce procedure times. Realistically, to achieve a reduction of DOE for EP physicians and staff, the elimination of fluoroscopy becomes the true objective. Thus, with the focus below on diminishing DOE for all EP personnel, each step of the implant procedure will need safe elimination of fluoroscopy. The steps proposed below are for the complete replacement of fluoroscopy to implant CRM devices.

Imaging replacement steps for CRM implants:
1. Vascular access
2. 3D venous passage to cardiac chambers and virtual structure creation
3. Implantation of atrial and ventricular leads and confirmation of slack and helix deployment
4. Cannulation of the coronary sinus vein with the lead delivery sheath
5. Coronary sinus vein branch identification and lead insertion

Visualization of the subclavian or axillary vein can be obtained with a transcutaneous ultrasound probe along the deltopectoral groove . Confirmation of blood flow directionality is accomplished with either color Doppler or simply with external pressure. With proper probe alignment, the vascular needle can be observed to enter the vessel lumen without passing through the back wall of the blood vessel or penetrating deeper structures (Fig. ). 
The endovascular location of the guidewire can also be confirmed with a long-axis image of the target vessel.
Once the first vascular access is obtained, either an EAM mapping catheter or an ICE probe can be advanced to the cardiac chambers to construct anatomy en route to the atrium and ventricle. Alternatively, the imaging access site could be obtained from a different location (femoral vessels). Images obtained would be similar to those of Figs. and . Both ICE and EAM can create accurate 3D chamber images. EAM can additionally provide local endocardial surface voltage that may prove to be necessary to implanters. Researchers could utilize this data while creating the chamber structure that might reasonably identify the best permanent landing sites (regions of highest voltage sensed and/or lowest capture threshold) to target as the insertion site for pacing leads. Such data would be relatively fast and easy to obtain. New pacing capture threshold algorithms could be incorporated at common standard landing sites of pacing leads that may identify the most suitable permanent lead position.
A review of the current literature reveals few reports of single or dual chamber device implantations using zero fluoroscopy. Guo et al. published a case series of 6 patients describing a zero-fluoroscopy approach to pacemaker implantation . All procedures were performed using the EnSite NavX (St. Jude Medical, MN, USA), which utilizes 3 orthogonal pairs of electrode patches to geometrically create the right atrium and ventricle. The ventricular lead was placed by first obtaining venous access and then introducing the lead into the venous system. An alligator cable was used to connect the lead to the EnSite NavX system, making appropriate adjustments for the lead's interelectrode distance. Then, using the lead much like a mapping catheter, the geometry of the superior vena cava, right atrium, and right ventricle was created by moving the pacing lead along the endocardial surface of each respective chamber (Fig. ). Once the ventricular lead was in a suitable position at the right ventricular (RV) apex or RV septum, it was advanced 3–5 cm to account for the required slack. The lead was deployed in the usual fashion. A second set of alligator cables connected the lead to an analyzer to ensure adequate pacing, sensing, and impedance measurements. The leads were then advanced, withdrawn, and rotated to ensure stable measurements. Furthermore, the patient was asked to breathe deeply and cough prior to a final set of measurements. For patients requiring a dual chamber pacemaker, other techniques were employed to ensure proper right atrial appendage lead placement. First, right femoral venous access was obtained, and a steerable 10-pole catheter was used to make a 3D anatomic model of the right atrium and right atrial appendage. The atrial lead was then advanced from the subclavian vein access site with the lead connected to the EnSite NavX mapping system to visualize the lead tip (Fig. ). The lead was manipulated into the right atrial appendage and deployed once in a suitable position . The above-mentioned RV lead techniques were performed to ensure proper slack, lead parameters, and lead stability. Passive or active fixation leads could be imaged similarly to determine real-time position within the cardiac chambers. Since EAM currently has no method of determining lead body slack, ICE imaging might be a preferred imaging tool. However, importance could be placed on developing visible lead bodies, just as EAM systems have developed visible sheaths. For example, the VIZIGO® sheath can be seen on the Carto mapping system once it enters the matrix of collected points. Active fixation screw tip deployment will likely require additional confirmation methods to be developed. In the meantime, a final fluoroscopic image could be utilized to ensure proper slack and helix deployment, with staff appropriately shielded behind protective barriers but immediately available to attend to patient needs.
Placement of the left ventricular lead is typically the most time-consuming step when implanting a CRT device. As such, fluoroscopic time and radiation exposure can be excessive. Cannulation of the lead delivery sheath into the os of the CS vein using 2D x-ray is based upon positioning a sheath anatomically relative to other fluoroscopically identifiable structures such as the annular “fat stripe.” Then while under continuous fluoroscopy imaging, the posteroseptal atrial septum is probed with soft, flexible wires or catheters. Once the sheath is advanced into the CS vein, it is common practice to inject contrast dye for vein patency confirmation and to define potential target venous branches along the posterolateral left ventricular wall. This step is easily replaced with direct visualization of the CS by ICE imaging and direct insertion of EP catheters (Fig. ). Once the wire or catheter has been advanced into the CS vein, the lead delivery sheath or sub-selector sheath can be placed over the wire and into the CS vein.
The first study to implant CRT devices without fluoroscopy utilized EnSite NavX EAM in 26 patients . An additional femoral vein access for an EAM mapping catheter allowed passage to the cardiac chambers to construct a 3-dimensional image of the right atrial (RA) appendage and CS. Three sheaths were placed into the left subclavian vein (utilizing ultrasound for access). An active fixation RA lead was placed into the RA appendage. A bipolar pacemaker or defibrillation right ventricular (RV) lead was inserted at the RV apex. An electrophysiologic catheter through a lead delivery sheath was advanced into CS os, mapping along the CS vein and openings of vein branches. This catheter was replaced with the left ventricular (LV) lead and advanced within one of the vein branches. It had been previously shown that a soft VisionWire (Biotronik) used as a guidewire protruding through the lead lumen could safely map CS vein branches anatomically and test the underlying substrate for acceptable pacing sites (Fig. ) . The distal tip of the LV wire is virtually imaged by an alligator clip connection from the lead pin to the EAM system. Successful deployment in 24 of 26 LV leads was achieved . The left subclavian vein was obstructed in one patient. A successful device implant from the right subclavian access was also noted with this method. In the second patient, fluoroscopy identified a Thebesian valve obstruction. Once a guidewire was passed beyond the obstruction, the LV lead was placed without further fluoroscopy. Note that with this method of implant, proper lead body slack is unable to be determined.
Several deficiencies could easily be overcome with existing technology. Most obvious is the need to image the lead body and active lead screw deployment. At least two main avenues of approach will help achieve the fluoroless goal. Innovation of the leads and/or the imaging equipment itself may be required. Placing sensors along the distal 5–10 cm of the pacing lead body will allow adequate 3D virtual imaging, providing the operator knowledge of sufficient lead slack and proper active fixation screw protrusion. During the EAM of cardiac chambers, voltage and rapid pacing algorithms could predetermine the best landing sites for permanent pacing, saving time and avoiding possible complications. New electrical testing algorithms could be devised to confirm proper screw deployment. New coronary sinus vein branch mapping tools might include small-gauge, sensor-laden, flexible and deflectable fibers that allow adequate mapping, lead guidance, and capture threshold testing. ICE probes could be designed to allow 360° rotation as opposed to deflection, which could otherwise result in dislodgement of a lead just implanted. During the probe rotation, a virtual image of the lead could be accurately placed in the virtual 3D chamber. A 4D ICE probe would allow real-time imaging of all leads within the cardiac chambers. To our knowledge, there is no current research and development in placing sensors on pacing or defibrillator leads to allow for EAM and delivery without the use of fluoroscopy. We do, however, know from the MediGuide™ technology (St. Jude Medical Inc., St. Paul, MN) that sensors mounted on intracardiac devices (electrophysiology catheters) can be visualized in real time, thus minimizing fluoroscopy exposure . If this concept is applied in the development of sensors placed at the tip of pacing and defibrillator leads, we could potentially implant devices without fluoroscopy. More research would be needed to understand how the leads would handle and perform with the addition of mapping sensors placed at their tip. Most certainly, any new technology placed within implanted leads will require clinical testing for FDA approval. Time and financial costs will present barriers. Even well-intentioned decisions to preserve a mechanically balanced method from a previously FDA-approved lead design could backfire. The defibrillator lead recall of 2011 involved a silicone insulation breach with externalization of the conductor cables that could cause electrical failure ( https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfres/res.cfm?id=105847 ). In 10 of 20 new Riata models, inert or electrically inactive filler cables remained in the lead body design to maintain mechanical stability. Unfortunately, the stabilizing filler cable could break its distal attachment and extrude from the insulation breach as well . An extruded filler cable escaped detection by standard device surveillance and added to the confusion of how best to manage a costly recall. Concerning the recently developed leadless pacing devices, fluoroscopy is still a specific requirement for placement. In the case of the currently available Medtronic device, fluoroscopy is used to ensure proper placement by confirming the attachment of the device tines. Fluoroscopy is also used to ensure proper deployment of the yet-to-be commercially released devices from Abbott and Boston Scientific. Using intracardiac ultrasound as an alternative imaging strategy for placement is not currently an option. 
However, these device companies can and should develop technology within the leadless implant that would allow for detection either under ultrasound or through electromagnetic or impedance field 3D mapping. Such technology could be deployed to leadless devices along with other possible engineering designs that will allow fluoroless imaging of these devices. It may be best to advise the FDA, industry, and academic medical institutions to steer their research from the very start toward approaches that can be accomplished without imaging by ionizing radiation.
A genuine concern about utilizing fluoroscopy-less technologies such as ICE and 3D mapping is the cost. Cost is a true upfront limitation to widespread adoption of fluoroscopy-free device implantation. Current financial reimbursement to physicians and hospitals is being decreased, forcing decisions that favor cost-cutting, even if at the expense of longer case times, worse efficiency, and overall patient and provider safety. Although different hospital systems can negotiate a range of prices for these technologies, new ICE catheters are approximately $1000, with some discount if using reprocessed equipment. 3D mapping patches also cost roughly $500–1000 per patient. Navigation-enabled catheters cost approximately $1000, creating an overall cost per case of $2,500–$3,000. In traditional lab systems, these per-case costs are in addition to the fluoroscopy systems that are in place. Single and biplane fluoroscopy systems cost upward of 1.5–2 million dollars and $5,000–10,000 monthly for routine upkeep and preventive maintenance. However, if both ablation and implantation of CRM devices could be accomplished in future EP labs that no longer require an upfront fluoroscopy system cost, then the individual case costs would be offset. In addition, interest is increasing exponentially in the cost analysis benefits of both minimal fluoroscopy and zero-fluoroscopy techniques. Unfortunately, in the setting of device implants, some of the detrimental consequences are a zero-sum game. We have previously described the DSE, DDE, and DOE that ionizing radiation and lead aprons can accumulate over the years in the EP lab. Even if fluoroscopy is decreased to a few minutes per case, the physician and staff are required to wear lead aprons for the entirety of the case without any mitigation of the DOE risks. In addition, the authors of the multicenter, randomized controlled NO-PARTY trial, which compared fluoroscopy-guided with minimal fluoroscopy-guided EP study in SVT ablation using the EnSite NavX system, concluded that the additional cost of incorporating near-zero fluoroscopy was offset by the reduction in cancer risk afforded by this technique . We should not throw the baby out with the bathwater. The pursuit toward zero-fluoroscopy device implantation should not be discounted due to the current cost of lab setups and current technology. If fluoroscopy had not been the first imaging method applied to EP procedures, and EAM and ICE had been the initial imaging tools, then fluoroscopy may have never progressed beyond a very limited C-arm use. In fact, with continued improvements in image and catheter refinement and early adoption in academic training programs, costly fluoroscopic imaging in future EP labs may no longer be required and would certainly not be the gold standard.
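To make the offset argument concrete, the following is a minimal back-of-the-envelope sketch in R (the statistical language used elsewhere in this document) that relies only on the per-case and equipment figures quoted above; the 10-year equipment horizon and the use of range midpoints are assumptions for illustration, not figures from the text.

# Rough illustration of how forgoing a fluoroscopy system could offset
# the added per-case disposable costs of ICE plus 3D mapping.
# Assumed (not from the text): 10-year horizon, midpoints of quoted ranges.
per_case_disposables <- 2750            # ICE catheter + patches + navigation catheter ($)
fluoro_capital       <- 1.75e6          # single/biplane system purchase ($)
fluoro_maintenance   <- 7500 * 12 * 10  # $5,000-10,000 per month over 10 years ($)

avoided_cost <- fluoro_capital + fluoro_maintenance
cases_offset <- avoided_cost / per_case_disposables
round(cases_offset)                     # roughly 960 cases' disposables covered

Under these assumed inputs, forgoing the fluoroscopy system would cover the added disposable costs of on the order of a thousand fluoroless cases, which is the sense in which the individual case costs could be "offset" in a lab built without fluoroscopy.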
The exceedingly high prevalence of orthopedic disability among EP physicians and their lab personnel should be greatly feared. There is an additional concern beyond the scope of retirement in pain. DOE is a rapidly growing deterrent to attracting the brightest physicians to, arguably, the most biophysically complex healthcare field. The era to usher in fluoroless EP labs is at hand. The two categories of EP procedures that have relied upon fluoroscopic imaging include ablation of arrhythmias and implantation of CRM devices. A review has been presented of the technology and the multiple studies that have proven the safety, effectiveness, efficiency, and short learning curves with which fluoroscopy use can be eliminated in ablation procedures. Much of the same technology can be easily pivoted to evolve implant procedures of CRM devices. Shortcomings of fluoroless ablation procedures would include epicardial ablation or alcohol injection of the vein of Marshall, where fluoroscopy use may still be required but in a far more limited role. Rescue fluoroscopy, at least by C-arm, should remain available for EP labs to assess for potential complications. We have identified methods to eliminate fluoroscopy use at each specific step of the implant procedure of CRM devices. Most of these methods already use FDA-approved equipment that requires additional clinical studies to provide further safety measures, time-saving steps, and shorter learning curves. We call for new studies, new apron-time metrics, new simulation labs, new learning curve assessments, and new priorities in fellowship training programs. We have identified that the DOE risks are of such a high magnitude that this categorization of risk needs to be placed alongside, and maybe in front of, the DDE and DSE risks of fluoroscopy. Cost barriers for new lead design, in addition to the risks of recalls, may be too high to overcome and thus might divert innovation toward new imaging refinements and modalities that can track lead advancement, slack, and helix deployment. The "cure" whereby lead aprons can prevent the development of cancer in 1 in 92 physicians is overtaken by the fact that 1 in 3 EP physicians will develop a temporary or permanent orthopedic disability. Fluoroscopy is an obsolete, harmful tool that creates a 2D gray shadow world with zero electrophysiologic information, when what is required is a 3D colored electrophysiologic world. It is quite feasible that the first company to provide efficient, safe, and accurate visualizing methods to implant CRM devices will have a significant market advantage. That EP physicians, in a stooped posture, are now begging for methods and training to eliminate fluoroscopy is not an unexpected paradigm shift. Everything old is new again. Aegrescit medendo is replaced appropriately by cura te ipsum . Physician, heal thyself!
Below is the link to the electronic supplementary material. Supplementary file 1 (MP4 21377 KB)
Unhelpful Information About Low Back and Neck Pain on Physiotherapist's Websites

Introduction
As the internet serves as an increasingly important source of information for patients with musculoskeletal pain conditions (MSK), the demand for reliable online resources continues to grow (Fritsch et al. ). A significant proportion of patients seek online information as their primary resource before consulting with healthcare professionals (Gualtieri ). However, the quality of such information varies significantly and frequently does not align with best-practice recommendations for MSK (Hauber et al. ; Peterson, Rainey, and Weible ). It has been observed that websites aimed at public consumption provide inaccurate, often biomedical-focused information about low back pain (LBP) and neck pain (NP), which fails to adequately address patients' informational needs (Black, Sullivan, and Mani ; Costa et al. ; Ferreira et al. ; Neelapala, Raja, and Bhandary ; Suhail and Quais ). This online information diverges from high-quality best practice recommendations and guidelines for LBP and NP, which advocate for a biopsychosocial framework that acknowledges the complex interplay of biological, psychological, and social factors in the onset and persistence of pain (Maher, Underwood, and Buchbinder ; Wilson et al. ). While previous studies have analysed small samples of websites, the representativeness of these samples relative to the available websites remains uncertain (Basnet et al. ; Ferreira et al. ; Neelapala, Raja, and Bhandary ; Peterson, Rainey, and Weible ; Santos et al. ; Suhail and Quais ). To the best of our knowledge, no research has specifically analysed websites from healthcare professionals' practices. The prevalence of incomplete, incorrect, and fear-inducing information disseminated by healthcare professionals on their websites remains unknown. Ensuring that the information provided by healthcare professionals aligns with high-quality best-practice recommendations and guidelines for LBP and NP is critical. Incomplete, incorrect, and fear-inducing information can delay recovery or even cause harm, leading to distress, incorrect self-diagnosis, and inappropriate treatment, as indicated by various studies (Almeida et al. ; Dutta-Bergman ).

In the Netherlands, the majority of private physiotherapy practices in primary care offer online information about MSK to inform patients about the causes, symptoms, and treatment options. However, there has been no comprehensive analysis of the accuracy and consistency of this online information with the current guidelines. Therefore, the primary aim of this study was to evaluate the quality of information regarding LBP and NP available on the websites of Dutch private physiotherapy practices in primary care. To achieve this, three research aims were formulated: (a) to analyse the biopsychosocial content of the information provided about LBP and NP; (b) to analyse the consistency of the treatment recommendations with the Dutch LBP and NP guidelines, and (c) to explore fear-inducing language about the causes, advice, and consequences of LBP and NP. This study aims to highlight the gaps and inconsistencies in the online information provided by private physiotherapy practices. This evaluation is critical to improving the quality of online health information, ensuring it is more closely aligned with best practice guidelines.
Methods

2.1 Study Design
A population-wide, website-based content analysis was conducted between September 2020 and February 2021. This study followed a four-step process (O'Connor and Joffe ). First, the specific elements of the content analyses were defined in a coding frame. Second, the coders underwent training, and inter-coder reliability (ICR) was assessed. Third, a list of all websites of Dutch private physiotherapy practices in primary care was obtained and screened for eligibility. Fourth, all eligible websites were analysed using the coding frame.

2.1.1 Defining the Coding Frame
Based on a study that evaluated popular websites, a modified list of specific characteristics of interest in website content was developed (Black, Sullivan, and Mani ). Specific additions focused on adherence to guidelines and explanations of (treatment) advice, general advice, stated causes, consequences, and whether fear-inducing language is used. The feasibility of the list was tested, and the coding frame was revised by the first two authors and 16 coders using five randomly retrieved physiotherapy practice websites. The complete coding frame is presented in Appendix .
2.1.2 Coder Selection, Training, and Inter-Coder Reliability
Coders were recruited from students in the final year of a four-year Bachelor of Physiotherapy program of the Hanzehogeschool Groningen, University of Applied Sciences, the Netherlands. All the coders participated voluntarily. During their physiotherapy curriculum, all coders were educated on the Dutch guidelines for LBP and NP and the concept of reassuring or fear-inducing language. To enhance this education, the coders received additional refresher training before participating in the study. The training was provided by the second author (RRR), and the coders received three 1-h training sessions. The coders were randomly allocated to two groups using simple randomization (with a random number generator in Microsoft Excel). For the training, both groups were first familiarised with the coding frame and, in a group meeting, received a refresher course about the guidelines and the concept of reassuring or fear-inducing language based upon the recommendations in the LBP guideline. Next, each coder practiced using the coding frame on seven randomly retrieved websites, allocated equally to all coders using simple randomization (with a random number generator in Microsoft Excel). A week later, an online session was organised in which the group discussed their findings and discrepancies and refined how to use the coding frame. After that, again seven websites were used for individual practice, and another group discussion was organised a week later. To gain insights into the coding frame and increase transparency, nine websites were selected (based on heterogeneous results on the coding frame) to be coded by all raters. The codes were used to calculate the ICR. If the median of Gwet's agreement coefficient (Gwet's AC) was lower than 0.6, all websites were to be analysed independently by two coders. Single items with a value lower than 0.6 were given extra attention during training before data inclusion could start. Subsequently, all findings were discussed within the group to solve any discrepancies and to function as additional training.
2.1.3 Eligibility Screening
After training, all coders participated in the screening of websites. A list of all websites was retrieved using the free-to-access, online database Zorgkaart Nederland (Patiëntenfederatie Nederland ). The list was divided by the twelve Dutch provinces, and each province was allocated to a group of coders. All the websites were screened for information availability. Websites were included if they contained three or more paragraphs on LBP or NP. Websites that contained general information from a single source (e.g., a freely accessible online physiotherapy encyclopedia) were excluded to avoid duplicates within the content analysis.
2.1.4 Content Analysis
The list of eligible websites was allocated equally to all coders. The coding frame was converted to an online form using Survey Monkey (SurveyMonkey, 2021), in which all items were addressed per website. Each coder noted any points of discussion or ambiguity regarding the processing of information from the website into the coding frame, and these were addressed together with the first two authors (RvdN and RRR). Consensus on the coders' discussion items was reached through discussion. In addition, all items, including the consensus reached, were discussed with the entire coder group on a two-weekly basis to further refine the coding process.
2.2 Data Analyses
The analyses were performed using IBM SPSS Statistics 28.0 (IBM Corporation, 2021). Descriptive statistics were used for presenting various study variables, such as explanations of LBP and NP, treatment recommendations, and the identification of fear-inducing language in the website content. The ICR was analysed on a per-item basis using 9 websites and 32 coders. Percent agreement and chance-corrected agreement (Gwet's agreement coefficient, Gwet's AC) were calculated (Feng ; Gwet ). For the interpretation of Gwet's AC, > 0.8: very good, 0.6–0.8: good, 0.4–0.6: moderate, 0.2–0.4: fair, and 0–0.2: poor were used. ICR calculations were conducted using R version 4.3.2, using the irrCAC package. For the analysis of each website's content, the Biopsychosocial Information Categorization Checklist (BPSIC Checklist; BPSICC) was used (Black, Sullivan, and Mani ). The scoring criteria for the BPSIC Checklist were incorporated into the coding frame to ensure uniform analysis of the website data. This checklist incorporates various 'core principles' for both biomedical and psychosocial criteria. We modified the terms 'limited psychosocial' and 'reasonable psychosocial' to limited and reasonable biopsychosocial to align better with the biopsychosocial model. The BPSICC consists of three categories: (1) biomedical: only biomedical and no psychosocial descriptions or examples were provided; (2) limited biopsychosocial: in addition to biomedical, 1–2 psychosocial and/or 1–2 psychological and/or 1–2 social descriptions or examples were provided; (3) reasonable biopsychosocial: in addition to biomedical, 3 or more psychosocial, psychological, or social descriptions and/or examples were provided. For the recommended intervention or treatment and general advice, the website content was compared with relevant information from the current Dutch guidelines for LBP and NP (Koninklijk Nederlands Genootschap voor Fysiotherapie , ). The results were presented as either consistent with, not consistent with, or not mentioned in the guidelines. Consistent meant that the website recommendation aligned with the guidelines. 'Not consistent' meant that the website information contradicted the guidelines (i.e., was discouraged). 'Not mentioned' meant that it was not addressed in the guidelines. Each intervention, treatment, and piece of advice was summed to determine the total number and percentage in relation to the overall count. Where possible, a category for fear-inducing language was created with similar terms, and the total number and percentage in relation to the overall count were calculated. Coders were instructed to identify negative suggestions through graphics or textual depictions portraying a fearful representation of spine-related complaints. They were asked to articulate the features of the various graphics, using statements such as 'vertebrae colored red', 'painful facial expression', or 'vertebral collapse marked in red'. Each of these features was summed to calculate the total score.
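As a concrete illustration of the agreement statistics described above, the following is a minimal sketch in R (the language named for the ICR calculations) that computes percent agreement and Gwet's AC1 from a websites-by-coders matrix of categorical codes, together with the interpretation bands quoted in the text. It is a simplified stand-in written from the published AC1 formula, not the authors' code; in practice the irrCAC package provides these coefficients directly.

# Percent agreement and Gwet's AC1 for one coding-frame item.
# 'ratings' is a subjects-by-raters matrix (here: websites by coders)
# of categorical codes; NA marks a missing rating.
gwet_ac1 <- function(ratings) {
  cats <- sort(unique(na.omit(as.vector(ratings))))
  q <- length(cats)
  if (q < 2) stop("Need at least two observed categories to compute AC1")
  # r_ik: number of raters assigning subject i to category k
  r_ik <- t(apply(ratings, 1, function(row)
    sapply(cats, function(k) sum(row == k, na.rm = TRUE))))
  r_i <- rowSums(r_ik)
  keep <- r_i >= 2                      # keep subjects rated by at least two coders
  r_ik <- r_ik[keep, , drop = FALSE]
  r_i  <- r_i[keep]
  # Observed agreement: probability that two raters of the same subject agree
  pa <- mean(rowSums(r_ik * (r_ik - 1)) / (r_i * (r_i - 1)))
  # Chance agreement under Gwet's AC1
  pi_k <- colMeans(r_ik / r_i)
  pe <- sum(pi_k * (1 - pi_k)) / (q - 1)
  c(percent_agreement = pa, gwet_AC1 = (pa - pe) / (1 - pe))
}

# Interpretation bands used in the text
interpret_ac <- function(ac) {
  cut(ac, breaks = c(0, 0.2, 0.4, 0.6, 0.8, 1),
      labels = c("poor", "fair", "moderate", "good", "very good"),
      include.lowest = TRUE)
}

# Hypothetical example: 9 websites coded by 32 coders on one dichotomous item
set.seed(1)
demo <- matrix(sample(c("yes", "no"), 9 * 32, replace = TRUE, prob = c(0.85, 0.15)),
               nrow = 9, ncol = 32)
res <- gwet_ac1(demo)
res
interpret_ac(res["gwet_AC1"])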
Results
A total of 8607 private physiotherapy practices were identified. Following the removal of duplicates and entries lacking information, n = 834 (10%) websites remained. After content analysis, n = 449 (54%) websites revealed the presence of information about LBP, while n = 295 (35%) contained information about NP, and n = 90 (11%) websites included information about both LBP and NP. The ICR between coders ranged from 67% to 100% agreement, with a median of 83% (IQR 77%–94%). Gwet's AC ranged from 0.40 to 1.00, with a median (IQR) of 0.68 (0.63–0.89).

3.1 Fear-Inducing Language
For advice or explanations about causes and consequences, fear-inducing language that was not consistent with the guidelines was used n = 1624 (69%) times. An analysis of the textual quotations revealed the presence of six distinct categories of fear-inducing advice. The categories are presented in Table , with examples for each category. Contextual quotations are included to provide further insight. In n = 133 (16%) of all the websites, various graphics that present a fearful representation of spine-related complaints were shown 220 times. Table presents a description of the top five images exhibiting a fearful representation of complaints related to the spine.
3.2 Biopsychosocial Content Analysis
Following the analysis of the websites with the BPSICC, the majority of websites were categorised as biomedical, with LBP n = 287 (64%), NP n = 174 (59%), and those with both LBP and NP information n = 60 (67%) representing the highest prevalence (Figure ). A limited number of websites were found to be reasonably biopsychosocial in their content (LBP n = 38, 8%; NP n = 19, 6%; combined n = 2, 2%). The BPSICC total scores are presented in Table . Table displays the scores of the core biomedical and biopsychosocial principles from the BPSIC Checklist. The data indicated that pathoanatomical and biomechanical influences, along with the perception of LBP as a universal or normal phenomenon, are the most commonly cited. Conversely, influences relating to thoughts, emotions, and behaviours associated with the course of LBP, as well as the individual's unique biopsychosocial perception towards LBP, were mentioned least frequently.
3.3 Factors Associated with LBP and NP
A total of 950 factors related to pain, behaviour, and psychosocial factors were presented on 484 (58%) websites. Of these, 306 (33%) factors were not mentioned in the guidelines. Of the individual factors, (general) stress was the factor most frequently associated with LBP and NP ( n = 301, 32%). In the social domain, work-related factors were the most frequently mentioned ( n = 151, 16%). These findings are consistent with the guidelines. All factors are presented in Table .
3.4 Recommended Interventions
Possible interventions or treatments were advised 1855 times on n = 560 (67%) websites (Figure ). A total of 58 different interventions were proposed for LBP, mentioned a total of n = 943 (51%) times. Of these interventions, n = 23 (40%) were consistent with the guideline ( n = 700, 38%). Nevertheless, n = 32 (55%) interventions were not mentioned in the guideline. For NP, n = 52 different interventions were suggested, and n = 32 (61%) were not mentioned in the guideline. Interventions that were not consistent included dry needling ( n = 73, 10%) and high-power laser therapy ( n = 1). An overview of the interventions is presented in Table , and other interventions that are not listed can be found in Table .
3.5 Advice, Causes, and Consequences
Most websites ( n = 542, 65%) offered general advice or explanations about the causes and consequences, a total of 2358 times. The overwhelming majority of these websites ( n = 427, 79%) provided advice inconsistent with the guidelines (Figure ). A total of 509 (22%) recommendations were consistent with the guidelines, on n = 115 (21%) websites. Examples of the consistent advice include the following: 'Stay active,' 'Bed rest is not useful or applied in a maximum of two days,' and 'Stay in contact with your colleagues.' Furthermore, illustrative examples of reassuring, consistent advice include the following: 'Keep moving as best as you can and be not afraid that you will do damage,' and 'Try to move gently despite your pain.' In addition, conflicting advice has been found, including the suggestion that individuals should 'stay active but be careful with bending and lifting or avoid painful movements.' References were made to different types of diagnostic imaging on n = 110 (13%) websites. Diagnostic imaging was discouraged 94 times, in accordance with the guidelines. Diagnostic imaging was recommended 68 times, which is not consistent with the guidelines. Additionally, visual material was utilised to support the general advice on n = 355 (43%) websites.
Discussion
This study presents a national population-based content analysis of Dutch private physiotherapy practice websites. The majority of websites display a biomedical orientation and are not in line with the applicable LBP and NP guidelines. Fear-inducing language was observed on the majority of websites. One may argue that this study 'only' studied websites; however, we argue that the results of this study may be indicative of larger issues affecting patients, the general population, the physiotherapy profession, and the healthcare system. Dissemination of inaccurate information may negatively influence patients' knowledge, beliefs, attitudes, and expectations towards LBP and NP, which may in turn negatively influence outcomes. The information provided may have implications for their health and well-being, potentially causing unnecessary anxiety and incorrect health-related decisions (Kragting et al. ). The influence of a physiotherapist extends beyond the individual patient. It can affect broader populations, particularly if the information is disseminated through their own websites, since people who trust online health information are more likely to use the internet for health-related activities, such as searching for healthcare providers, and may thereby visit these private physiotherapy practice websites (Hou and Shim ). Incorrect information may therefore lead to misconceptions within a larger population. For this reason, it is crucial for physiotherapists to consistently pursue accuracy and carefulness in the information they provide, in order to enhance patient health, maintain their credibility, and preserve trust in healthcare in general. Our results show that there is a clear need for greater awareness among the professional community regarding the content of the information on LBP and NP presented on websites and the use of fear-inducing language. Although this study may be indicative of a deeply rooted biomedical perspective on LBP and NP among physiotherapists in the Netherlands, it remains uncertain to what extent the information provided on the websites is translated into, or indicative of, clinical practice. It is important to recognise that this information is provided by authorised healthcare professionals, as healthcare professionals are considered to be the most reliable source of information (Farnood, Johnston, and Mair ). Online information, as also demonstrated in other research, is predominantly biomedical in nature for both LBP and NP (Black, Sullivan, and Mani ; Kondo et al. ; Neelapala, Raja, and Bhandary ). Our findings are in accordance with prior research on government and hospital websites, where most LBP advice was also found to be inaccurate (Ferreira et al. ). As with other MSK conditions such as patellofemoral pain, knee crepitus, complex regional pain syndrome, or osteoarthritis, patients should approach the advice and information provided with caution because the credibility, accuracy, and comprehensiveness of such information are low (Goff et al. ; Moore et al. ; de Oliveira Silva et al. ; Serighelli et al. ). Previous research examined the alignment between the content of websites and the online information needs of patients with LBP (Nielsen, Jull, and Hodges ). Unfortunately, this aspect was not considered in our study. The most common treatment advice (exercise therapy) aligns with the high-quality best practice recommendations and guidelines for LBP and NP. 
This also meets the expectations of people, who regard exercise as the primary treatment for LBP. However, they also advise caution when performing the exercises (O'Keeffe et al. ). These perceptions appear to correspond with the fear-inducing language used in the advice commonly found on the websites. Furthermore, physiotherapists using fearful language seem to worsen clinical outcomes (Darlow et al. ). Although the Dutch guidelines advise providing reassuring information, the English translation of the Dutch guideline specifically states to avoid language that encourages fear of pain and catastrophic thinking. How language is used is of significant consequence. Previous research has demonstrated that the explicit labelling of back pain as a pathological condition is associated with the perception of pain, tissue damage, and a poor prognosis among patients (O'Keeffe et al. ). Furthermore, the practice of pathological labelling can result in a higher need for imaging, a willingness to undergo surgery, and a greater inclination to seek a second opinion, which in turn can lead to higher healthcare costs. This can be prevented by careful selection of appropriate terminology (O'Keeffe et al. ). It is likely that similar dysfunctional perceptions and fear-inducing, catastrophising thoughts will also result from the use of pathological labelling on websites, leading to prolonged symptoms, higher disability, and higher healthcare costs. However, information that may be fear-inducing for one individual could be reassuring for another. The strength of this study was the use of a national population-based approach, which has ensured that the study provides a comprehensive and representative overview of what is described on the websites of Dutch private physiotherapy practices in primary care, allowing for more precise estimates and reducing the error in the interpretation of the results. The high inter-coder reliability achieved in this study demonstrates the consistency and accuracy of the data extraction process, ensuring that the findings are based on reliable data. The extensive training provided to the coders also underscores the rigour of the methodological approach. Additionally, Gwet's AC1 and AC2 were chosen due to the high prevalence of equal choices, which could lead to a paradoxically low coefficient when using other statistical procedures. A limitation of the study is that the data were extracted from the websites in 2020 and evaluated against the standards of the Dutch LBP guideline published in 2021 and the NP guideline published in 2016. We have randomly re-examined 20 websites in 2024. These websites showed no noteworthy differences; however, our sample of 20 represented only 2.4% of all websites in this study, so generalisations may need to be approached with some caution. Another limitation is that data on the reach (impact) of the websites were not inventoried. It is possible that larger practices may have greater financial resources available to them, which would enable them to optimise texts for search engine algorithms. This would allow them to reach a larger patient population. It is unknown whether these websites reach larger numbers of patients and whether these websites are more or less consistent with the biopsychosocial model and the guidelines. Another limitation is that the LBP and NP guidelines do not fully align with regard to the advice that should be provided or avoided. For instance, the NP guideline does not clearly indicate that imaging and fear-inducing language should be avoided. 
The NP guideline, published in 2016, may be more outdated, which could explain these discrepancies in recommendations. However, this is still the endorsed guideline used today. In the absence of standardised criteria or validated methods, we used the Dutch guideline for LBP as a basis for identifying fear-inducing language. This is also a limitation of the study. Furthermore, it is possible to score as Limited or Reasonable Biopsychosocial if psychosocial factors are sufficiently addressed, although the advice may still be inconsistent with the guideline and/or contain fear-inducing language. Categorization based on the frequency of psychosocial content can be debated, and the precise description and interpretation of this content are essential. Currently, we do not know whether biomedical websites use more fear-inducing language than websites scoring as Limited or Reasonable Biopsychosocial. Future research could address this relevant question. Overall, there seems to be an urgent need for effective implementation strategies to transform physiotherapy websites into a helpful source of information. Our research findings should encourage physiotherapists to thoroughly evaluate and, if needed, rewrite the information on their websites concerning LBP and NP to reflect the latest guidelines and current scientific knowledge. Furthermore, there is a lack of standardised criteria or validated methods for identifying reassuring or fear-inducing language. Future research could focus on developing criteria to enable the reproducible identification of such language, underscoring the critical need to investigate the provision of reassuring information for both LBP and NP. For the purpose of implementation, we propose that the guidelines explicitly include a section on information, advice, education, and language use towards patients, which can be used on private (physiotherapy) websites or linked to professional organisations' websites. For example, within the guidelines, it can be specified what constitutes 'fear-provoking' information and which misinformation should be avoided. Additionally, a section could be added on public information dissemination, particularly on offering online information.
Conclusions
This study shows that most Dutch private physiotherapy practice websites are not a reliable source of information for patients with LBP and NP. The Dutch physiotherapy community needs to take action to comprehensively review and update the information on their websites so that it aligns with high‐quality best practice recommendations and guidelines for LBP and NP. It is important to strive for better patient information in order to reduce fear, support patients in making better recovery choices, achieve less disability, and improve their quality of life.
The original idea for the study came from R.R.R.; R.N. and R.R.R. designed the study. R.R.R. and the students collected the data. R.N. analysed the data and discussed the findings in detail with all the authors. R.N. had a primary role in preparing the manuscript, which was edited by D.P., H.R.S.P., M.F.R., and R.R.R. All authors critically reviewed several drafts of the manuscript and approved the final version.
The authors declare no conflicts of interest.
A Survey on forensic odontologists' activity in Italy during the COVID-19 pandemic

Introduction
The restrictions imposed during the COVID-19 pandemic caused huge disruptions to daily life and working behaviour for people all over the world. This was especially true for dental professionals, given the high risk of infection and virus dissemination implied by typical dental operations, which involve close, continuous, and prolonged contact and produce aerosols. At the beginning of the pandemic, when little was known about how the virus spread, dental offices and clinics remained open to the public for urgent procedures and operations that could not be postponed, in accordance with the recommendations of the Italian Medical and Dental National Board (FNOMCeO). After the first lockdown, more precise operational guidelines were released by the Italian Ministry of Health and by some dental scientific societies, and were subsequently updated according to the continuous flow of evidence about the virus's behaviour and characteristics. Triage procedures, the use of proper PPE (personal protective equipment) by dental staff, preliminary mouth rinsing by patients, and detailed measures for the disinfection and ventilation of the different areas of the dental office (operative, reception, etc.) were quickly implemented and became compulsory for public and private dental clinics and professionals. No specific guidelines or procedures were released for forensic odontology or medico-legal activity, with the exception of some recommendations for autopsies provided by pathologists, which proved useful for forensic odontologists dealing with dead bodies. According to the Italian judicial system, however, some medico-legal activities requested by the magistrates' Courts, related to civil litigation and the evaluation of permanent impairment, raised several issues during the pandemic. The procedure usually requires the expert to go to Court to be officially charged with the case in the presence of the magistrate and lawyers; the visit of the claimant is then performed in a session open to the other parties, lawyers, and experts. A thorough medico-legal examination of the stomatognathic system takes time and is performed directly on the patient, who cannot wear a facial mask. These operative circumstances implied specific risks of contagion, which had to be properly managed by the forensic odontologists involved during the COVID-19 pandemic. A recent study based on a questionnaire administered in 34 different countries found that the activity of the forensic dentist is limited to body or bitemark identification and age estimation in only 27% of the countries. According to that research, in Italy the activity of the forensic odontologist also extends to dental malpractice analysis and the evaluation of oral impairment for Courts, private parties, and insurance companies. The odontologist's forensic activity was accordingly disrupted by the wide diffusion of the virus, especially considering that most of this practice is non-urgent. Even the magistrates' activity was largely suspended for some months (March-April 2020) and then simply reduced for several more (until July 2020), with limited public access to the courts and the parallel development of online procedures, with the sole exception, obviously, of undeferrable civil proceedings and criminal court activity.
Moreover, a single telematic system for the transmission of all judicial acts and documents has been in place only for civil proceedings, since 2014, whereas for criminal investigations and trials only paper files were used. In this unusual period, any forensic odontologist appointed by private clients or by the Court carried out visits applying the same rules and preventive regulations adopted in routine dental clinical activity. Only the civil judicial procedures involving the odontologist, usually performed in person, were moved entirely online via the now-ubiquitous video conferencing platforms. In special cases, especially when a medico-legal assessment was requested from the forensic odontologist by an insurance company, technical reports were written "per tabulas", a term meaning "on the examination of the documentary evidence", without the direct clinical examination of the individual involved. To the authors' knowledge, no previous studies are available that investigate the direct (i.e., caused by the contagion) and indirect (caused by the prophylactic regulations and restrictions) impact of the pandemic on forensic odontological activity. The ascertainment of bodily lesions and impairments is commonly and globally based on the direct examination of dead bodies at autopsy and on the direct assessment of living subjects. A specifically designed survey was therefore used to investigate how extensively the COVID-19 pandemic impacted, directly and indirectly, the Italian forensic odontologists' daily activities, the procedures adopted to manage the different contagion risks during assessments conducted on living people or dead bodies, and any peculiar cases that required the intervention of a forensic practitioner.
Materials and methods
Given the lack of similar studies, a specific questionnaire was developed following the structure of previous questionnaires used to investigate the impact of the COVID-19 pandemic on daily clinical dentistry. Our questionnaire consisted of 30 questions, 28 of which were closed-ended and multiple choice, while 2 were open-ended (specialty and region/city of activity; ). The subjects covered by the questions were divided into sections and are summarised in the attached supplementary document. The first section investigated any general difficulty encountered by the professional; the second set of questions aimed to investigate the forensic odontologists' activity as experts for Courts, insurance companies, or private parties during the different periods of the pandemic. The third section focused on the management of cases of body or bitemark identification or age estimation, while the last questions investigated possible future developments in daily practice. Most of the questions referred to three different pandemic time intervals: the first lockdown (from 08.03.2020 to 04.05.2020), the period immediately after the lockdown (from 06.05.2020 to 06.11.2020), and the third period, after 06.11.2020. The virus diffusion, contagion risks, and legal restrictions and regulations changed significantly in Italy during these three intervals, hence we assumed that the impact should be investigated accordingly. The forensic odontological community in Italy is uneven, and the lack of statistical or epidemiological data does not allow the exact extent of this type of activity in Italy to be known. Many FOds, even if adequately trained, carry out forensic activities only intermittently. Thus, the sample for the online survey was selected by inviting all the members of the largest Italian forensic odontology association (Pro.O.F - Forensic Odontology Project). The participants had access, in complete anonymity, to the questionnaire, which was available from 18.06.2021 to 20.09.2021; the collected data were then analysed with descriptive statistics.
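For illustration only, the descriptive analysis described above can be sketched as a simple tabulation of responses by region and pandemic period. The column names and the four example rows below are hypothetical and do not come from the actual questionnaire.

```python
# A minimal sketch of a descriptive-statistics tabulation for survey responses;
# variable names and rows are invented for illustration.
import pandas as pd

# Each row is one respondent: region of activity and the reported assignment
# trend ("decreased"/"stable"/"increased") in two pandemic periods.
responses = pd.DataFrame({
    "region":              ["North", "North", "Centre", "South"],
    "lockdown_trend":      ["decreased", "decreased", "decreased", "stable"],
    "post_lockdown_trend": ["stable", "increased", "increased", "decreased"],
})

# Percentage of respondents per trend, overall and stratified by region
print(responses["lockdown_trend"].value_counts(normalize=True).mul(100).round(1))
print(pd.crosstab(responses["region"], responses["post_lockdown_trend"],
                  normalize="index").mul(100).round(1))
```

In practice, each questionnaire item would be summarised this way as counts and percentages, overall and by region and age cohort.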
Results
As the main result, a total of 122 professionals answered our request. The sample was divided by age (five cohorts covering a range from 26 to over 66 years), gender, and region of provenance (northern, central, southern Italy). Respondents were 74% male and 26% female, and largely aged over 46 years. More specifically, the ages of the answering FOds (forensic odontologists) were as follows: 26-36, 9% (n. 11); 37-46, 15% (n. 18); 47-56, 31% (n. 38); 57-66, 37% (n. 45). The participating FOds came from 16 regions of the national territory: 90% from the northern and central regions (51% and 39%, respectively) and only 10% from the southern part of Italy. The northern regions included Lombardia, Liguria, Piemonte, Emilia-Romagna, Veneto, and Friuli-Venezia Giulia; the central regions included Lazio, Toscana, Umbria, Marche, and Abruzzo; the southern regions included Calabria, Sicilia, Puglia, Molise, and Campania. Fifty-three odontologists stated that they are usually enrolled by Courts and private parties. The most represented activities in all three regional areas are expertise for civil or criminal Courts (about 50% of the total) and impairment evaluation and dental malpractice cases for private parties (about 35%) ( ). Procedures performed for insurance companies were the least represented in all regions. For 79% of the answering FOds, the number of assignments decreased significantly during the first lockdown (from 08.03.2020 to 04.05.2020), regardless of the geographical area ( ). The reopening of most activities and businesses after the lockdown (second pandemic phase, from 06.05.2020 to 06.11.2020) was followed by a stable number of assignments for 42.5% of the FOds and an increase for 23.8% of the sample. The increase in assignments mainly concerned FOds located in the northern (41%) and central (45%) regions of Italy, whereas those from the southern regions recorded a decrease in assigned tasks (66.6%) ( ). This trend has since been confirmed and is equally distributed over the different national areas: most FOds reported having the same number of tasks as in the pre-pandemic period (54%), while an increase occurred for 12% and a decrease for 34% of the interviewed practitioners. The special precautions adopted by the FOds who were required to visit a patient for any kind of assignment (criminal or civil Courts, Public Prosecutor, police, private parties, insurance companies, etc.) are reported in . The reduction in the number of participants mainly concerned dental visits requested by a magistrate's Court, to which the parties, lawyers, and experts of the parties must normally be admitted. Given the high risk of contagion, the FOds acting as experts for the Court were allowed by the Judge to request a reduction in the number of participants or even to perform the visit behind closed doors, while the lawyers and the other experts were in any case granted remote online participation in the procedure through one of the many teleconference platforms. The evaluation method "per tabulas" (expertise without a direct visit, based only on documents) was adopted by 23% of the sample and only for cases provided by insurance companies.
The guidelines available for dental clinical practice were deemed adequate for the odontologist's forensic activity by 90% (110) of respondents, and 53% considered that the preventive measures, especially the use of PPE, were more necessary in routine clinical dental practice than in forensic practice ( ). Difficulties in daily practice due to the pandemic and the related precautions were experienced by 57% of the respondents and were reported evenly regardless of age or area of origin, apart from a considerably higher occurrence among FOds aged over 66 years (90%) ( ). The reported difficulties were travel limitations (66%), the partial closure of, and restrictions on admission to, offices and courts (52%), and the evaluation "per tabulas" and via telemedicine (23%) for those who carried out activities for insurance companies. In 19 cases (15%), a complaint was issued against the expert appointed by the Court for the misuse of PPE during the expertise meetings with the parties and/or for having prevented the remote participation of the experts in those meetings. In only one, but significant, case an expert was prevented from participating in person in the medical examination, and the interruption of any relationship with the parties followed; all the complainants were male and over 47. Of the affected professionals, 80% continued to carry out their activity from home (so-called "smart working"), while 20% completely stopped their activities, regardless of age cohort or region. Online procedures for Court trials, the related expertise, and tele-visiting were supported by 85% of the FOds during the pandemic. Moreover, 34% of the participants considered remote online procedures (online Court hearings, tele-visiting, telematic submission of experts' reports, etc.) a useful alternative to activity in person also in the coming post-COVID era. The figure shows the confirmed COVID-19 cases of occupational origin among members of the dental staff, all involving individuals from northern or central regions (panel A). No participant from a southern region contracted the disease; all certain contagions, whether professional or extra-professional, occurred in the northern regions (panel A). Thirteen contagions were reported as occupational accidents, more than 50% (7 cases) of which involved dental assistants, 5 dentists, and only 1 a dental hygienist. In two cases the employee filed a civil claim for contagion caused by the negligence of the employer; both cases were reported in Como, a city in the northernmost part of Italy. No criminal lawsuits for occupational contagion were reported by the respondents to the survey (panel C). Looking at the "core" of the odontologist's forensic activity, namely human identification (ID), bitemark analysis, and age estimation, only 4 of the 122 responding odontologists had been involved in such cases since March 2020. They reported 2 cases of age estimation, 4 cases of human ID, and 1 case of bitemark analysis. Moreover, 10 FOds were involved in 8 cases of domestic abuse, 3 cases of abuse of a disabled individual, and 1 case of child abuse ( ).
Discussion
Since the first weeks of 2020 we have been living through a difficult period in which a new coronavirus, spread all over the world as a pandemic, has caused severe consequences, grave social inequalities, and health service disruptions, even affecting the socio-economic stability of countries. Healthcare workers were on the front line in tackling the pandemic and were forced to endure staff shortages, ethical dilemmas, and overwhelming exhaustion, paying in terms of severe consequences for their lives and their mental and physical health. The studies in the literature, many of which are based on questionnaires sent to a specific category of health workers, investigate the difficulties encountered in performing their activity in relation to the diffusion of the virus; among them are five studies on dentistry in Italy during the pandemic. Cagetti et al. sent a questionnaire to 3599 dentists from northern Italy, since that part of the country was, at least at the beginning of the pandemic, the most affected region. The authors mainly investigated the development of COVID-19 infection and the positive and symptomatic cases among dental workers, finding that 473 participants (13.1%) suffered from one or more symptoms referable to COVID-19, but only 31 (0.86%) individuals tested positive and only 16 of them actually developed the disease. Consolo et al. investigated the psychological consequences of the pandemic and the adoption of preventive measures in a population of dentists from the Modena and Reggio Emilia area in northern Italy. This study confirmed that Italian dentists fully adopted the national and international guidelines and shut down their practices or strongly reduced their activity; 74.4% of the participants reported that COVID-19 had had a very negative impact on their activity, and most of them (89.6%) were also very worried about the future of their professional activity. De Stefani et al. sent a questionnaire to 1500 Italian dentists in May 2020 to investigate their attitudes towards COVID-related risks, while Sinjari et al. and Izzetti et al. confirmed that dental operations were strongly reduced during the first lockdown and limited to emergencies. The research of Johnson et al., based on questionnaires sent to Indian forensic odontologists, found that it is necessary to implement protocols for infection prevention but also to provide healthcare workers with the safety conditions needed to cope with emergencies. Because of the lack of previous similar studies in the forensic field, our research and the data collected from the questionnaires represent a new perspective in the investigation of the impact of the pandemic on forensic activity and give a full and clear picture of this period and of its consequences for the activity of Italian forensic odontologists on living subjects and dead bodies. Most of the participants in our sample are men, in line with the data from the CED FNOMCeO (Data Elaboration Centre of the Italian Medical and Dental National Board), which in 2019 counted 22,899 male professionals out of a total of 35,536 registered dentists. Similarly, the most represented age range is from 47 to 66 years, the same average age reported in the 2019 FNOMCeO data. With regard to forensic activity, this can possibly be explained by the long training necessary to acquire sufficient skill and knowledge to practise such a profession satisfactorily.
The answering FOds come mostly from northern Italy, and especially from Lombardia, one of the regions worst hit by the pandemic. The geographical distribution of the results corresponds closely to the distribution of the general population (about 18 million in the North, 12 million in the Centre, and 14 million in the South) and to the higher percentage of dentists and FOds in the northern Italian regions. Moreover, the impact of COVID-19 was considerably higher in the northern regions of the country, at least during the first pandemic wave. Forensic professionals, like those of all other specialties, reported difficulties in performing their activity, with a sharp decrease in assignments in all geographic areas from the beginning of 2020, when the virus first spread throughout the country, knowledge of its characteristics was very limited, and the first tentative procedures for managing the health emergency were being implemented. As expected, forensic activity in general, both in Court and in private practice, was suspended along with clinical activity, with the sole exception of emergencies and Court proceedings that could not be postponed. Furthermore, our research analysed the trend of assignments in three different phases of the pandemic. In the period May-November 2020, as many businesses reopened, a new spike in the number of infections and new, stricter government regulations led to a general plateau in odontological activity in northern and central Italy and a further decrease in the South; only in a few cases was a slight increase in assignments registered in northern and central Italy in this period. In the following phase, after November, many new rules and regional restrictions were introduced according to the diffusion of the virus and hospital occupancy, and the trend of FOd activity remained the same. Many additional difficulties, which may explain this trend, affected this phase: quarantines, travel restrictions, limits on access to Courts and public offices, and evaluations "per tabulas" or telemedicine visits. The reported difficulties also included problems in the triage procedures, the reduction, re-programming, or postponement of visits, problems in the use of PPE (especially FFP2/KN95 masks), and difficulty in properly maintaining social distancing rules, which in some cases led to complaints against the FOd appointed by the Court. The precautionary procedures implemented by the FOds were the same as in clinical activity, even though 53% of the participants reported paying closer attention to the rules during forensic activity, especially regarding the proper use of PPE, room ventilation, and triage procedures. This is quite understandable, since the proposed guidelines for general dentistry are considered adequate and useful, and the procedures of daily clinical practice, which involve ultrasonic instruments, turbines, and surgery, produce much more aerosol. The personnel in the autopsy room, at risk from possible contact with infected material from corpses, sharp injuries, splashes, and aerosol diffusion, had been properly instructed about the possible presence of SARS-CoV-2 in dead bodies for up to (at least) 16 days, at high viral loads and also in an active replication status, which could transmit the infection by dermal inoculation, inhalation, ingestion, or contamination of intact skin or of the oral, nasal, or eye mucous membranes.
The risk of contagion could therefore be considered relatively low when some supplementary precautions are carefully added to the standard universal precautions for infection prevention and adequate compliance with safety protocols is ensured. This is clearly shown by the low percentage of occupational contagion recorded among the questionnaire-responding FOds, around 2.4% (3 out of 122, panel A), compared with the higher percentage of total contagions (extra-professional and of unknown origin, panel A) registered among the participants (dentists and dental staff), which was about 13.1% (16 cases). These data are in agreement with the survey by Discepoli et al., which reports 4.7% of positive COVID-19 diagnoses among dental staff (dentists and dental hygienists), with twice as many positive participants in the north-west regions as in the other Italian geographic areas. However, the questionnaire shows a higher number of reported occupational accidents due to professional contagion, mostly involving dental office assistants (13 reported cases, panel B). Generally speaking, the reported accidents decreased in 2021, thanks to the general lockdown and the suspension of numerous activities; of that number, however, nearly 7 out of 10 cases affected health workers. In the pandemic year, road accidents reported to the insurance companies decreased consistently, while, on the contrary, occupational accidents recorded a 7.9% increase compared with 2019 in the industry and services sector alone. Such an increase, from approximately 28,500 cases in 2019 to over 96,000 in 2020 (+236.5%), was driven almost exclusively by cases in the work sectors most exposed to a high risk of contagion and significant stress, such as healthcare and social assistance. It must be specified, however, that a specific category of workers similarly exposed to the risk of contagion, such as family doctors and self-employed doctors and dentists, is excluded from the protection of the national occupational insurance offered by INAIL and therefore also from the aforementioned data. The National Institute for Insurance against Occupational Accidents (INAIL) is an Italian non-economic public body that manages compulsory insurance against occupational accidents and diseases and that applies an accurate evaluation of the possible source of contagion as the first stage of the forensic investigation in cases of COVID-19 positivity. In two cases, staff members sued the dentist or the dental clinic for responsibility in the occupational contagion. These two cases do not fall under the INAIL protection of the exposed worker, but are ordinary civil lawsuits brought to obtain compensation (panel C). They are actually few, but the outcome of these cases may prove worrying for dentists if the infected worker obtains a favourable judgment. We have no information so far about the evolution of these civil trials, and this is a limitation of our research. In addition, most of the declared occupational infections ( ) were reported in northern Italy: 100% of the cases of infection in the occupational context that led to the interruption or re-programming of visits, 57% of the dental assistant infections, and 100% of those involving hygienists. This figure is not surprising, considering that the northern regions, and especially Lombardia, were among the most affected in Italy in terms of number of cases and diffusion of the infection.
In the literature we also found other unpleasant and disappointing data: during the pandemic, some authors reported a significant increase in the number of violent crimes, such as domestic abuse and harassment. The number of domestic abuses against women reached a worrying peak in the last 18 months, to the point that the UN defined the phenomenon as a real "shadow pandemic". This phenomenon was also investigated in our research but, given the limited data in our study, it is difficult to draw definitive conclusions on the matter. We can nevertheless confirm that, among the participants in our research, we found 12 cases of assignments by the Courts regarding violent crimes, in particular 8 cases of domestic violence, 3 cases of abuse of disabled individuals, and a single case of child abuse ( ). Conversely, the low rate of assignments for dental age estimation and body identification should be regarded as only partially due to the impact of the pandemic. Especially for common body identifications, dental data are largely overlooked as a primary identifier in Italy, so Italian odontologists are not regularly involved in such activities independently of the pandemic outbreak. This statement was confirmed by the answers to our questionnaire, which show that the FOds were usually involved mainly in civil litigation cases ( ). By contrast, the number of age estimation procedures requested for children was lower than expected. Age estimation assignments probably declined because they were largely postponed by the Juvenile Courts, owing to the risk of contagion and the strong engagement of the local health authorities against the spread of the pandemic. However, it should be considered that the adoption of a standard multidisciplinary protocol, especially in cases of unaccompanied minors, is still lacking in Italy; there is still no real agreement in Italy on the methods to apply for age estimation or on the involvement of the odontologist in the procedure. Thus, we expected to find limited involvement of forensic odontologists, and the real impact of the pandemic on such activities is quite difficult to determine. The forensic odontology professional community looked at first with some scepticism, followed however by increasing favour, at the introduction of online procedures such as video calls, video conferences, and certified e-mail services, which so often supported, or even replaced, the usual procedures during the pandemic. Many insurance companies are nowadays facing an increase in the administrative costs of claims. A future goal seems to be the implementation of an end-to-end digital procedure for the whole management of a claim (from its opening to the eventual compensation), a practice that has already been significantly and successfully adopted during the pandemic, even in the assessment procedures of medical and dental malpractice cases. The development of a fully telematic procedure would certainly increase the efficiency of case management, eventually leading to a cost reduction for all the parties involved. However, some relevant limitations suggest that online procedures, even if useful tools to support traditional forensic activity, will not be able to fully replace it, at least in the near future. Such limitations include the ever stricter requirements for the proper acquisition, management, and protection of personal data and the limited reliability of a visit made with digital tools for the identification of certain signs or symptoms (e.g.
mild skin lesions, or symptoms detectable by palpation or percussion). Digital tools and procedures, though, can be profitably adopted also in identification procedures and dental autopsies and can still be considered a valid option when an odontologist cannot be present on site after a mass disaster or at a place where a single unidentified body has been found. However, it should be noted that, although the digital delivery of healthcare is promising and widely used, it entails significant medico-legal issues. For instance, telemedicine is burdened by the legal issue of de-coupling (the practitioner and the patient can be in places with different medical malpractice laws) and by the risk of data breaches due to the so-called "digital footprints".
Conclusion
The COVID-19 pandemic strongly disrupted all health and forensic activity. The Italian forensic odontologist faced risks of contagion in the usual activities, whether involving living individuals in criminal and civil procedures or post mortem assessments in identification procedures. In their post mortem autopsy activities, forensic odontologists relied on guidance issued by some scientific societies and on papers published by pathologists, while their activity on the living was much less guided because of the variability of daily practice, and they bore all the related risks during the pandemic. Our research was based on the answers to a survey of 122 Italian forensic odontologists, from which the following main results emerge. More than 80% of the participants adopted preventive and safety measures (FFP2 masks, triage, room ventilation), even though half of them believed that the usual clinical procedures are riskier because of the aerosol they inherently generate. As could be expected, during the initial lockdown about 75% of the FOds witnessed a sharp decrease in assignments, which then increased after the implementation of the vaccination campaign (returning to pre-pandemic numbers in 50% of cases). The participants reported 13 cases (54% chair assistants; 38% dentists) of occupational contagion reported to the National Institute for occupational insurance (INAIL). More than 85% of these cases were processed in northern and central Italy (54% northern; 31% central); this finding was largely expected, since the northern and central Italian regions were the most affected by the COVID-19 pandemic. In two cases that are interesting, although worrying for dental professionals, dentists were sued by a member of the dental staff, who alleged an employer's responsibility arising from a lack of proper safety and preventive procedures in the dental office, which allegedly caused an occupational SARS-CoV-2 contagion. The low reported rate of assignments for body identification emerged more as a consequence of the wide underuse of dental data as a primary identifier of unknown corpses in Italy than as a noticeable impact of the pandemic. The participants in the survey considered the recommendations for clinical practice useful also in forensic and medico-legal activity; video calls and video consultations were widely adopted.

Ilenia Bianchi: Methodology, Investigation, Data curation, Writing – original draft. Francesco Pradella: Methodology, Data curation, Writing – original draft. Giulia Vitale: Data curation, Writing – original draft. Stefano De Luca: Data curation, Writing – original draft. Forella Pia Castello: Investigation, Data curation. Martina Focardi: Data curation, Writing – original draft. Vilma Pinchi: Conceptualization, Methodology, Writing – original draft.
The Role of Medical Therapies in the Management of Cervical Intraepithelial Neoplasia: A Narrative Review

Introduction
Cervical intraepithelial neoplasia (CIN) refers to squamous epithelial lesions of the lower genital tract that are precursors to cervical cancer. Globally, invasive cervical cancer is the second most common cancer among women. The American College of Obstetricians and Gynecologists (ACOG) emphasizes the importance of managing abnormal screening results using the current American Society for Colposcopy and Cervical Pathology (ASCCP) guidelines. The current guidelines for cervical cancer recommend personalized management based on the risk stratification of patients. The most important pathogenic factor in the development of CINs is a persistent infection with human papillomaviruses (HPVs), which has the highest prevalence in young women, mostly between 25 and 35 years old. The association between HPVs, cervical dysplasia, and cervical cancer is stronger for high-risk HPV types (16 or 18). On the basis of cancer risk, CINs are divided into two categories: low-grade squamous intraepithelial lesions (LSILs, formerly called CIN I and p16-negative CIN II) and high-grade squamous intraepithelial lesions (HSILs, formerly called CIN III and p16-positive CIN II). The spontaneous regression rate of histologically proven CIN I is high, up to 85%, depending on the patient's risk factors such as age, HPV infection, or smoking habits. On the other hand, spontaneous regression of CIN III is possible, but less probable than in CIN I; moreover, in the absence of a surgical excisional procedure, the risk of developing invasive cancer increases substantially, to 5-22%, and cervical cancer represents 4% of all cancers. The current management of cervical dysplasia involves surgical procedures such as laser ablation, electrosurgery, cryosurgery, or classic excision of the abnormal or precancerous tissue caused by the HPV infection. Taking into consideration the side effects of surgical therapy on the patient's obstetric future, such as increases in premature rupture of membranes, preterm birth, neonatal intensive care unit admission rates, and perinatal mortality, there is increasing interest in the use of topical chemical agents. Even though these therapies are still in the experimental phase, some have been used successfully in the management of vulvar dysplasia due to an HPV infection. However, there has been no significant reduction in mortality or morbidity in the past 10 years, and therapeutic approaches remain very diverse. In our opinion, these treatments can be used in patients with CIN I and cervical condyloma after the colposcopic exclusion of a high-grade lesion, provided that the lesion is fully visualized and the junction is visible. Several classes of agents have been used, such as antineoplastics, immunomodulators, antiviral drugs, herbal extracts, synthetic chemical preparations, hormonal drugs, or combinations of these agents. The aim of this review is to identify topical agents for local use in CIN, which could offer an alternative approach until the emergence of therapeutic vaccines.
For the current review, we searched PubMed and Google Scholar for all available original articles on treatments for CIN (see ). We chose the databases and key words (human papilloma virus; cervical intraepithelial neoplasia; personalized treatment) to extract relevant titles published from 1996 to 2024. As an automatic tool we used Abstrackr, excluding duplicates and irrelevant abstracts. Abstrackr is a free, semi-automatic, open-source application that allows citation screening; the reviewers then screened only the articles considered relevant by the software. When screening the full-text manuscripts, irrelevant trials were excluded. In the full-text screening we also used the snowball method, searching the citations of the included manuscripts. Based on this search, we identified a total of 5674 articles. The inclusion criteria for our search were the key words, the availability of a full-text manuscript, and a proper study design (clearly presented, with properly explained inclusion and exclusion criteria, the dosage and administration of medication explained, the follow-up described, and the results clearly presented), or reviews based on clinical trials and relevant information. After excluding articles published in languages other than English, with inaccurate or inappropriate designs, or for which only the abstract was available, 91 studies remained and were included in the current review ( ).
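As a simple illustration of the de-duplication step that precedes the semi-automatic screening in Abstrackr, the sketch below collapses records retrieved from both databases onto a normalised title key. The records and the normalisation rule are assumptions for demonstration, not the code actually used for this review.

```python
# Minimal sketch: remove duplicate citations retrieved from two databases
# before abstract screening. Titles below are invented examples.
import re

def normalise(title: str) -> str:
    # Lower-case and strip punctuation/whitespace so that the same record
    # exported from PubMed and Google Scholar collapses to one key.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

records = [
    {"title": "Topical imiquimod for cervical intraepithelial neoplasia", "source": "PubMed"},
    {"title": "Topical Imiquimod for Cervical Intraepithelial Neoplasia.", "source": "Google Scholar"},
    {"title": "Photodynamic therapy of CIN: a pilot study", "source": "PubMed"},
]

seen, unique = set(), []
for rec in records:
    key = normalise(rec["title"])
    if key not in seen:          # keep the first copy, drop later duplicates
        seen.add(key)
        unique.append(rec)

print(f"{len(records)} retrieved, {len(unique)} unique after de-duplication")
```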
3.1. Topical Chemotherapy
Topical 5-Fluorouracil
5-fluorouracil is an anticancer agent that inhibits DNA synthesis and therefore slows tumor growth. Although it was introduced over 30 years ago, 5-fluorouracil continues to be one of the most widely used anticancer drugs. It is used in the treatment of several different malignancies, alone or in combination with other therapeutic agents. Topical 5-fluorouracil (5-FU) is used for the therapy of skin infections with HPVs, including genital lesions. The treatment seems to be associated with severe adverse effects such as local pain; these side effects are likely a dose-related response due to multiple daily applications. Rahangdale et al. (2014) performed a prospective randomized trial in women aged 18-29 years with CIN II, who were divided into an observation group and a group treated with intravaginal 5-FU. Regression of CIN II was observed in 93% of the women in the 5-FU group, who were given a topical 5-FU dose every 2 weeks, and in 56% of the women in the observation group. At the 6-month follow-up, the women in the 5-FU group were twice as likely to be HPV negative as the women in the control group. 5-FU is used off-label in the treatment of vaginal intraepithelial neoplasia and has been shown to play an important role in preventing the recurrence of CINs. In addition, Fiascone et al. (2017) demonstrated that intravaginal 5-fluorouracil could be used to treat VAIN (vaginal intraepithelial neoplasia) with a high success rate.
3.2. Immunotherapy
A new targeted approach is represented by antibody-drug conjugates (ADCs), a modality that can kill tumor cells more effectively. An ADC consists of a monoclonal antibody, directed against a tumor-specific antigen not generally found on normal cells, conjugated to a cytotoxic agent. This can be a promising treatment for use in cervical cancer.
3.2.1. Imiquimod (Aldara®)
Imiquimod is an imidazoquinoline compound that has been demonstrated to have potent immunomodulating, antiviral, and antiproliferative activities. Clinical studies have shown that imiquimod can induce a significant reduction in viral loads. Thus, the enhancement of the immune response mediated by imiquimod may have potential applicability in the treatment of cervical dysplasia, as a persistent HPV infection is strongly linked to the development of CIN and malignancies. Imiquimod has shown significant efficacy in treating vulvar intraepithelial neoplasia (VIN) and vaginal intraepithelial neoplasia (VAIN). In the study by Grimm et al., fifty-nine patients with untreated CIN II/CIN III were randomly allocated to a 16-week treatment with vaginal topical imiquimod or placebo. The vaginal applications of 6.25 mg imiquimod were administered as follows: one vaginal suppository weekly for the first 2 weeks, two vaginal suppositories weekly for the following 2 weeks (weeks 3 and 4), and then three vaginal suppositories weekly until the end of the 16-week treatment period. This randomized controlled trial demonstrated good local tolerance without significant adverse effects, with statistically significant regression in the treatment group (73%) compared with the placebo group (39%) (p = 0.009). Complete histologic remission was observed in 47% of the patients receiving imiquimod, which is higher than in the placebo group (14%) (p = 0.008). Cutaneous side effects such as vitiligo, psoriasis, or pemphigus were observed, but only in rare situations, and these reactions were reversible. Toll-like receptors 7 and 8 (TLR 7 and 8) and nuclear factor-kappa B (NF-kappa B) were found to play an important role: the major biological effects of imiquimod are mediated through TLR and NF-kappa B activity. Pachman's study included 56 patients and compared standard excisional treatment versus local applications of imiquimod. No differences in CIN recurrence were observed between the two groups after 2 years. Side effects such as fatigue, fever, vaginal discharge, and headache were significantly more frequent after imiquimod than after excision. The study of Pachman et al. had some limitations. First, it had a small sample size: the goal was to recruit 152 patients, but only 56 patients were enrolled. A possible explanation was the demanding visit routine, requiring five vaginal applications of imiquimod followed by two clinic visits daily.
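As a purely illustrative aside, the cumulative exposure implied by the escalating Grimm et al. schedule described above (6.25 mg per suppository over 16 weeks) can be tallied with a short arithmetic sketch; this is not part of the trial protocol, only a check of the stated regimen.

```python
# Tally of the escalating vaginal imiquimod schedule described above
# (illustrative arithmetic only).
DOSE_MG = 6.25
schedule = [(2, 1), (2, 2), (12, 3)]   # (number of weeks, suppositories per week)

suppositories = sum(weeks * per_week for weeks, per_week in schedule)
print(f"{suppositories} suppositories ≈ {suppositories * DOSE_MG} mg over 16 weeks")
# -> 42 suppositories ≈ 262.5 mg over 16 weeks
```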
3.2.2. Interferons
Interferons (IFNs) are low-molecular-mass glycoproteins released by host cells that belong to the cytokines, molecules used for cross-talk between cells in order to enhance the protective activity of the immune system and help eradicate pathogens. They also have antiproliferative effects and are used in the treatment of cancer, as well as having protective effects against radiation. The application of local IFNs could be useful in the initial phase of a viral infection to stop viral replication. Few clinical trials have been carried out on the relevance of interferons in CIN. Yua Go's study in 2024, carried out on 112 patients, compared argon plasma coagulation (APC) with interferon: the APC group included 77 patients, while 35 patients were treated with interferon. HPV clearance after a 12-month follow-up was significantly higher in the APC group than in the interferon group (87.67% vs. 51.52%, p < 0.05). Treatment with APC is more effective than interferon, with a significantly higher cure rate in the APC group (79.22% vs. 40.0%).
3.2.3. Granulocyte–Macrophage Colony-Stimulating Factors (GM-CSFs)
GM-CSF is a hematopoietic cytokine that stimulates the production of white blood cells. Antigen-presenting cells (APCs) in CIN were evaluated in two clinical trials including 15 patients with LSIL and 11 healthy women. Local applications of GM-CSF were performed; all patients with CIN showed an enhanced immune response after the application, NK and T cells produced interferon gamma, and the number of antigen-presenting cells increased. No toxic effect was recorded during the follow-up (up to 30 months). An immune response against HPV16 after the GM-CSF applications was recorded in all women with CIN. Validation is needed because of the limited number of subjects. These cytokines are produced by keratinocytes, and GM-CSF levels are correlated with tumor-associated dendritic cells (DCs). Moreover, in vitro, DCs have been shown to specifically induce the apoptosis of keratinocytes transformed by HPVs, whereas normal keratinocytes are not affected.
3.3. Anti-Viral Medications
Imunovir (Inosine Pranobex)
A relatively safe immunomodulator, inosine pranobex (Imunovir), was introduced as a potential treatment for HPV infection and cervical cancer that can be tailored to each patient's condition. More recently, inosine pranobex has been developed as an oral therapy, and this has increased success rates when HPV lesions are treated with classical therapy. Tay's study showed that inosine pranobex was superior to placebo in improving vulval epithelial dysplasia caused by subclinical HPV infections. The participants in this study were women with chronic pruritus vulvae. The study revealed that inosine pranobex induced significant regression of the dysplasia in almost two-thirds of the treated patients. The good compliance with oral administration and the low rate of adverse drug reactions make this an appropriate treatment for women with a symptomatic vulval HPV infection. There are no studies in the literature demonstrating that inosine pranobex can eliminate HPVs from cervical lesions.
Cidofovir
Cidofovir, an acyclic nucleoside phosphonate active against DNA viruses, is recognized as an effective treatment for genital and extra-genital HPV lesions. Topical cidofovir has various indications, including conditions not associated with HPVs: the treatment of acyclovir-resistant herpes lesions in immunocompromised patients, Molluscum contagiosum, and adenovirus infections, as well as cutaneous and genital warts. Cidofovir 1% gel can inhibit cervical dysplasia lesions. In a study by Van Pachterbeke et al., cidofovir was observed to be active against CIN II lesions in women: histologically complete remission was demonstrated in 61% of the cidofovir group compared with 20% in the placebo group. The failure to regress completely could be caused by insufficient penetration of the drug into all targeted cells or by the severity of the lesion. The short observation time of 6 weeks is another important weakness; most of the studies that used a longer study period (3 to 6 months) found more complete responses. Thus, it remains to be verified whether prolonged exposure to topical drugs could provide better outcomes. The Van Pachterbeke et al. study grouped the results of the CIN II and CIN III patients together, so a better response could be expected in an exclusively CIN II population treated with imiquimod and cidofovir, based on the higher remission rates for CIN II patients in the studies with separate CIN II and CIN III datasets [35].
3.4. Vitamin A Compounds
Retinoids
Retinoids are synthetic derivatives of vitamin A. First- and second-generation retinoids can bind to several retinoid receptors, in contrast to third-generation retinoids. An important role of retinoids is their involvement in epithelial cell proliferation and differentiation, processes essential for cell growth, differentiation, and cell death. Retinoids have been studied as a prevention option for various cancers and may have a role in inducing apoptosis and differentiation. Alvarez et al. reported the results of a clinical trial of alitretinoin (9-cis-retinoic acid) in 114 women with CIN II or III. The participants were randomized into three groups that received daily oral doses of alitretinoin (50 mg or 25 mg) or a placebo for 12 weeks. Medical treatment produced no histological benefit: there was no significant difference in the regression rate between the placebo (32%), 25 mg (32%), and 50 mg alitretinoin (36%) groups. Headache was the most frequent adverse effect. Ruffin et al. performed a randomized study in which 175 women with CIN II or III were given daily doses of all-trans retinoic acid (atRA), via vaginal application, at concentrations of 0.16%, 0.28%, or 0.36%, or a placebo, for 4 consecutive days. A biopsy was performed 12 weeks after entry into the trial to evaluate the outcome. No significant difference in response rate was observed among the study groups; however, the women with CIN II in the treatment group showed a higher incidence of disease regression than those with CIN III. According to a recent meta-analysis, no sign of therapeutic efficacy was obtained when using retinoids to prevent the progression of CINs.
3.5. Trichloroacetic Acid
Trichloroacetic acid is an analogue of acetic acid used for its destructive effect on the skin and mucosa, causing burns and physical destruction through protein coagulation. Gleiser et al. provided encouraging data on the local use of trichloroacetic acid for the treatment of LSILs and HSILs, with regression of the lesions observed at 6 weeks post-treatment. In a phase II clinical trial of topical trichloroacetic acid for CIN conducted by Schwameis et al. (2022), histologically complete remission and clearance of high-risk HPV infection were observed. Mild discomfort was reported by 36 of the 102 patients included, with no severe adverse events.
3.6. Photodynamic Therapy
5-Aminolevulinic Acid
Photodynamic therapy (PDT) is a relatively new treatment modality used in epithelial neoplasias. PDT is based on photoreactive drugs that preferentially concentrate in tumors. Clinical studies on the treatment of CINs using PDT are limited, and the topical application of 5-ALA in the PDT of CINs has so far been correlated with disappointing results.
3.7. Natural Therapies
Natural therapies, including the administration of propolis, turmeric, other plant products (e.g., green tea, Sanguinaria canadensis, glycyrrhizic acid (sweet wood extract), and aloe vera), or B vitamins, do not have enough evidence behind them to be recommended for the treatment of CINs. Podofilox 0.5% is an antimitotic agent developed from podophyllin resin; unlike podophyllin, it is more stable and causes fewer systemic effects. A local preparation based on a Coriolus versicolor extract (Papilocare®, Gedeon Richter, Targu Mures, Romania) has also recently appeared in Romania. Studies on this preparation are at an early stage: preliminary data presented in 2019 from groups of 84, 66, and 29 patients with high-risk HPV 16-18-31 diagnosed with ASC-US or LSIL showed a significant reduction in lesions. Polyphenon E (sinecatechins) is another recently approved topical drug used for condyloma acuminata that has shown promising results in some clinical trials. Its clinical efficacy is due to the presence of catechins, which are known to interfere with multiple cellular pathways, providing modes of action that may result in better treatment outcomes. After concentration of the eluate, more than 90% (by weight) of the resulting powder consists of catechins. The most abundant and most important active catechin compound is epigallocatechin gallate (EGCG), which accounts for >55% of the total polyphenol content, according to the Veregen prescription information. The first botanical drug to be licensed for topical use to treat external genital and perianal warts in immunocompetent patients was Polyphenon E (Veregen®, MediGene AG, Martinsried, Germany). This product is an extract made from the leaves of green tea, Camellia sinensis, and contains tea polyphenols; catechins account for more than 85% of the tea polyphenols. Polyphenon E's most important component is epigallocatechin gallate, which has high biological activity, including antitumor, immunostimulatory, and antiproliferative effects against keratinocytes infected with the virus. Moreover, catechins have strong antioxidative properties, which could inhibit the transcription of HPV proteins, as Rösl et al. have demonstrated. The efficacy of Polyphenon® E in the treatment of genital and perianal warts has been proven in a number of studies. Stockfleth et al. showed that more than 50% of the patients treated with Polyphenon® E 15% and 10% ointments exhibited complete clearance of all baseline and new genital and perianal warts after self-applying the topical treatment three times a day for up to 16 weeks. The same study revealed that the rates of recurrence 12 weeks after the treatment were between 4% and 6%. Tatti et al. demonstrated a similar efficacy for topical Polyphenon® E, with a rate of complete clearance of anogenital warts between 53% and 55%, while the recurrence rate was below 7%. Meta-analyses comparing different self-applied topical treatments revealed that products containing catechins have better clearance and recurrence rates (46.8-59.4% clearance rate; 4.1-11.8% recurrence rate) than imiquimod 5% and podophyllotoxin, and are also well tolerated by patients. Recently, researchers have shown an increased interest in the use of sinecatechins in cancer therapies. A case report published in 2013 by Gupta et al.
detailed the case of a 45-year-old patient diagnosed with vulvar intraepithelial neoplasia, warty subtype, with HPV and CIS; after being treated for 8 weeks with imiquimod 5% ointment without any result, the patient was prescribed Veregen (sinecatechins 15%) ointment three times a day for six weeks. At the end of the 6 weeks, local examination showed complete resolution of the lesions, with only mild scarring of the vulva, accompanied by biopsies negative for dysplasia or carcinoma. Ahn et al. studied the application of green tea extracts, such as Polyphenon® E, on human cervical lesions. Overall, the topical application of Polyphenon® E showed clinical efficacy in 74% of patients; a better outcome was observed in combination with the oral administration of Polyphenon® E capsules, with a positive response in 75% of patients. The use of Polyphenon® E 15% and 10% ointments led to recurrence rates of 5-9% and 1-4%, respectively. The other treatment modalities resulted in recurrence rates between 5% and 65%. Cryotherapy showed a high clearance rate, but the risk of recurrence was about 20-40%. Imiquimod 5% cream and podofilox also demonstrated good efficacy, but their recurrence rates ranged from 13% to 19% and up to 91%, respectively. The available topical therapies are listed in .
Cervical cancer is an important public health problem. In 2018, the European Society of Gynaecological Oncology (ESGO), the European Society of Pathology (ESP), and the European Society for Radiotherapy and Oncology (ESTRO) developed evidence-based recommendations for the management of patients with cervical cancer. Because of the frequent occurrence of spontaneous CIN II regression and the possible adverse effects of treatment on future pregnancies, conservative management is the proper approach; however, there is a risk of progression to cervical cancer, so personalized approaches are needed. For postmenopausal women, large loop excision of the lesion remains the most cost-effective treatment. All the proposed local treatments were evaluated, and discrepancies between studies were found in the results for the topical agents. 5-fluorouracil has shown good results in terms of CIN remission; moreover, it is known to be an efficient anticancer drug used in chemotherapy to treat various types of neoplasm. Polyphenon® E ointment is an efficacious and safe topical treatment for genital warts, and its use needs further clinical investigation. Topical chemical agents may be a therapeutic option, especially in young women, in whom there is natural clearance of HPVs and in whom surgical treatment is not indicated. Undoubtedly, after the implementation of national HPV vaccination programs, the incidence of HPV-induced lesions will decrease, but until then all therapeutic options are valuable. We summarize the recurrence rates and clinical outcomes according to the studies included, while the main limitations of our review are the lack of long-term follow-up and the heterogeneity of the included studies. In Rahangdale et al.'s work (2014), after the 6-month follow-up, the regression of HPV was significantly higher in the 5-FU group than in the control group. In Grimm et al.'s study with imiquimod, complete histologic remission was obtained in 47% of the imiquimod group, but long-term follow-up was not performed. With GM-CSF treatment, Hubert Pascale et al. obtained good immune responses in LSIL and HPV16 after a 30-month follow-up. Van Pachterbeke et al. used cidofovir, an antiviral medication, with 61% histological clearance of CIN compared with the control group (20%), but the follow-up period was very short, at 6 weeks. In Alvarez et al.'s study on retinoids, outcomes comparable with the control group, in which loop electrosurgical excision was carried out, were observed after 12 weeks of follow-up, with a 32% regression rate. The same was obtained in Ruffin et al.'s study, but, compared with placebo, an extensive meta-analysis showed no evidence of therapeutic activity following retinoid treatment of CIN. Gleiser et al. showed a better outcome following treatment with trichloroacetic acid, with a short-term follow-up (6 weeks). The same good results, with complete clinical remission, were recorded by Schwameis et al. after 6 months of follow-up. In their work on natural therapeutic topical compounds, after 6 months of follow-up, Serrano et al. observed normal cytology in a significant proportion of patients (88%). In Stockfleth et al.'s study on Polyphenon® E, the rates of recurrence after 12 weeks of follow-up were between 4% and 6%; the same was observed by Tatti et al., at <7%. Recurrence after the use of Polyphenon® E depends on the dosage.
In Yua Go's study with interferon, the expected results were not obtained, with a better clearance rate observed after argon plasma treatment at the 12-month follow-up . Sikorski et al. (2003) observed that IFN-gamma intracervical injection (used on 20 women with CIN I or CIN II) seems to be a valuable method, yielding significant treatment-related regressions ( p < 0.05) compared to spontaneous regression . Michelin (2015) concluded that immunotherapy with IFN-α (applied subcutaneously once weekly in 17 patients with CIN II-III) is a viable clinical treatment . Until now, this approach to topical treatment has seen no clinical application given the lack of proper validation. Our review found heterogeneous results for many treatments. In addition, new directions towards a personalized approach are being taken into account. We only want to underline their value and the need for further evaluation. As a future direction, in terms of efficacy and safety, delivering a smaller amount of an active substance at the local site is ideal. Conventional dosage forms are associated with leakage, and frequent, repeated administration is necessary. Novel carriers can be engineered with bio-adhesive properties in order to achieve the correct therapeutic level for a prolonged period of time . Photodynamic therapy is a pathogenetically substantiated treatment. The reproductive outcomes in treated patients have demonstrated the high effectiveness of chlorin e6-mediated fluorescence-assisted systemic photodynamic therapy. Recurrence occurred in 3.3% to 8.9% of patients during the 2-year follow-up period. This treatment had high efficacy and a good safety profile: the reported adverse events were mild, with rapid recovery after the therapy. Vaginal discharge and a burning sensation were identified as the most common side effects. Additionally, the method was evaluated for its effects on fertility, and it was found that its use is safe . The effectiveness of PDT may vary depending on the type of PDT agent used, as well as the duration of treatment, the dosage of medication and frequency of PDT administration, the location and severity of the lesions, and the host's immunological response . Molecular biomarkers are promising prognostic factors that can lead to the correct risk stratification of patients. Various biomarkers such as squamous cell carcinoma antigen (SCC-A), serum YKL-40, circulating HPV DNA and circulating micro-RNAs have shown potential utility as non-invasive biomarkers . Viral (HPV DNA and HPV E6/E7 mRNA) and cellular markers (Cyclin-Dependent Kinase Inhibitor 2A (CDKN2A)/p16ink4a, together with the proliferation marker Ki-67, region 3q26) have been evaluated as biomarkers to improve the screening and prognosis of CC . Serum levels of vascular endothelial growth factor (VEGF) were also shown to be a viable biomarker . Interleukin-6 (IL-6) may be involved in the progression of CIN to cervical cancer, and could offer a treatment biomarker for this disease . Robust molecular markers are needed in order to predict therapy response and survival, and they may help in the development of new targeted therapies . In terms of future directions, along with topical compounds, researchers are focusing on targeted therapy, using, for example, chimeric antigen receptor-modified T cells (CAR T), which is a promising approach. This is a kind of immunotherapy that utilizes the ability of the immune system to recognize specific antigens on the CC cell's surface . 
Anti-programmed death-1/anti-programmed death-ligand 1 (anti-PD-1/PD-L1) antibodies are immune checkpoint inhibitors, which have shown remarkable clinical efficacy in certain gynecological malignancies [ , , , , ]. Knocking down human papillomavirus oncoproteins should also not be overlooked . Another interesting therapeutic option is non-invasive physical plasma treatment (NIPP), which is based on slowing cell growth due to DNA damage, apoptosis, and cell cycle arrest . Antioxidative sodium selenite combined with citric acid and silicon dioxide contained in a medical device has proven effective against histologically proven CIN II as well as p16-positive CIN I . Other therapeutic avenues include angiogenesis inhibitors, together with immune checkpoint inhibitors; immune system enhancement, including agents targeting defective DNA repair, such as poly (ADP-ribose) polymerase (PARP) inhibitors [ , , , ]; or live vector-based vaccines, which represent a genome-editing method based on the endogenous repair mechanism . Anti-angiogenic drugs such as Bevacizumab can improve tumor infiltration by immune cells. In addition, cemiplimab, together with local chemotherapy, has shown efficacy in treating CINs . Safe gene delivery strategies are also being developed . Nanomaterials have emerged as another chemotherapeutic avenue . Among them, chitosan nano-capsules containing chloroaluminum phthalocyanine as a photoactive agent seem to be feasible and safe . Future research is needed in the areas of artificial intelligence (AI) and machine learning (ML), these being critical components of digital management in cervical cancer care. These technologies offer personalized healthcare services that could revolutionize the field . Large language models (LLMs) show the potential to provide intelligent question-answering with reliable information about medical queries in clear and plain English, which can be understood by both healthcare providers and patients . Alternative medicine therapies are frequently used to treat cancer in so-called integrative oncology . Fuzheng Jiedu, a traditional Chinese medicine (TCM) formula, was shown to be effective against persistent HPV infections and to reduce the HPV conversion rate in patients with infertility. Traditional Chinese medicines have been shown to improve symptoms and can increase CD3+, CD4+, and CD4+/CD8+ levels. The effect of this combined therapy is stronger than that of TCM or classical medicine alone . New studies on natural compounds, such as Papilocare ® vaginal gel, have shown higher regression rates of LSIL cervical dysplasia with this treatment than the spontaneous regression rate . The same situation was observed in the PALOMA trial, where Papilocare ® demonstrated efficacy in treating human papillomavirus (HPV) infection associated with LSIL lesions . Treatment with Coriolus versicolor has notably raised expectations .
Topical therapies can play an important role in secondary prevention by safely treating precancerous lesions, which can be a great advantage compared to surgical methods that are burdened by many complications in the short and long term. In particular, in young women where the preservation of fertility is the main goal, reproductive outcomes and recurrence frequency must be evaluated after targeted personalized treatments. Various options have been evaluated, but they still require validation. A risk stratification of the patients and a strict follow-up are needed.
|
A Novel Serum Glycobiomarker for Diagnosis and Prognosis of Cholangiocarcinoma Detected by | 3ac3f38e-767d-4c6a-83d1-cfd4d606006d | 8125881 | Histology[mh] | Cholangiocarcinoma (CCA), a cancer arising from bile duct epithelium, is one of the most common primary liver cancers in North-Eastern Thailand . Early detection of CCA is still a challenging task for CCA management due to the non-specific signs and symptoms and the lack of a sensitive/specific marker for detection of the disease. Tumor-associated glycans are expressed as a consequence of aberrant glycosylation and have been used as the biomarkers for detection of cancer. Glycomic analyses of CCA cell lines and patient tissues using lectin-based approaches indicated the occurrence of aberrant glycosylation in CCA . Several CCA-associated glycans such as carbohydrate antigen 19-9 (CA19-9), CA-S121, CA-S27 and carcinoembryonic antigen have been used as the glycobiomarkers or candidates for diagnosis, monitoring and prognostic prediction of CCA [ , , ]. Based on carbohydrate binding properties, lectins are widely used for glycobiology research in several aspects, e.g., functional analysis of specific glycans and glycoproteins, affinity purification of glycans and detection of disease-associated glycobiomarkers. Many plant lectins, for instance sialic acid-binding lectins (e.g., Maackia amurensis lectin-II and Sambucus nigra agglutinin) and N -acetyl galactosamine/galactose (GalNAc/Gal) binding lectins (e.g., soybean agglutinin, Sophora japonica agglutinin (SJA), Vicia villosa lectin, and Wisteria floribunda agglutinin), have been used to detect CCA-associated glycans and glycoproteins in tumor tissues and sera from CCA patients [ , , , , , , ]. Considering the reactivity of these lectins in CCA, GalNAc/Gal-associated glycans could be a potential biomarker for CCA diagnosis and prognostic prediction. They were highly detected in patient sera, although the level of these glycans in the sera of CCA patients were variable. In this study, Butea monosperma agglutinin (BMA), a GalNAc/Gal binding lectin from Butea monosperma , was used to detect the expression of CCA-associated glycans in patients’ tissues and sera. A double BMA sandwich enzyme-linked lectin assay was developed to measure BMA-binding glycan (BMAG) levels in the sera. The potential of using serum BMAG level as a diagnostic and prognostic marker of CCA was investigated.
2.1. BMAG Was Significantly Increased in Pre-Neoplastic Bile Ducts and CCA Cells
2.2. Expression of Tissue BMAG Was Associated With CCA Genesis in Hamster Model
2.3. BMAG Was Detected in the Patients' Sera and Can Be a Diagnostic Marker for CCA
2.4. Serum BMAG Was an Independent Poor Prognostic Indicator for CCA
To analyze the correlation between serum BMAG and the clinical data of patients, CCA patients were categorized into two groups according to the mean of serum BMAG: a low serum BMAG group (<82.5 AU/mL) and a high serum BMAG group (≥82.5 AU/mL). The univariate analysis indicated an association of serum BMAG with the sex of CCA patients ( p < 0.01; ). No correlation of serum BMAG with age or tumor size was observed using Pearson correlation ( ). The overall survival of CCA patients was 252 days (95% CI = 196.7−334.3). Kaplan–Meier analysis with a log-rank test revealed that CCA patients with high serum BMAG (N = 20) exhibited a significantly shorter survival period than those with low serum BMAG (N = 63) ( p = 0.011, c). The median survival times of the patients with high and low serum BMAG were 117 days (95% CI = 117.5–196.4) and 221 days (95% CI = 221.5–372.5), respectively. In univariate Cox regression analysis, high serum BMAG (hazard ratio [HR] = 1.960) and tumor stage IVB (HR = 2.198) were identified as prognostic factors ( ). In multivariate Cox regression analysis, serum BMAG was found to be an independent poor prognostic indicator for CCA, with an HR of 1.873, regardless of tumor stage ( ).
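The survival comparison described above (dichotomization at the mean serum BMAG of 82.5 AU/mL, Kaplan–Meier curves with a log-rank test, and Cox regression) can be sketched in Python for readers who want to reproduce the logic; the original analysis was run in SPSS, the lifelines package is used here only for illustration, and the per-patient values in the toy table are hypothetical.

# Illustrative re-implementation (not the authors' SPSS workflow) of the survival
# analysis: dichotomize serum BMAG at the cohort mean and compare groups.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: survival time (days), death event (1 = died),
# serum BMAG (AU/mL), and an indicator for tumor stage IVB.
df = pd.DataFrame({
    "time": [117, 221, 90, 400, 252, 35, 310, 150],
    "event": [1, 1, 1, 0, 1, 1, 0, 1],
    "bmag": [120, 30, 95, 15, 60, 140, 22, 88],
    "stage_ivb": [1, 0, 1, 0, 0, 1, 0, 1],
})
df["bmag_high"] = (df["bmag"] >= 82.5).astype(int)  # cut-off = mean serum BMAG

high = df[df["bmag_high"] == 1]
low = df[df["bmag_high"] == 0]

# Kaplan-Meier estimate for each group and log-rank comparison
kmf = KaplanMeierFitter()
kmf.fit(high["time"], event_observed=high["event"], label="BMAG >= 82.5 AU/mL")
print("median survival (high BMAG):", kmf.median_survival_time_)
result = logrank_test(high["time"], low["time"],
                      event_observed_A=high["event"], event_observed_B=low["event"])
print("log-rank p value:", result.p_value)

# Cox proportional-hazards model; exp(coef) corresponds to the reported hazard ratios
cph = CoxPHFitter()
cph.fit(df[["time", "event", "bmag_high", "stage_ivb"]],
        duration_col="time", event_col="event")
cph.print_summary()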
BMAG expression in CCA patients' tissues was determined using BMA histochemistry. BMAG signal was undetectable in normal bile ducts, hepatocytes and inflammatory cells, while it was highly detected in hyperplastic/dysplastic (HP/DP) bile ducts and CCA cells ( a). In CCA tissues, the positive signal of BMAG was observed mainly in the cytoplasm of CCA cells ( a,b). In some cases, BMAG signal was also detected in the luminal substances ( a, inset), suggesting the possibility of detecting BMAG in patient serum and bile. Patients' CCA tissues displayed different levels of BMAG expression, from negative to strong positive staining ( b). The positive signal of BMAG was found in 91.1% (51/56) of HP/DP bile ducts and 89.2% (66/74) of tumors in CCA tissues. BMAG expression was also found in HP/DP bile ducts from benign biliary diseases (BBD) ( c). In contrast to BBD and CCA, hepatoma rarely expressed BMAG, as the positive staining for BMAG was seen only in 25% (2/8) of the hepatoma cases ( c,d). To analyze the correlation between tissue BMAG expression and clinicopathological data, CCA patients were classified into low (<107) and high (≥107) groups according to the median tissue BMAG score. Using χ2 analysis, no correlation was observed between tissue BMAG expression and age, sex, tumor size, tumor stages or histological types ( ).
To investigate the association of BMAG expression and CCA genesis, BMA histochemistry was performed in hamster CCA tissues obtained from Opisthorchis viverrini (OV)-induced CCA hamsters. As shown in a, BMAG was undetectable in normal bile ducts and hepatocytes in all treatment groups. The BMAG signals, however, were observed in pathologic (HP/DP) bile ducts present in the OV-infected and OV + N-nitrosodimethylamine (NDMA)-treated groups as early as three months post-treatment, and a strong signal was observed in all CCA tissues that developed in the OV + NDMA group at three and six months post-treatment. Hamster tissues in all groups were categorized, based on the morphology and pathology of the bile duct epithelia, into normal, HP/DP, and CCA. The expression levels of BMAG in these categorized tissues were compared, as shown in b . The expression of BMAG in CCA (103.5 ± 20.3) was significantly higher than that of HP/DP (50.3 ± 39.9) and normal bile ducts ( p < 0.001), indicating the involvement of BMAG during CCA development.
The positive signal of BMAG was observed on the luminal surface of CCA tissues from both patients ( a, inset) and hamsters ( a), suggesting a secretory nature of the BMAG glycoconjugate that may allow its detection in the sera. To explore this possibility, a double BMA enzyme-linked lectin assay was established and used to determine BMAG levels in the sera from 83 CCA patients and 287 non-CCA control subjects, including 68 healthy control subjects, 31 OV-infected subjects (OV), 103 benign biliary diseases (BBD), and 85 other gastrointestinal cancers (OCA). The serum BMAG level of CCA patients was significantly higher than that of each control group ( p < 0.01, and a). Analysis of serum BMAG among cancer cases revealed that the average level of serum BMAG from CCA patients was higher than that of the other cancer groups tested, but the difference was statistically significant only for colon cancer ( ). Receiver operating characteristic (ROC) analysis revealed that BMAG can be a potential serum marker that significantly distinguished CCA patients from the controls, with an area under the curve of 0.712 ( p < 0.001, b). A cut-off value of serum BMAG of 26.6 AU/mL was suggested by the Youden index, providing 55.4% sensitivity, 81.9% specificity, a 46.9% positive predictive value (PPV), an 86.9% negative predictive value (NPV), an 18.1% false positive rate, a 44.6% false negative rate, and 76.0% accuracy for the differentiation of CCA from non-CCA controls. In addition, BMAG levels in 10 pairs of corresponding pre- and post-operative sera were determined. The result showed that the BMAG level was significantly reduced after tumor removal ( c, p < 0.05, paired t -test), indicating the tumor origin of serum BMAG. As serum CA19-9 is a standard marker for CCA, the diagnostic power of serum BMAG and serum CA19-9 was compared in matched cases of 42 CCA and 231 non-CCA controls. As shown in , serum BMAG provided a diagnostic value comparable to that of serum CA19-9.
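A minimal sketch of the ROC/Youden-index workflow used above, written in Python with scikit-learn; the serum BMAG values and labels below are invented for illustration, and only the cut-off logic (maximizing sensitivity + specificity − 1) and the definitions of the reported diagnostic metrics reflect the analysis described in the text.

# Sketch of the ROC analysis and diagnostic metrics; BMAG values are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])        # 1 = CCA, 0 = non-CCA
bmag = np.array([90.0, 35.0, 12.0, 8.0, 30.0, 5.0, 150.0, 20.0, 27.0, 60.0])

print("AUC:", roc_auc_score(y_true, bmag))
fpr, tpr, thresholds = roc_curve(y_true, bmag)
youden_j = tpr - fpr                          # J = sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden_j)]      # the paper reports 26.6 AU/mL
print("Youden cut-off:", cutoff)

pred = bmag >= cutoff
tp = int(np.sum(pred & (y_true == 1)))
fp = int(np.sum(pred & (y_true == 0)))
tn = int(np.sum(~pred & (y_true == 0)))
fn = int(np.sum(~pred & (y_true == 1)))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / len(y_true)
print(sensitivity, specificity, ppv, npv, accuracy)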
Several lectin-based approaches have been developed for detection of CCA-associated glycans in tissues and/or sera of patients [ , , , , ]. In this study, BMA, a lectin from Butea monosperma seeds, was used to detect the GalNAc/Gal glycans in clinical specimens from CCA patients, and its potential to discriminate CCA patients from non-CCA subjects was investigated. Using lectin histochemistry, BMA-binding glycans (BMAG) were detected in pre-neoplastic (HP/DP) bile ducts and CCA tissues from patients, but not in normal bile ducts. The significance of BMAG in CCA development was demonstrated by the OV-induced CCA hamster model. High levels of BMAG could be detected in the sera of CCA patients, distinguishing them from non-CCA controls. Moreover, a high serum BMAG level was associated with a shorter survival time of CCA patients. These findings indicate that BMAG is a potential serum glycobiomarker for the diagnosis and prognosis of CCA. Over the past decade, plant lectins have been used as tools in the search for new glycobiomarkers for CCA . Butea monosperma is a medicinal plant that exhibits many pharmacological effects, such as anti-microbial, anthelmintic, and anti-diabetic effects . In this study, BMA, a GalNAc/Gal binding lectin from the seeds of Butea monosperma , was used to detect BMAG in the tumor tissues and sera from CCA patients and evaluated for its potential as a glycobiomarker for the diagnosis and prognosis of CCA. In contrast to hepatoma, the majority of patient CCA tissues tested exhibited a BMAG signal, with variable BMAG scores. The BMAG scores, however, were not associated with clinicopathological findings of CCA patients in this setting. As the BMAG signal was positive in almost all CCA cases but not in hepatoma, lectin histochemistry of BMAG may be used to differentiate CCA from hepatoma. A double BMA enzyme-linked lectin assay was successfully developed to detect BMAG in patients' sera. The levels of serum BMAG from CCA patients were significantly higher than those of the non-CCA groups and could differentiate CCA patients from the control groups with a diagnostic power comparable to previously reported glycobiomarkers, such as CA19-9, SJA-binding glycan, and Wisteria floribunda agglutinin-positive mucin-1 . These individual glycan markers achieved sensitivities of 63–68% and specificities of 73–88%. It is well known that the diagnostic power of an individual test can be improved by using a combination of two to three markers . Combining serum BMAG with other CCA markers should be further investigated to improve the diagnostic power for CCA. In addition, high serum BMAG and tumor stage IVB were independently associated with shorter survival of CCA patients in this cohort. The expression levels of BMAG in bile ducts with HP/DP and CCA were not significantly different. BMAG levels in the serum of CCA patients, however, were significantly higher than those of BBD cases, and hence serum BMAG could be used to distinguish between benign and malignant conditions. There are two possible explanations for this discrepancy. First, the number of BMAG-producing cells in the tumor (CCA cells) may be higher than that in HP/DP lesions, resulting in higher serum BMAG levels in CCA patients than in BBD patients. Second, BMAG-carrier proteins expressed in CCA might be different from those expressed in hyperplasia/dysplasia. Identification of carrier proteins in hyperplasia/dysplasia and CCA is needed to address this point. 
Alteration of glycosylation has been shown to be involved in tumor growth, metastasis and therapeutic resistance . Several specific glycans are applicable as markers for diagnosis, monitoring and prognostic prediction of many cancer types . In CCA, several CCA-associated glycans are involved in tumor progression and are suggested to be the targets for CCA treatment . Indramanee et al. used histochemistry of 14 commercially available lectins to detect alteration of glycans in patient CCA tissues, and found that almost all lectins tested gave positive signals with normal bile duct, HP/DP and CCA, including hepatocytes and stromal cells . SJA was the only GalNAc/Gal binding lectin that bound to HP/DP and CCA but not normal bile duct and stromal cells. Similar to BMA, SJA binding Gal/GalNAc glycan (SNAG) could also be detected in the patients’ sera with a lower sensitivity (59.5%) and specificity (73.6%) . BMA recognizes glycan structures similar but not identical to SJA. SJA has a specificity toward T antigens (Gal-β-1,3-GalNAc) , whereas BMA had the highest affinity to GalNAc . As shown in c, a high serum BMAG level was associated with poor prognosis of CCA, implying an association of BMAG with CCA progression. The functional analyses in CCA cell lines, namely KKU-213, KKU213-L5, KKU-214, and KKU-214L5, showed that BMA significantly reduced the migration and invasion ability of CCA cell lines ( p < 0.05) . The mechanism underlying the involvement of BMAG in CCA metastasis should be further investigated. In conclusion, this present study demonstrated the elevation of a novel CCA-associated glycan, BMAG, in the pre-neoplastic (HP/DP) bile ducts and CCA. BMAG was demonstrated to be a potential glycobiomarker for diagnosis and prognosis of CCA. Further studies on the mechanism and significance of BMAG in CCA development may provide a better understanding of glycobiology in cancer, which could possibly lead to the curative treatment of CCA.
4.1. Patients' Tissues and Sera
4.2. CCA Hamster Tissues
4.3. Purification and Biotinylation of Butea monosperma Agglutinin
4.4. BMA Histochemistry
4.5. Double BMA Sandwich Enzyme-Linked Lectin Assay
4.6. Statistical Analysis
Statistical analyses were performed using SPSS 17.0 software (SPSS, Chicago, IL, USA) and graphs were generated using GraphPad Prism 8 (GraphPad Inc., La Jolla, CA, USA). Expression levels of BMAG in CCA tissues and sera were presented as the mean with standard deviations (SD). The differences of BMAG levels between groups were compared using Student's t -test. Correlation between BMAG levels and clinicopathological data of CCA patients was analyzed using a Chi-square (χ 2 ) test. The correlation between BMAG levels and continuous variables, age and tumor size, were analyzed using a Pearson correlation. Survival analysis was performed using a Kaplan–Meier plot and log-rank test. A Cox proportional-hazards regression was used to analyze the hazard ratio and multivariate survival analysis. The variables with p < 0.1 in the Univariate Cox-Regression analysis were selected for a Multivariate Cox-Regression analysis. p < 0.05 was considered as statistical significance.
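For illustration, the group comparisons listed above (Student's t-test, chi-square test, and Pearson correlation) can be reproduced with SciPy/pandas as sketched below; the study itself used SPSS 17.0, and all values in this example are hypothetical.

# Hypothetical data; the study used SPSS 17.0 -- this SciPy/pandas version only
# illustrates the same tests.
import numpy as np
import pandas as pd
from scipy import stats

bmag_cca = np.array([90.0, 35.0, 120.0, 60.0, 150.0, 88.0])      # AU/mL, CCA patients
bmag_control = np.array([8.0, 30.0, 5.0, 20.0, 27.0, 12.0])      # AU/mL, controls
t_stat, p_ttest = stats.ttest_ind(bmag_cca, bmag_control)        # Student's t-test

# Chi-square test between BMAG group (high/low) and a categorical variable (e.g., sex)
table = pd.crosstab(pd.Series([1, 1, 0, 0, 1, 0], name="bmag_high"),
                    pd.Series(["M", "F", "M", "F", "M", "F"], name="sex"))
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Pearson correlation between BMAG level and a continuous variable (e.g., age)
age = np.array([55, 62, 70, 48, 66, 59])
r, p_pearson = stats.pearsonr(bmag_cca, age)
print(p_ttest, p_chi2, r, p_pearson)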
Paraffin-embedded tissues and serum samples were obtained from the specimen bank of the Cholangiocarcinoma Research Institute, Khon Kaen University, Thailand. Informed consent was obtained from each subject and the research protocol was approved by the ethics committee for human research of Khon Kaen University (HE611318). Paraffin-embedded liver sections from patients with 74 CCA, 10 BBD and 8 hepatomas were included in this study. Pre-operative serum samples were collected from the histologically proven cases of 83 CCA, 103 BBD (12 biliary adenoma, 4 biliary cyst/papillomatosis, 12 cholecystitis/cholangitis, 2 cirrhosis, 49 periductal fibrosis, 1 hemangioma, 2 gall stone, 9 biliary inflammation, and 12 others), and 85 OCA (14 ampulla of vater, 15 colorectal, 5 gall bladder, 29 hepatoma, 13 stomach, and 9 pancreas). Post-operative sera obtained at least 3 months after surgical treatment from 10 CCA patients were collected. The OV infected subjects (N = 31) were identified by fecal OV egg examination. Sera of asymptomatic healthy control subjects (N = 68), defined by normal fasting blood glucose and liver functions (aspartate amino transferase, AST; alanine amino transferase, ALT; alkaline phosphatase, ALP), were collected during the annual check-up at Srinagarind Hospital, Faculty of Medicine, Khon Kaen University. Serum CA19-9 was obtained from the records of the routine analysis of Srinagarind Hospital (Roche Diagnostics GmbH) according to the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC).
Paraffin-embedded hamster liver sections were obtained from the previous study , under an approval by the Animal Ethics Committee of Khon Kaen University (AEMDKKU1/2558). Samples were collected from 4 groups of experimental hamsters: non-treated control, OV infected, NDMA treated and OV + NDMA treated groups, as previously described . BMAG expression was examined in 5 representative tissue sections per group at 1, 3 and 6 months after the beginning of experiments.
BMA was purified from Butea monosperma seeds using 50–80% ammonium sulfate precipitation and GalNAc-agarose affinity chromatography, as described previously . Biotinylated BMA was prepared using an EZ-Link Sulfo-NHS-LC-Biotin kit (Pierce/Thermo Fisher Scientific, Rockford, IL, USA) according to the manufacturer’s recommendation.
The expression of BMAG in CCA tissues from CCA patients and experimental CCA in the hamster liver was determined using lectin histochemistry . After deparaffinization and blockage of non-specific reactivity using 0.5% BSA in phosphate-buffered saline (PBS), the sections were incubated with 3 µg/mL of biotinylated BMA at room temperature overnight, followed by a 1:100 dilution of horseradish peroxidase (HRP)-conjugated streptavidin (streptavidin-HRP, Invitrogen, Camarillo, CA, USA) at room temperature for 40 min. The staining signal was developed using 25 mg/mL of diaminobenzidine tetrahydrochloride (Sigma Aldrich, St. Louis, MO, USA) and counterstained with Mayer's hematoxylin (Bio-Optica, Milano, Italy). Slides incubated with PBS instead of biotinylated BMA or streptavidin-HRP were used as negative controls of the detection system. The specificity of the lectin was shown by the negative signal in the slide incubated in the presence of 200 mM GalNAc ( ). The expression level of BMAG was presented as a BMAG score (0–300), calculated by multiplying the staining intensity (negative = 0, weak = 1, intermediate = 2, strong = 3) by the percentage of positive cells (0–100%) .
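The BMAG score defined above is a simple product of staining intensity and the percentage of positive cells; a small helper function makes the 0–300 range explicit (the example inputs are hypothetical).

def bmag_score(intensity, percent_positive):
    """BMAG score = staining intensity (0-3) x percentage of positive cells (0-100)."""
    if not (0 <= intensity <= 3 and 0 <= percent_positive <= 100):
        raise ValueError("intensity must be 0-3 and percent_positive 0-100")
    return intensity * percent_positive   # ranges from 0 to 300

print(bmag_score(2, 80))    # 160 (hypothetical example)
print(bmag_score(3, 100))   # 300, the maximum score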
The level of serum BMAG was determined using a double BMA sandwich enzyme-linked lectin assay. Each well of a 96-well plate was coated with 50 µL of 0.25 µg/mL of BMA at 4 °C overnight. After washing with 0.1% Tween-20 in PBS pH 7.4 (PBST) and blocking non-specific binding with 3% BSA in PBST at room temperature for 1 h, the plate was incubated with 50 µL of serum diluted 1:10 in 1% BSA-PBST at room temperature for 1 h. After incubation, the sample was removed, and the plate was washed 5 times with PBST. BMAG was detected by incubating with 0.5 µg/mL of biotinylated BMA at room temperature for 1 h, followed by 1:10,000 diluted streptavidin-HRP (Invitrogen, Camarillo, CA, USA) at room temperature for 1 h. After washing 5 times, the signal was developed by incubating with tetramethylbenzidine (TMB) substrate (Sigma Aldrich, St. Louis, MO, USA) in the dark at room temperature for 15 min. The reaction was stopped using 50 µL of 1 N H2SO4 and the absorbance was measured at 450 nm. The serum BMAG level was calculated and presented as arbitrary units (AU)/mL based on the standard control prepared from cell lysate of KKU-213A, a human CCA cell line . The BMAG level of each sample was the average of duplicate measurements.
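A possible way to convert absorbance readings into AU/mL, assuming the standard control is run as a dilution series on each plate, is sketched below; the AU values assigned to the KKU-213A standard dilutions, the A450 readings, and the application of the 1:10 dilution factor are assumptions for illustration only.

# Hypothetical standard curve: A450 of serially diluted KKU-213A lysate paired with
# assigned arbitrary-unit values; real assignments would come from the laboratory.
import numpy as np

std_au = np.array([6.25, 12.5, 25.0, 50.0, 100.0, 200.0])    # AU/mL (assumed)
std_a450 = np.array([0.08, 0.15, 0.29, 0.55, 1.02, 1.80])    # absorbance (assumed)

def serum_bmag_au(a450_duplicates, dilution_factor=10):
    """Average the duplicate wells, interpolate on the standard curve, and apply the
    serum dilution factor (whether the reported AU/mL includes this correction is an
    assumption)."""
    mean_a450 = float(np.mean(a450_duplicates))
    # np.interp requires the x-coordinates (A450) to be increasing, as above.
    return float(np.interp(mean_a450, std_a450, std_au)) * dilution_factor

print(serum_bmag_au([0.42, 0.46]))   # serum tested at a 1:10 dilution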
|
Radiotherapy alters expression of molecular targets in prostate cancer in a fractionation- and time-dependent manner | 74efd527-d38b-4140-9715-445e64a870e7 | 8894377 | Internal Medicine[mh] | Although the 5-year survival rates for prostate cancer are more than 90% it is still one of the leading causes of cancer-related death among men in the United States due to its high incidence . Approximately 1 in 8 men is diagnosed with prostate cancer at one point in his life and 1 in 41 men will die of it. Most patients have local disease but around 16% of prostate cancers have already spread to regional lymph nodes or other organs at the time point of diagnosis . Between 20 and 30% of patients will have tumor relapse after curative treatment which can be either local or metastatic recurrence . Interestingly, it has also been shown that local recurrence can occur by tumor cells originating from metastases repopulating the primary tumor site . Multiple factors play a role in disease prognosis and guide therapy decisions such as patient age, Gleason score, and PSA level . The evolving treatment options for primary prostate cancer are surgery and a variety of radiotherapeutic options which can be used alone or in combination. These regimens include standard fractionated, moderately hypofractionated , or ultrahypofractionated (such as stereotactic ablative radiotherapy—SABR, also known as stereotactic body radiation therapy—SBRT) external beam radiotherapy in addition to temporary high dose or permanent low dose rate brachytherapy through implantation of radioactive sources into the malignant tissue. Proton therapy is also used but proton biology is not the topic of this report. Especially for patients with disease recurrence after radiation therapy, there is a high need for additional treatment options, since the possibilities to re-irradiate are often limited. In recent years, targeted therapy was implemented into multimodal cancer treatment regimens generally used before or simultaneously with radiation . The underlying hypothesis is that while conventional chemotherapy impacts proliferation and survival of both malignant and normal tissue, targeted therapeutics aim to exploit the abnormal molecular signaling often found in cancer cells and to target and kill tumors more specifically – . Targeted therapy is often directed against kinases overexpressed in cancer driving survival and proliferation such as receptor tyrosine kinases (RTK) and associated molecules. Pre-clinical studies show that inhibition of the epidermal growth factor receptor (EGFR) reduces tumor growth and results in radiosensitization of different cancer types – . Similar to EGFR, insulin-like growth factor type 1 receptor (IGF-1R) and platelet-derived growth factor receptor (PDGFR) also play a major role in tumor development and progression , . Therefore, it is not surprising that inhibitors of RTKs were among the first approved targeted therapeutics in radiation oncology , . Through activation by their ligands, growth factor receptors control a network of downstream signaling molecules. One central mediator is the serine kinase AKT which is linked to DNA repair, apoptosis, protein translation, and the cellular radiation response – . In lines with this, the AKT inhibitors Ipatasertib and Capivasertib increase radio- and chemosensitivity in in vivo studies and are currently in clinical trials for targeted cancer therapy – . 
Further, the MAPK pathway downstream of growth factor receptors also contains several promising targets for molecular inhibitors. Trametinib, targeting MEK1/2, has been approved for metastatic melanoma and is under evaluation for solid tumors harboring BRAFV600 mutations . Some of the known key driver mutations in cancer cells that are targeted by molecular inhibitors have been shown to be maintained throughout the disease course and to affect tumor characteristics, radiosensitivity, and metastatic potential . However, despite several early promising results, some clinical trials found no significant benefit of adding targeted therapy to standard-of-care cancer treatment or even observed increased normal tissue side effects , . One potential reason for these negative results is that pre-therapeutic molecular analyses are often used to select molecular therapy without taking into consideration that irradiation can alter the expression and activity of molecular targets and thereby affect the efficacy of pharmacological inhibitors , . One research focus of our group is to examine whether radiation-induced target expression can be exploited to increase the efficacy of targeted drugs , , . The modulation of target expression can occur at both the genetic and epigenetic levels. A recent study showed that radiotherapy can increase histone H3 methylation and lead to a stable upregulation of stem cell markers in prostate cancer cells . Histone deacetylase (HDAC) inhibitors such as Panobinostat and Vorinostat, which affect histone acetylation and methylation, reduce tumor radioresistance and are under clinical evaluation as anti-cancer drugs , . Building on our work with radiation-inducible molecular targets , , , here we show that radiotherapy impacts gene expression and protein phosphorylation of molecular targets in a fractionation- and time-dependent manner and that some of these radiation-induced changes persist for several months. Further, a 10 Gy single dose of radiation leads to an upregulation of molecular targets and increases the sensitivity of prostate cancer cells to clinically used molecular inhibitors, providing a potential novel approach to combining radiation and drug treatment.
Cell culture
Radiation exposure and long-term cultures
Colony formation assay
Whole-genome gene expression analysis
Pathway analysis with IPA
Phospho-proteomic array
Real-time PCR
Immunofluorescence staining
Data analysis

PC3 cells were obtained from the NCI tumor bank and used up to a passage number of 15. Asynchronously and exponentially growing cells were cultured at 37 °C and 5% CO2 in RPMI 1640 containing GlutaMAX (Invitrogen) supplemented with 10% fetal bovine serum (FBS, Invitrogen). Cells were regularly tested for mycoplasma contamination.
Irradiation was performed at room temperature using single doses or multiple fractions of 320 kV X-rays with a dose rate of 2.3 Gy/min (Precision X-Ray Inc.). Multifractionated radiation was carried out as described before, with two 1 Gy fractions per day (with a 6 h interval between the two fractions) . At 24 h after the final radiation dose or 6 d after the first radiation dose (Supplementary Figure S1), total RNA was extracted for short-term (ST) gene analysis (Fig. A). For long-term PC3 cultures, irradiated and unirradiated cells were passaged twice a week and cultured for at least 8 weeks after irradiation before the cells were used for long-term (LT) gene analysis or inhibitor experiments (Fig. A, Supplementary Figure S1).
Colony formation assays were performed as previously described . Briefly, cells were trypsinized, counted and seeded in six-well plates. Treatment with inhibitors (Table ) was started at 24 h after plating. DMSO-treated cells were used as controls. The inhibitor was removed after 24 h of incubation. Cells were cultured for a total of 12 days after plating. After fixation and staining with 0.4% crystal violet, cell clusters with > 50 cells were counted with a stereomicroscope (AmScope). Surviving fractions were calculated as follows: (colony number in treated wells × cells plated in untreated wells) / (colony number in untreated wells × cells plated in treated wells).
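The surviving-fraction formula above is equivalent to normalizing the plating efficiency of treated wells to that of the untreated control; a short Python helper with hypothetical colony counts illustrates this.

def surviving_fraction(colonies_treated, cells_plated_treated,
                       colonies_untreated, cells_plated_untreated):
    """Plating efficiency of treated cells divided by that of the DMSO control;
    algebraically identical to the formula given in the text."""
    pe_treated = colonies_treated / cells_plated_treated
    pe_control = colonies_untreated / cells_plated_untreated
    return pe_treated / pe_control

# Hypothetical counts: 60 colonies from 500 treated cells vs. 150 from 300 control cells
print(surviving_fraction(60, 500, 150, 300))   # 0.24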
Total RNA was extracted from three replicates using a QIA shredder spin column (Catalog no. 79654, Qiagen) as published previously . The RNeasy mini kit (Qiagen) was used to purify the extracted RNA. Microarray analysis was done using CodeLink Whole Genome Bioarrays representing 55,000 probes. Scanned images from arrays (gridding and feature intensity) were processed with the CodeLink Expression Analysis software (GE Healthcare), and the data generated for each feature on the array were analyzed with GeneSpring software (Agilent Technologies). Raw intensity data for each gene on every array were normalized to the median intensity of the raw values from that array.
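The per-array median normalization described above can be expressed in a few lines of NumPy; the intensity values below are placeholders.

import numpy as np

# Rows = arrays, columns = probes (placeholder raw intensities)
raw = np.array([
    [120.0, 340.0, 80.0, 560.0],
    [95.0, 410.0, 60.0, 300.0],
])
# Divide each probe by the median raw intensity of its own array
normalized = raw / np.median(raw, axis=1, keepdims=True)
print(normalized)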
Activation of molecular pathways was analyzed using Ingenuity Pathway Analysis (IPA) software (Qiagen) as described before . Differentially expressed genes and corresponding p values were uploaded to the IPA platform. Each gene identifier was then mapped with its corresponding gene object in the Ingenuity Pathway Knowledge Base and an activation z-score was calculated which increased or decreased depending on the known activating or inhibiting function of pathway molecules.
For analysis of the phospho-proteome, cells were plated and irradiated with 10 Gy SD, or with 10 fractions of 1 Gy dose per fraction with two fractions per day (Fig. A). At 30 min (ST) and at 2 months (LT) after irradiation, the cells were lysed from plates in T-Per (ThermoFisher Scientific) mixed 1:1 with 2X SDS Tris–Glycine buffer (Invitrogen, Carlsbad, CA) + 2-mercaptoethanol (final concentration = 2.5%). Reverse phase protein microarrays were performed as previously published . In brief, samples were diluted and printed in duplicates onto nitrocellulose slides. HeLa cell lysates (with or without pervanadate) were used as positive and negative controls (Supplementary Figure S2). Microarrays were stained with specific and validated antibodies and analyzed with a biotin-linked signal amplification system (DAKO). The total protein amount of the sample was determined with the SYPRO Ruby stain (ThermoFisher Scientific).
Real-time PCR was performed as previously published . In brief, one microgram of total RNA was reverse transcribed using an RT2 First Strand synthesis kit (Qiagen, 330401). qPCR assays were performed using RT2 SYBR Green ROX qPCR Mastermix (Qiagen, 330520) and RT2 qPCR Primer Assays (Qiagen; product no. 330001) for FGFBP1, TGFBI, PIK3CD, FGF1, PGF, and IGFBP1. GAPDH, 18S, and Rplp0 were used as normalizing genes. Real-time PCR reactions were performed in an Applied Biosystems thermal cycler (QuantStudio 3). PCR steps included a holding stage at 95 °C for 15 min, followed by 40 cycles of denaturation at 95 °C for 15 s and annealing/extension at 60 °C for 1 min. A melt curve analysis was performed to ensure the specificity of the corresponding RT-PCR reactions. Fold change = 2^(−ΔΔCt), where ΔΔCt = ΔCt(test) − ΔCt(control); ΔCt = Ct(gene) − mean Ct of GAPDH, 18S, and Rplp0; and Ct is the threshold cycle number.
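The 2^(−ΔΔCt) calculation can be written out explicitly as follows; the Ct values in the example are hypothetical, and the three reference genes are averaged as stated above.

import numpy as np

def delta_ct(ct_gene, ct_reference_genes):
    """dCt = Ct(gene) - mean Ct of the reference genes (GAPDH, 18S, Rplp0)."""
    return ct_gene - np.mean(ct_reference_genes)

def fold_change(ct_gene_test, ct_refs_test, ct_gene_control, ct_refs_control):
    ddct = delta_ct(ct_gene_test, ct_refs_test) - delta_ct(ct_gene_control, ct_refs_control)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for one target gene in irradiated (test) vs. control cells
print(fold_change(24.1, [16.0, 9.8, 18.2], 26.3, [16.1, 9.9, 18.0]))   # ~4.6-fold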
Immunofluorescence was performed as recently described . Cells were fixed with 3% formaldehyde/PBS for 30 min, permeabilized with 0.5% Triton X-100/PBS for 10 min and blocked with 3% BSA/PBS for 1 h. Staining of cleaved caspase 3 (Cell Signaling, #9664) was carried out overnight at 4 °C and with anti-rabbit secondary antibody for 2 h at room temperature. After several washes with PBS, samples were covered with Vectashield/DAPI mounting medium (Vector Labs). Images were acquired using an AxioImager.Z1/ApoTome microscope (Zeiss).
Data were analyzed with Microsoft Excel 2019 . Fold change was calculated by normalizing the measured values to the corresponding control. Genes were considered upregulated if the fold change was greater than 1.5 and downregulated if the fold change was below 0.66. The unpaired, two-sided Student’s t-test was used to test for statistical significance. The irradiated samples were compared to the corresponding unirradiated controls. Results were considered statistically significant if the P value was less than 0.05.
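A compact sketch of the differential-expression filter described above (fold change > 1.5 or < 0.66 and P < 0.05 by unpaired two-sided t-test), using SciPy instead of Excel; the toy expression matrix is invented.

import numpy as np
from scipy import stats

# Toy expression matrix: rows = genes, columns = replicate measurements
irradiated = np.array([[5.1, 4.8, 5.3],
                       [1.0, 1.1, 0.9]])
control = np.array([[2.0, 2.2, 1.9],
                    [1.0, 0.9, 1.1]])

fold_change = irradiated.mean(axis=1) / control.mean(axis=1)
p_values = np.array([stats.ttest_ind(irradiated[i], control[i]).pvalue
                     for i in range(irradiated.shape[0])])
regulated = ((fold_change > 1.5) | (fold_change < 0.66)) & (p_values < 0.05)
print(fold_change, p_values, regulated)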
Fractionation impacts gene expression in a time-dependent manner
SD results in a long-term upregulation of molecular targets
Cell functions and pathways can be activated by different gene expression patterns
SD leads to a long-term increase in phosphorylation of the molecular targets AKT and MET
Prostate cancer cells surviving SD irradiation are more sensitive to molecular targeted drugs
SD irradiation reduces HDAC expression and increases the resistance of cancer cells to HDAC inhibitors

To analyze the short- and long-term effects of fractionation on the gene expression of molecular targets, we irradiated PC3 prostate cancer cells with either a single dose of 10 Gy (10 Gy SD) or a multifractionated regimen of ten 1 Gy fractions (10 × 1 Gy MF) and performed whole genome microarrays at 24 h (short-term, ST) and 2 months (long-term, LT) after the final radiation dose (Fig. A). Interestingly, while immediately after irradiation (ST), 10 × 1 Gy MF irradiation had a stronger impact on gene expression, we found that 10 Gy SD irradiation resulted in more long-term (LT) expression changes (Fig. B). From the 669 genes that were significantly (P < 0.05) upregulated (fold change > 1.5) or downregulated (fold change < 0.66) at 24 h (ST) after radiotherapy compared to the unirradiated controls, only 26 were affected by both 10 Gy SD and 10 × 1 Gy MF irradiation (Fig. Ci). At 2 months (LT), the expression of 206 genes was changed by both regimens, 3188 genes only by 10 Gy SD and 190 genes by 10 × 1 Gy MF irradiation (Fig. Cii). Further, 10 Gy SD had an impact on 18 genes at both time points (24 h-ST, 2 months-LT) (Fig. Ciii) and 10 × 1 Gy MF on 40 genes (Fig. Civ). Results showed an overlap of 284 genes which changed shortly after MF irradiation and were also differentially expressed in the long-term SD cells. Since multifractionated irradiation is delivered over 5 days in contrast to single-dose irradiation which is completed within minutes, we additionally examined the radiation-induced expression of selected genes at 6 days after start of irradiation (Supplementary Figure S1). The majority of genes showed similar expression but some genes for example IGFBP1 were strongly upregulated after 6 days but not after 24 h (Supplementary Figure S1).
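The overlaps reported above are simple set intersections and differences of the per-condition gene lists; the sketch below uses placeholder gene symbols only to show the bookkeeping.

# Placeholder gene symbols; only the set arithmetic mirrors the comparison above.
sd_lt = {"IGF1R", "AKT3", "MET", "FGF1", "PGF"}    # changed at 2 months after 10 Gy SD
mf_lt = {"MET", "FGF1", "TGFBI"}                   # changed at 2 months after 10 x 1 Gy MF

shared = sd_lt & mf_lt        # analogous to the 206 genes altered by both regimens
sd_only = sd_lt - mf_lt       # analogous to the 3188 genes altered only by SD
mf_only = mf_lt - sd_lt       # analogous to the 190 genes altered only by MF
print(len(shared), len(sd_only), len(mf_only))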
Next, we examined the short- and long-term effects of irradiation on expression of genes regulating important cellular functions and pro-survival molecular pathways with Ingenuity Pathway Analysis. The pathways and cell functions which were most strongly affected by irradiation are presented in Fig. A and B. At 24 h (ST) after 10 × 1 Gy MF irradiation and at 2 months (LT) after 10 Gy SD irradiation, genes involved in cell movement, invasion, proliferation and survival were upregulated, while death- and apoptosis-related signaling was decreased in comparison to the unirradiated controls (0 Gy SD, 0 Gy MF)(Fig. A). Accordingly, growth factor-related pathways including IGF1, ErbB, PDGF, PI3K, and MAPK signaling were activated under these conditions (Fig. B). However, the expression of the actual target or receptor of these signaling pathways was only upregulated after 10 Gy SD but not after 10 × 1 Gy MF irradiation (Fig. C). It is important to note that although there were significant (P < 0.05) gene expression changes (fold change < 0.66 or > 1.5) at 24 h (ST) after 10 Gy SD irradiation compared to the unirradiated control, these did not substantially affect the activity of the selected pathways and cell functions shown in Fig. A and B.
To further evaluate the similar changes in cell functions and pathways observed immediately after 10 × 1 Gy MF irradiation and at 2 months (LT) after 10 Gy SD, we compared the affected genes under each condition. Although both types of irradiation led to a strong activation of cell movement, the genes causing this activation only partially overlapped (Fig. A, B). A total of 214 genes were significantly (P < 0.05) altered under both conditions compared to the unirradiated controls, 152 genes were uniquely changed in PC3 cells shortly (ST) after 10 × 1 Gy MF irradiation, and 770 genes showed differential expression in long-term (LT) 10 Gy SD cells (Fig. B). Similar results were obtained for other cell functions and pathways (Figs. B, ).
Since molecular targets are often regulated by protein modifications such as phosphorylation, we next examined the effects of 10 Gy SD and 10 × 1 Gy MF irradiation on the phospho-proteome (Fig. A). Interestingly, most of the changes in phosphorylation occurred during the first 24 h (ST) after irradiation with a peak at the 30 min time point (Fig. A, B, Supplementary Figure S4). Nevertheless, our analyses showed that the phosphorylation of AKT and MET was still enhanced at 2 months (LT) after 10 Gy SD indicating that irradiation can not only stably alter the expression but also the activity of molecular targets in a fractionation-dependent manner (Fig. C).
To evaluate whether the observed overexpression and increased activation of molecular targets affects the efficacy of molecular inhibitors, we examined the sensitivity of long-term (LT) PC3 cultures to a panel of clinically used targeting drugs (Table ). As shown in Fig. , PC3 cells at 2 months (LT) after 10 Gy SD irradiation showed significantly (P < 0.05) decreased survival after treatment with the AKT inhibitors Capivasertib and Ipatasertib compared to the unirradiated controls (Fig. ). Similar results were obtained when we targeted IGF-1R, MET, VEGFR2 or MEK signaling, while 10 Gy SD irradiation had no effect on the efficacy of Lapatinib (Fig. ). It is important to note that treatment with inhibitors induced only minimal apoptosis indicating that there might be another form of cell death as underlying mechanism for the differential survival rates (Supplementary Figure S3).
As epigenetic modifications can impact gene expression, we next examined the expression of histones and HDACs after 10 Gy SD and 10 × 1 Gy MF irradiation. While at 2 months (LT) after 10 Gy SD irradiation several histone clusters were upregulated (Fig. A), HDAC levels were decreased (Fig. B, C). In parallel, long-term (LT) 10 Gy SD cells were more resistant to HDAC inhibition with Panobinostat or Vorinostat than the unirradiated controls (0 Gy SD) (Fig. D).
Recurrent disease in prostate cancer patients after curatively intended treatment can be clinically challenging , . While after total prostatectomy conventionally fractionated radiotherapy has been shown to be effective and safe , re-irradiation after prior radiotherapy carries the risk of high toxicity rates . Recently, the use of stereotactic ablative irradiation with one or a few high doses has increased both for recurrent prostate cancer after prostatectomy or primary radiotherapy and for metastatic disease , . Still, there is a clinical need for additional therapeutic strategies to improve patient outcome after disease relapse. During the last two decades, targeted therapy has evolved as a promising approach for multiple cancer types, either as monotherapy or in combination with irradiation or chemotherapy, resulting in improved tumor response and patient survival . However, some tumors have an intrinsic resistance to targeted therapy or develop resistance during treatment . Since radiotherapy can modulate gene expression and protein phosphorylation, this effect can potentially be exploited to increase or restore the efficacy of targeted therapy , . Using the post-radiation adaptation of tumors to enhance the efficacy of radiation therapy is different from, and complementary to, using radiation and drugs simultaneously , , . We demonstrate here that irradiation leads to long-term expression changes of multiple molecular targets and their associated pathways in surviving prostate cancer cells and that these adaptive changes are impacted by the time interval and fractionation regimen. It is important to note that the SD and MF irradiation regimens differ in their biologically effective doses (BED), although they have the same total radiation dose . The BED is based on the linear-quadratic (LQ) model and serves as a parameter for the biological effect of irradiation by taking into account the dose per fraction, total dose and treatment time . It has been shown that the BED can affect gene expression and therefore may contribute to the differential results between SD and MF irradiation which we observed . Among others, IGF signaling was strongly activated at 2 months after 10 Gy SD but not after 10 × 1 Gy MF. Interestingly, high IGF-1R expression has been associated with high prostate cancer recurrence after primary radiotherapy, indicating a potential role for IGF-1R in the adaptive tumor response . Further, inhibition of IGF-1R sensitizes cancer cells to chemotherapy and irradiation, identifying it as a promising target for molecular therapy , . Besides the higher IGF-1R expression, long-term 10 Gy SD cultures were also more sensitive to treatment with the IGF-1R inhibitor BMS-754807, which is in line with observations from Litzenburger and colleagues showing a correlation between IGF-1R expression and BMS-754807 efficacy in triple-negative breast cancer cell lines . In contrast to targets such as IGF-1R and AKT3, SD irradiation reduced the expression of HDACs and in parallel increased the resistance to the HDAC inhibitors Panobinostat and Vorinostat. By modulating histone acetylation and methylation, targeting HDACs has been shown to radiosensitize cancer cells even when the inhibitors are applied up to 24 h after irradiation , , . 
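For reference, the BED under the linear-quadratic model (ignoring the treatment-time/repopulation correction) is BED = n·d·(1 + d/(α/β)); the short function below illustrates how strongly the two regimens used here differ, with an α/β of 3 Gy chosen purely for illustration.

def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy):
    """BED = n * d * (1 + d / (alpha/beta)); no time/repopulation term included."""
    return n_fractions * dose_per_fraction_gy * (1 + dose_per_fraction_gy / alpha_beta_gy)

alpha_beta = 3.0   # Gy; illustrative only -- reported values for prostate tumors vary
print(bed(1, 10.0, alpha_beta))    # 10 Gy single dose    -> ~43.3 Gy
print(bed(10, 1.0, alpha_beta))    # 10 x 1 Gy fractions  -> ~13.3 Gy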
Interestingly, fractionated irradiation with 10 × 2 Gy doses results in elevated HDAC activity in breast cancer cells at 21 days after the final dose, which correlated with enhanced cellular radioresistance, indicating that HDACs are affected by irradiation in a fractionation-dependent manner . Similar to gene expression, protein phosphorylation of the target also strongly affects the efficacy of targeted therapy and can be exploited to sensitize resistant cancer cells . While we saw more pronounced modulation of gene expression at later time points, the radiation-induced alterations in protein phosphorylation occurred mainly within the first 24 h after irradiation and were more often transient than permanent. Targets showing increased long-term phosphorylation and activation included the kinases AKT and MET. Elevated phosphorylation of AKT promotes survival and radiation resistance and is a prognostic marker of poor clinical outcome in prostate cancer patients – , , . Combined treatment with the AKT inhibitor Ipatasertib and abiraterone significantly increased progression-free survival of metastatic castration-resistant prostate cancer patients compared to abiraterone alone, especially in patients with PTEN-loss tumors and activated PI3K/AKT signaling . Similar results were found in a randomized phase II trial examining AKT inhibition with Capivasertib in combination with Paclitaxel for women with metastatic breast cancer, indicating that stimulating AKT activity before targeting it may increase the drug efficacy . Overall, our data show that radiotherapy, especially large single doses, can lead to stably elevated gene expression and activity of molecular targets and thereby sensitize cancer cells to the corresponding pharmacological inhibitors. Since the use of stereotactic ablative radiotherapy applying one or a few high doses has substantially increased, this may be a unique approach to improve therapy outcomes for recurrent and locally advanced prostate cancer patients.
Supplementary Information.
|
Management of Post-Stroke Cold Sensations: A Case Study on Sympathetic Nerve Ablation | be16f466-8a63-4729-9225-795d7a8a7684 | 11877399 | Surgical Procedures, Operative[mh] | Stroke frequently results in a complex array of sequelae, including limb stiffness, spasms, and pain . The pain can arise from various sources, such as central post-stroke pain, musculoskeletal pain, and neuropathic pain . Joint or soft tissue pain is often caused by abnormal muscle tone, spasticity, or prolonged immobility due to paralysis, significantly impacting the lower limbs . Additionally, spasticity in the muscles can lead to circulatory disorders in the affected limbs, contributing to cold-induced allodynia and pain . Furthermore, post-stroke pain is common, with cold allodynia reported in 8% to 30% patients after stroke . These symptoms are largely attributed to disruptions in peripheral circulation following a stroke. However, conventional treatments such as physical exercise, hyperbaric oxygen therapy, and pharmacological treatments have demonstrated limited efficacy in alleviating persistent limb coldness and associated symptoms caused by peripheral circulation disorders . Computed tomography (CT)-guided sympathetic radiofrequency ablation has been proven to effectively regulate peripheral circulation, offering therapeutic benefits for conditions such as palmar and plantar hyperhidrosis, cold and clammy extremities, and Raynaud’s phenomenon . Additionally, disturbances in peripheral circulation and cold sensations in the limbs have been linked to functional changes in specific brain regions, including the insular cortex, anterior cingulate cortex, thalamus, primary somatosensory cortex, and prefrontal cortex . This report presents the case of a 65-year-old woman with post-stroke peripheral circulation disorders, characterized by reduced limb temperature and cold-induced pain. Following CT-guided sympathetic radiofrequency ablation, she experienced significant improvements in peripheral perfusion and limb temperature, as well as enhanced functional activity in temperature-related brain regions. This case highlights the potential of this approach as an effective therapeutic strategy for managing post-stroke circulatory disorders.
A 65-year-old woman with a history of hypertension and over a year of cerebral infarction sequelae presented with abnormal cold sensation and increased muscle tension in her left upper and lower extremities. A decrease in ambient temperature triggered cold-associated pain in her limbs, leading to a visual analog scale pain score of 5. Consequently, she was admitted to the Pain Department of Jiaxing First Hospital. Physical examination revealed decreased muscle strength on the patient’s left side, with grade 3 strength in the left upper limb and grade 4 in the left lower limb. The patient demonstrated increased muscle tone in both the upper and lower limbs, with an Ashworth score of 2, along with restricted shoulder joint mobility. The temperature of the left arm was 4°C lower than that of the healthy extremities. The patient was taking amlodipine 5 mg, atorvastatin 20 mg, and clopidogrel bisulfate 75 mg daily after experiencing stroke. Magnetic resonance imaging (MRI) results indicated pontine malacia foci on the right side of the brain, interpreted as sequelae of a pontine infarction, along with bilateral periventricular ischemia . CT angiography showed signs of arteriosclerosis in the lower extremity artery and reduced peripheral circulation in the left lower extremity . Infrared thermography revealed that the temperature of the left leg was approximately 4°C lower than that of the right leg . No significant abnormalities were observed in the blood routine or coagulation function tests. However, the patient had low high-density lipoprotein cholesterol level of 0.87 mmol/L and an elevated lipoprotein(a) level of 799.1 mg/L. We used CT-guided radiofrequency ablation of the thoracic sympathetic nerve to treat post-stroke peripheral circulation disorders in the patient. All treatments were conducted with the patient’s informed consent and approval from the ethics committee. The patient was positioned prone on the CT table, and a scan was performed to identify the T3-4 vertebral space on the left side and the L3 vertebral body. We then designed an optimal puncture line targeting the gap between the left fourth rib head and the vertebral body . Additionally, we located the lumbar sympathetic nerve at the level of the third lumbar vertebra using a scan with a 3-mm layer thickness. The calculated injection depth and angle allowed for precise targeting of the injection point, ensuring avoidance of the lungs, kidneys, small intestine, as well as surrounding nerves and blood vessels . Following the designed trajectory, the needle was advanced through the T3-4 paravertebral space, crossing the costotransverse joint to reach the posterolateral edge of the T4 rib on the vertebral body . As the needle approached the anterior aspect of the L3 vertebral body, it was carefully guided into the space between the abdominal aorta and the vertebral body under CT guidance . After confirming the needle’s correct position via CT scan, we performed tests using a radiofrequency electrode to prevent unintended nerve damage. The resistance of the nerve tissue at the needle tip was maintained between 250 and 500 Ω. Electrical stimulation was applied at 0.8 mA, 0.3V, and 100 Hz, with no additional sensory responses observed. Further stimulation at 1 V and 2 Hz, and the absence of muscle twitches in the lower extremities and buttocks, confirmed that no motor responses were present, indicating that the needle tips were safely distanced from the spinal nerve. 
We then initiated standard radiofrequency therapy, setting the temperature to 75°C. Thermocoagulation was maintained for 300 s, concluding the procedure. Throughout the operation, we continuously monitored vital signs, pulse oxygen saturation, peripheral perfusion index (PI), and palm temperature using a monitor. Our monitoring indicated a significant increase in the PI of the patient's left palm and foot immediately after surgery. The temperature of the left hand rose from 28°C to 33°C, while the temperature of the left foot increased from 27°C to 32°C. At 24 h after surgery, the temperature of the left limb continued to rise, surpassing that of the healthy side . Additionally, the perfusion index increased from 0.7 to 3.4 in the palm and from 0.6 to 2.5 in the foot, further confirming the procedure's success. Further significant clinical improvement was observed during the follow-up visit 1 month after the sympathetic nerve radiofrequency ablation. The patient reported a subjective increase in warmth in the arms and legs, a reduction in cold-induced pain, and a visual analog scale score of 1. Postoperative CT angiography showed improved collateral circulation, compared with that of the preoperative state . Notably, an MRI performed 1 month after surgery revealed a significant reduction in pontine edema in the right side of the brain, indicating a positive change in the pontine malacia foci . In this case report, we used resting-state functional magnetic resonance imaging (rs-fMRI) to analyze changes in brain function after sympathetic nerve radiofrequency ablation surgery. The amplitude of low-frequency fluctuations (ALFF) method was applied to the blood oxygen level-dependent (BOLD) signal to reflect the spontaneous activity level of each voxel in the brain. ALFF is a reliable and repeatable measure of brain functional activity under various physiological states . The fMRI experimental procedure began with the acquisition of MRI sequences. T1-weighted sequences (repetition time=6.7 ms, echo time=2.9 ms, slice thickness=1 mm, number of slices=192, field of view=256×256 mm) were obtained and used as a structural reference for fMRI acquisition. The fMRI sequences were structured using a block paradigm, consisting of 36 volumes (number of slices=43, slice thickness=3.2 mm, repetition time=2000 ms, echo time=30 ms, field of view=220×220 mm, flip angle=90°, matrix size=64×64). We used SPM8 ( https://www.fil.ion.ucl.ac.uk/spm/ ) and DPABI ( http://rfmri.org/dpabi ) for pre-processing, which included slice timing correction, realignment, motion correction, co-registration of T1 images to a human brain atlas, and spatial normalization of functional data. Images showing head movement exceeding 2 mm or 2° were excluded from the analysis. The images were then smoothed with a Gaussian kernel of 6×6×6 mm full-width at half-maximum to enhance the signal-to-noise ratio. The ALFF was extracted from the BOLD signal images. We performed T-contrast analyses between pre-operation and post-operation phases, identifying regions with significant activation ( P <0.005) and a cluster size exceeding 10 voxels. In this patient, we observed heightened and reduced bilateral brain functional activation with statistical significance following sympathetic radiofrequency surgery, as illustrated in . Specifically, we identified 5 regions of interest exhibiting increased neural activity 1 month after surgery. 
Additionally, our analysis revealed decreased neural activation in 3 regions of interest: the anterior commissure, left parahippocampal region (LPH), and right parahippocampal region, as summarized in and . These brain regions can be associated with abnormal cold-induced pain sensation in patients with stroke. The observed brain function changes can help elucidate the underlying mechanisms of sympathetic radiofrequency surgery.
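The ALFF analysis described above was carried out with SPM8 and DPABI; purely as an illustration of the underlying computation, the Python sketch below derives an ALFF-like value for a single voxel time series as the mean FFT amplitude within a low-frequency band. The 0.01–0.08 Hz band is an assumption (the report does not state the band used), while the 2-s repetition time and 36 volumes are taken from the acquisition parameters above; this is a didactic sketch, not the toolboxes' implementation.

```python
import numpy as np

def alff(voxel_ts, tr=2.0, band=(0.01, 0.08)):
    """ALFF-like value: mean FFT amplitude of a demeaned BOLD time series
    within a low-frequency band (band edges in Hz)."""
    ts = np.asarray(voxel_ts, dtype=float)
    ts = ts - ts.mean()                      # remove the DC component
    freqs = np.fft.rfftfreq(ts.size, d=tr)   # frequency axis in Hz
    amp = np.abs(np.fft.rfft(ts)) / ts.size  # amplitude spectrum
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(amp[in_band].mean()) if in_band.any() else float("nan")

# Example with a random 36-volume series (TR = 2 s, as in the acquisition above)
rng = np.random.default_rng(0)
print(round(alff(rng.normal(size=36)), 4))
```

In a whole-brain analysis this value would be computed voxel by voxel and the resulting maps compared between the pre- and post-operative scans, as done with the T-contrasts above.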
The key takeaway from this case report is the presentation of post-stroke peripheral circulation disorders, characterized by increased spasticity, cold-induced pain, and a significant drop in temperature in the paralyzed limbs. This case underscores the importance of recognizing and addressing such complications and highlights the effectiveness of CT-guided sympathetic radiofrequency ablation. This intervention significantly improved the peripheral perfusion index, increased palm and foot temperature, and alleviated pain. These findings emphasize the need for proactive management of post-stroke sequelae, as this approach can significantly enhance patient outcomes. Abnormal cold sensations in spastic limbs following a stroke are often overlooked, as patients and clinicians tend to prioritize the management of spasticity . The diagnosis of post-stroke limb coldness primarily relies on subjective reports, with some patients also reporting cold-induced pain. However, these symptoms are far from rare, with studies indicating that approximately 53% of post-stroke patients experience an unpleasant cold sensation in the hemiplegic arm, significantly affecting their quality of life and causing considerable discomfort . In the presented case, the cold sensation in the hemiplegic limb progressively intensified over time, with a marked worsening observed 6 months after stroke. Standard warming interventions, such as hot compresses, proved inadequate in relieving symptoms, and repeated attempts at medical treatment failed to provide effective solutions. The condition worsened further in winter, with exposure to cold air triggering cold-induced pain in the affected hand. This case underscores the critical need for heightened awareness and targeted therapeutic strategies to address these often-overlooked symptoms, thereby improving the quality of life and reducing the burden of post-stroke sequelae. The causes of post-stroke limb coldness are multifactorial and involve a complex interplay of vascular, neurological, and muscle mechanisms. Reduced blood flow to the extremities, often influenced by vasomotor dysfunction, is a primary contributor to the cold sensation experienced by patients after stroke. This is further exacerbated by abnormal coagulation function and widespread vascular endothelial damage, which are common in patients after stroke and contribute to peripheral circulation disorders . Additionally, irreversible neurological damage in the brain diminishes its inhibitory control over the sympathetic nervous system, leading to abnormal sympathetic excitation . This heightened sympathetic activity increases vasoconstriction, reduces blood flow, and lowers skin temperature on the hemiplegic side . Other contributing factors include sensory or perceptual disturbances caused by the stroke, reflex sympathetic dystrophy, and muscle disuse atrophy, all of which further intensify the cold sensations . Understanding these mechanisms highlights the need for targeted interventions that address vascular and neurological dysfunctions, to improve outcomes in patients with post-stroke limb coldness. Theoretically, sympathetic modulation could serve as a viable treatment option for post-stroke peripheral circulation disorders. There have been reports of using axillary sympathetic nerve blocks with ropivacaine to promote upper limb recovery in patients with frostbite . 
Radiofrequency ablation of the sympathetic nerve is widely used in clinical practice for conditions such as hyperhidrosis, limb coldness, Raynaud syndrome, and lower extremity atherosclerosis obliterans . Radiofrequency ablation has fewer severe complications than does chemical denervation with anhydrous ethanol and offers significant benefits, including improved peripheral temperature regulation and relief of symptoms, such as dampness and coldness . However, its potential application in managing post-stroke peripheral circulation disorders remains unexplored. In this report, we used peripheral perfusion index mapping of the patient's left palm and feet during surgery to evaluate the therapeutic effect in real time . The perfusion index is a measure that assesses the adequacy of blood flow to the extremities by reflecting the ratio of pulsatile to non-pulsatile (static) blood flow at peripheral sites. A higher perfusion index indicates better peripheral perfusion and blood flow. Following radiofrequency ablation of the sympathetic nerve, the perfusion index increased from 0.7 to 3.4 in the palm and from 0.6 to 2.5 in the feet, confirming the success of the procedure. Additionally, the temperature of the left hand and foot gradually increased during surgery, and the symptoms were alleviated postoperatively. Subsequent infrared thermography and CT angiography confirmed that radiofrequency ablation of the sympathetic nerve improved collateral circulation and elevated limb temperatures. These findings demonstrate the potential of radiofrequency ablation as a treatment for post-stroke peripheral circulation disorders. In this case report, we investigated the spontaneous neural activity in the brain using fMRI in a patient with post-stroke peripheral circulation disorders. Our findings demonstrated a significant increase in neural activity after radiofrequency ablation of the sympathetic nerve, particularly in the left ventral anterior cingulate cortex, left angular gyrus, left cuneus, and right visual association cortex, compared with preoperative levels. Notably, the ventral anterior cingulate cortex plays a critical role in thermal allodynia, hyperalgesia, and anxious behaviors in neuropathic pain models . Improvements in visual function were also observed 1 month after surgery, as evidenced by increased intrinsic neural activity in the left angular gyrus and right visual association cortex. Moreover, distinct changes were detected in memory-related regions: the left cuneus, associated with working memory, exhibited heightened activity, while the bilateral parahippocampal regions showed reduced signals, indicating a shift towards spatial memory processing . These alterations in brain activity suggest individualized responses, underscoring the importance of further research with larger sample sizes to identify specific brain regions that correlate with therapeutic efficacy in patients after stroke.
CT-guided radiofrequency ablation of the thoracic and lumbar sympathetic nerves effectively alleviates post-stroke peripheral circulation disorders, increases limb temperature, and improves collateral circulation. It also induces significant neural activity changes in brain regions associated with cold allodynia, highlighting its potential as an innovative therapeutic approach.
Effect of aridity on the β-diversity of alpine soil potential diazotrophs: insights into community assembly and co-occurrence patterns | ba2ad182-2a2b-43d1-85d7-51208257909f | 10804954 | Microbiology[mh] | Microbial diversity is essential for maintaining ecosystem functions (e.g., climate mediation, aboveground productivity, and soil fertility) ( ). However, such benefits have been threatened by unprecedented losses in microbial diversity, owing to globally increasing aridity ( ). Soil diazotrophic communities, which are the most significant source of N in natural ecosystems and play a significant role in the fixation of atmospheric N 2 ( ), are significantly impacted by the aridity level ( ). Aridity directly alters plant and soil attributes (plant biomass, soil moisture level, organic substance input, and redox potential), which in turn affects the ecological processes (e.g., species pools, assembly processes, and co-occurrence patterns) of diazotrophic communities and ultimately indirectly influences biogeographic patterns (β-diversity) ( ). Clarifying how aridity affects the diversity and structure of potential diazotrophs is crucial for a full comprehension of the causes and impacts of environmental changes on the functioning of terrestrial ecosystems ( ). Regional species pools (γ-diversity) have been widely recognized as an underlying determining factor of β-diversity ( ). A larger species pool offers a greater number of candidate species available for colonization, which can lead to the establishment of more diverse biotic interactions within the community, leading to higher β-diversity ( , ). Some studies investigating microbial biogeographic patterns highlighted the roles of assembly processes and argued that microbial community assembly is affected by both deterministic processes (environmental selection) and stochastic processes (dispersal and drift) ( ). For example, one of the most influential deterministic processes in the assembly of soil bacterial and fungal communities is the environmental filter, which involves a variety of environmental variables like the emergence or survival of species in a specific area ( ). Stochastic processes, which include ecological drift (random birth and random extinction), dispersal limitation (limitations or obstacles encountered by species during spatial diffusion or migration), and homogenizing dispersal (species diffusion or migration leads to community homogenization) ( ), also have a significant impact on how microbial communities are distributed and produce species composition patterns that are similar to randomly generated patterns ( ). The co-occurrence patterns uncover how organisms co-occurr in a special environment and influence the colonization of microorganisms and their response to the environment through competition and collaboration, thereby affecting β-diversity ( , , ). Microbial co-occurrence patterns have recently been reported to be potential drivers of microbial β-diversity patterns ( , ), especially in stressful environments, where the limited niche space would lead to low species richness and fierce competition between microbial taxa, particularly when their requirement of resources is the same ( ). However, the relative importance of microbial co-occurrence patterns in the diversity and composition of microbial communities remains poorly understood compared with species pools and community assembly, particularly in fragile ecosystems where biodiversity is vulnerable to environmental changes. 
Alpine ecosystems, which occur above the tree line in montane environments, cover ~20% of the earth’s terrestrial surface area and play a vital role in keeping global climate balance and in biodiversity conservation ( ). These ecosystems are vulnerable and sensitive to climate change owing to extreme environmental stresses (e.g., low temperatures, high winds, and low oxygen) and are one of the most understudied terrestrial ecosystems because of their remoteness and inaccessibility ( ). The soil microbial diversity in alpine ecosystems varies significantly owing to the heterogeneity of climate and biogeography, and this variation can be strengthened under climate changes ( ). An alpine ecosystem is thus an ideal region to investigate microbial biogeographic patterns along the climate gradient. Because N limitation varied with water limitation in alpine ecosystems ( ), clarifying the potential diazotrophic biogeographic patterns along the aridity gradient is essential for the prediction of climate-induced environmental impacts ( ). Here, we conducted a transect soil survey from 60 sites spanning an aridity gradient (−0.2 to 1.0) across the Tibetan Plateau, the largest terrestrial alpine ecosystem of the planet, to investigate the β-diversity patterns of soil potential diazotrophs and to reveal their potential drivers. We hypothesized that (i) assembly processes, especially deterministic processes, are more important in shaping potential diazotrophic β-diversity in more arid environments because water deficiency restricts primary productivity and thus restricts soil resources in arid habitats ( ) and that stochastic processes, such as microbial dispersion, are more important in less arid environments; (ii) species pools and co-occurrence patterns also shape the potential diazotrophic β-diversity; and (iii) increasing aridity changes the relative importance of assembly processes, species pools, and co-occurrence patterns to β-diversity.
The Results are organized into four parts: potential diazotroph community diversity and structure; local species pool and community assembly; co-occurrence network patterns; and underlying drivers of potential diazotrophic β-diversity. Underlying drivers of potential diazotrophic β-diversity: The results of the random forest analysis indicated that community assembly (e.g., βNRI1 and RC1) and network topological features (e.g., NE, NV, AD, MOD, APL, CC, and DEN) played greater roles than the other predictors in determining diazotrophic β-diversity and together accounted for 48.2% of the variance in diazotrophic β-diversity ( ). Furthermore, the relative importance of assembly processes decreased with increasing aridity, whereas the importance of network topological features (MOD, DEN, and NV in humid habitats; NV, DEN, and MOD in semi-arid habitats; DEN, NV, and AD in arid habitats) increased ( ). The established partial least squares path model (PLS-PM) explained 86.3%, 71.1%, and 40.5% of the variation in β-diversity in humid, semi-arid, and arid habitats, respectively ( ). Both diazotrophic community assembly (path coefficients = −0.609, −0.511, and −0.213 in humid, semi-arid, and arid habitats, respectively; P < 0.01; ) and co-occurrence patterns (path coefficients = −0.049, −0.114, and −0.490 in humid, semi-arid, and arid habitats, respectively; P < 0.01; ) significantly influenced β-diversity. Soil properties (path coefficients = 0.157, 0.453, and 0.239 in humid, semi-arid, and arid habitats, respectively; P < 0.01; ) also directly and significantly affected β-diversity, whereas vegetation characteristics and soil properties indirectly and significantly influenced β-diversity by affecting community assembly and co-occurrence networks ( P < 0.01; ). In all three habitats, aridity had no direct, significant impact on β-diversity ( ). These findings indicate that aridity indirectly shapes β-diversity by influencing soil properties (total nitrogen, soil organic carbon, soil moisture, and pH), vegetation characteristics (coverage, plant richness, aboveground biomass, and belowground biomass), stochastic processes (βNRI1 and RC1), and species co-occurrence patterns (NV, APL, DEN, and MOD). Notably, the path coefficients of the assembly processes decreased in magnitude (from −0.609 to −0.213) with increasing aridity, whereas those of the co-occurrence network increased in magnitude (from −0.049 to −0.490; ). These results suggest that the relative importance of the ecological processes shaping potential diazotrophic β-diversity is driven by the aridity level, indicating that aridity alters soil properties, community assembly processes, and network stability, consistent with hypothesis (iii).
A total of 6,010,201 diazotroph sequences were assigned to 15,195 operational taxonomic units (OTUs) and then grouped into 28 bins based on phylogenetic relationships (Fig. S2). Alphaproteobacteria accounted for most (mean = 59.0%) of the nif H sequences, followed by Deltaproteobacteria (5.4%), Opitutae (2.8%), and Betaproteobacteria (2.4%; Fig. S2). Community similarity was negatively associated with geographic distance and exhibited significant distance-decay relationships ( ). However, the slopes of the distance-decay curves varied among habitat types ( ), with steeper slopes observed in humid habitats (−0.63) than in semi-arid or arid habitats (−0.26 and −0.14, respectively; P < 0.001; ). Both diazotrophic richness (i.e., Chao1 estimator) and abundance (i.e., quantitative PCR [qPCR]-based copy number) were decreased with site aridity ( ), and the results of the nonmetric multidimensional scaling analysis indicated that diazotrophic community structure shifted along the aridity gradient (permutational multivariate analysis of variance tests, P < 0.01; ; ). In addition, diazotrophic β-diversity was significantly ( P < 0.05) higher at the humid sites than at either the semi-arid or arid sites ( ). The ordinary least squares linear regression model (OLSLRM) results indicated that aridity was significantly and negatively correlated with both α-diversity and β-diversity, not γ-diversity, of the diazotrophic communities (Fig. S3).
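As an illustrative sketch of the distance-decay calculation underlying the slopes reported above (the study's own analysis used ordinary least squares regression in R), the Python code below computes pairwise Bray-Curtis similarity (1 minus dissimilarity) and fits a slope against pairwise geographic distance for toy data. Whether distances were transformed in the original analysis is not stated, so untransformed distances are assumed here.

```python
import numpy as np
from itertools import combinations

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two OTU abundance vectors."""
    return np.abs(x - y).sum() / (x + y).sum()

def distance_decay_slope(otu_table, coords_km):
    """OLS slope of pairwise community similarity (1 - Bray-Curtis) against
    pairwise geographic distance; more negative = faster spatial turnover."""
    sims, dists = [], []
    for i, j in combinations(range(len(otu_table)), 2):
        sims.append(1.0 - bray_curtis(otu_table[i], otu_table[j]))
        dists.append(np.linalg.norm(coords_km[i] - coords_km[j]))
    slope, intercept = np.polyfit(dists, sims, 1)
    return slope, intercept

# Toy data: 6 sites, 12 OTUs, projected site coordinates in km
rng = np.random.default_rng(1)
otus = rng.poisson(5, size=(6, 12)).astype(float)
coords = rng.uniform(0, 500, size=(6, 2))
print(distance_decay_slope(otus, coords))
```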
The simulation model revealed that the expected β-diversity increased with increasing γ-diversity, regardless of the number of individuals ( ). However, contrary to our hypothesis (ii), which suggested that β-diversity is affected by the species pool, the relationship between the observed γ-diversity and the observed β-diversity did not follow the expected pattern ( ), which indicated that the species pool has little influence on the potential diazotrophic β-diversity in alpine ecosystems. The results of the phylogenetic bin-based null model analysis (iCAMP) suggested that stochastic processes, mainly dispersal limitation and ecological drift, made the largest donation to diazotrophic community assembly, together accounting for 87.7% of the total variance in diazotrophic community composition, whereas heterogeneous and homogenous selection accounted for only 2.9% and 9.1%, respectively ( ). Furthermore, the role of dispersal limitation decreased with increasing aridity, whereas that of ecological drift increased ( ). The neutral community model indicated that the species migration rate in arid habitats was relatively low ( ) but had a greater environmental niche breadth ( ). A total of 15,195 OTUs were classified into 28 bins based on phylogenetic relationships. Ten OTU bins that were most strongly affected by assembly processes were selected, including five bins regulated by dispersal limitation (group 1) and five regulated by ecological drift (group 2, ). The two groups contributed 61% and 74% to the drift and dispersal limitation ( ). The most abundant taxa contained Actinobacteria, Alphaproteobacteria, Betaproteobacteria, Deltaproteobacteria, and Opitutae ( ).
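The expected β-diversity under a given species pool was obtained with a published assembly simulation (see the statistical analysis subsection); the sketch below illustrates the general idea with a generic null model in which local communities are assembled by drawing individuals at random from a pool of equally abundant species. It is a simplified stand-in, not the exact published algorithm, and the pool sizes and community sizes used are arbitrary.

```python
import numpy as np

def expected_beta(gamma, n_sites=10, n_individuals=200, n_rep=50, seed=0):
    """Expected beta-diversity (mean pairwise Bray-Curtis dissimilarity) when
    local communities are drawn at random from a pool of `gamma` equally
    abundant species; a generic null model for illustration only."""
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(n_rep):
        table = np.stack([
            np.bincount(rng.integers(0, gamma, n_individuals), minlength=gamma)
            for _ in range(n_sites)
        ])
        pairs = []
        for i in range(n_sites):
            for j in range(i + 1, n_sites):
                x, y = table[i], table[j]
                pairs.append(np.abs(x - y).sum() / (x + y).sum())
        reps.append(np.mean(pairs))
    return float(np.mean(reps))

# Expected beta-diversity rises with the size of the species pool (gamma)
for gamma in (50, 200, 800):
    print(gamma, round(expected_beta(gamma), 3))
```

Comparing such an expectation with the observed β–γ relationship is what allows the conclusion above that the species pool contributed little to the observed pattern.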
The co-occurrence networks across the three habitats followed a power-law distribution pattern (Fig. S4). The humid, semi-arid, and arid habitat networks captured 4,753 edges among 662 vertices, 1,583 edges among 208 vertices, and 196 edge counts among 93 vertices ( ). Among the subnetworks, the number of edges (NE), number of vertices (NV), and average degree (AD) significantly decreased with increasing aridity ( P < 0.05; Fig. S5a through c), and the density (DEN), clustering coefficient (CC), average path length (APL), and modularity (MOD) in semi-arid habitats were significantly lower than those in humid and arid habitats ( P < 0.05; Fig. S5d through g). In addition, keystone taxa (potential diazotroph module and connector hubs) in the vertices were affiliated with Alphaproteobacteria (47 OTUs), Deltaproteobacteria (48 OTUs), Betaproteobacteria (9 OTUs), and Opitutae (6 OTUs) in humid habitats; Alphaproteobacteria (24 OTUs), Deltaproteobacteria (6 OTUs), Betaproteobacteria (3 OTUs), and Opitutae (2 OTUs) in semi-arid habitats; and Alphaproteobacteria (25 OTUs), Deltaproteobacteria (3 OTUs), Betaproteobacteria (2 OTUs), Bacilli (2 OTUs), and Opitutae (2 OTUs) in arid habitats (Fig. S6a through c). The arid co-occurrence networks harbored the lowest positive/negative edge ratios among the three habitats ( ). Since the positive and negative edges of co-occurrence networks may be indicative of species cooperation and competition, the lower positive/negative edge ratio probably indicated stronger species competition in arid habitats. The OLSLRM between network topological features and β-diversity showed that the subnetwork topological features of potential diazotrophs were significantly related to their β-diversity ( P < 0.05; Fig. S7). Topological features, such as NV, NE, APL, and MOD, were significantly and positively related to β-diversity ( P < 0.05; Fig. S7). These findings demonstrated that a longer path length and loose connectivity in the network would lead to a heterogeneous diazotrophic community, which is consistent with our hypothesis (ii) that indicated that co-occurrence patterns contribute to diversity. Co-occurrence networks were further divided into smaller coherent modules, and their eigengenes were strongly correlated with pH ( ). Diazotrophic β-diversity was negatively correlated with module 1 in the humid habitats and module 2 in the semi-arid habitats and was positively correlated with module 4 in the arid habitats ( ). The genera Geobacter (Deltaproteobacteria) and Paenibacillus (Bacilli) were identified as keystone taxa ( ), which indicated a significant association between the connected module members and diazotrophic β-diversity ( ).
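As an illustration of how the topological features reported above (NV, NE, AD, DEN, CC, APL, MOD, and the positive/negative edge ratio) can be computed, the sketch below builds a toy co-occurrence network from Spearman correlations with networkx. The original networks were constructed with SparCC/Spearman thresholds and analyzed with other tools, so the data, the greedy modularity algorithm, and the unweighted treatment of edges here are all assumptions for illustration.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr
from networkx.algorithms.community import greedy_modularity_communities, modularity

def cooccurrence_network(otu_table, rho_min=0.3, p_max=0.05):
    """Build a co-occurrence network from an OTU table (samples x OTUs) using
    pairwise Spearman correlations; keep edges with |rho| > rho_min and p < p_max."""
    rho, p = spearmanr(otu_table)          # OTU-by-OTU correlation and p-value matrices
    n_otus = otu_table.shape[1]
    g = nx.Graph()
    g.add_nodes_from(range(n_otus))
    for i in range(n_otus):
        for j in range(i + 1, n_otus):
            if abs(rho[i, j]) > rho_min and p[i, j] < p_max:
                g.add_edge(i, j, weight=rho[i, j])
    return g

def topology(g):
    """Subset of the topological features discussed above."""
    pos = sum(1 for _, _, w in g.edges(data="weight") if w > 0)
    neg = g.number_of_edges() - pos
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    communities = greedy_modularity_communities(g)
    return {
        "NV": g.number_of_nodes(),
        "NE": g.number_of_edges(),
        "AD": 2 * g.number_of_edges() / g.number_of_nodes(),
        "DEN": nx.density(g),
        "CC": nx.average_clustering(g),
        "APL": nx.average_shortest_path_length(giant) if giant.number_of_nodes() > 1 else float("nan"),
        "MOD": modularity(g, communities, weight=None),
        "pos/neg edges": pos / neg if neg else float("inf"),
    }

# Toy data: 30 samples x 24 OTUs with built-in blocks of correlated OTUs
rng = np.random.default_rng(2)
blocks = rng.poisson(5, size=(30, 6))
otu_table = np.repeat(blocks, 4, axis=1) + rng.poisson(1, size=(30, 24))
print(topology(cooccurrence_network(otu_table)))
```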
To the best of our knowledge, this is the first study to report the spatial distribution patterns of potential diazotrophic communities in alpine ecosystems along a large environmental gradient. The findings of the study indicate that globally increasing aridity significantly affects the biogeographic patterns of alpine soil potential diazotrophic communities. Distinct potential diazotrophic community compositions and distance-decay relationships were observed at different aridity levels, and increasing aridity appeared to both reduce diazotrophic richness through negative effects on resource inputs (i.e., available C) and alter the β-diversity of diazotrophic communities by regulating the assembly processes and co-occurrence patterns. The discussion below addresses, in turn, how aridity alters geographical patterns and assembly processes, how aridity alters co-occurrence patterns, how the relative importance of stochastic processes and co-occurrence patterns changes with aridity to shape β-diversity, and the conclusions. Aridity alters geographical patterns and assembly processes: The distance-decay relationship slopes of the potential diazotrophic communities were markedly steeper (−0.63 to −0.14, ) than the slope reported for potential diazotrophic communities in southern China (−0.067) ( ). This is likely due to the environmental conditions of the Qinghai-Tibet Plateau's soil, which is typically frozen for more than 180 days per year, with permafrost and seasonally frozen soil jointly accounting for 80% of the total area ( ). Indeed, long-term and extensive freezing can limit species migration in soil and can weaken species turnover ( ). The potential diazotrophic communities investigated in the present study exhibited steeper distance-decay relationship slopes in humid habitats (model fit: R 2 < 0.1) than in semi-arid and arid habitats ( ), which suggests that the spatial structure and turnover of soil potential diazotrophic communities decreased with increasing aridity. These results support previous reports and may be explained by the physical barrier that aridity (i.e., water stress) imposes on the dispersal capacity of potential diazotrophs in high-aridity habitats ( , ). Previous studies have reported that soil biota use dormancy as a metabolic strategy to deal with environmental stresses ( ), and many microbes (e.g., Firmicutes, Actinobacteria, and Actinobacteriota) can transition from active to dormant states under water-deficient conditions in order to sustain their growth and, thereby, limit species turnover ( ). Stochastic processes imply an equal chance of dispersal for each species and are mainly dominated by homogenizing dispersal, dispersal limitation, and ecological drift ( ). Contrary to our first hypothesis, which posited that deterministic processes would be more important in shaping diazotrophic β-diversity, the application of the iCAMP model revealed that stochastic processes (e.g., dispersal limitation and ecological drift), rather than deterministic processes (e.g., heterogeneous and homogeneous selection), play a dominant role in shaping the potential diazotrophic community assembly of alpine soils ( ). Diazotrophs, which have smaller bodies, are usually considered to be less influenced by dispersal limitation than larger organisms and to have stronger dispersal ability, which is inconsistent with our results ( ). The reason could be that the soil freezing of the Tibetan Plateau hindered the migration of microorganisms and increased the physical barrier by reducing the fluidity of water and the activity of microorganisms ( ).
This weak environmental effect (deterministic processes) may be caused by the dormancy strategies of belowground microorganisms in alpine ecosystems, which could enhance microbial resistance and alleviate selection pressures ( , , ). However, it is important to note that our study also revealed a higher environmental effect in more arid habitats. This finding suggests that aridity can actually enhance the selection effect on the potential diazotrophic community, despite the presence of dormancy strategies that enhance the adaptability of potential diazotrophs. Although dispersal limitation and ecological drift contribute significantly to diazotrophic β-diversity, the contribution of the two processes shifts along the aridity gradient, with the importance of dispersal limitation decreasing with increasing aridity and the importance of ecological drift increasing ( ). This finding may be explained as follows. Microbial dispersal is typically a passive process that involves transportation by air, flow, and hitchhiking ( ). However, this passive dispersal process was restricted in the more arid regions of the Tibetan Plateau, not only because of soil freezing, which presents a physical obstacle, but also because the lower water availability hinders microbial movement in the soil, especially for those inactive microorganisms ( , ). Furthermore, dormancy may increase the resistance of diazotrophs to water stress, resulting in weak long-distance dispersal ( ). For instance, Betaproteobacteria from the genus Polaromonas were widely dispersed in alpine habitats, and their dormancy might prevent them from spreading from soil to air ( ). Previous studies have demonstrated that ecological drift is more important in shaping microbial community structure under weak environmental selection, low microbial richness, and low member abundance, such as in host-associated environments ( , ). In arid habitats, diazotrophs with low richness and abundance are more vulnerable and sensitive to drift because a slight reduction in their abundance can result in their extinction ( ). A larger number of vegetation patches observed in semi-arid and arid habitats (data were not shown) also reinforce the roles of ecological drift in the establishment of the diazotrophic community ( ). That is because the lower number of individuals in plant patches should increase the risk of local species extinction and enhance the impact of ecological drift on communities ( , ). When a species’ population size decreases, it becomes more vulnerable to environmental fluctuations and chance events, as there are fewer individuals available to contribute to the gene pool or ecological interactions ( ). Functional redundancy, which refers to the similar or identical functions of different species, appears to be quite high in the microbial communities of stressed environments because redundant species are needed to maintain ecological functionality in the face of species extinction ( ). Consequently, in arid areas with strong functional redundancy, microorganisms become more susceptible to ecological drift. This susceptibility arises from the reduced selective pressures and increased stochasticity associated with functional redundancy ( , ), allowing random fluctuations and neutral processes to have a greater influence on community dynamics ( ). However, it should be noted that it is difficult to determine the extinction of microbial taxa in the community, which makes it hard to directly detect drift in the community. 
The dormancy of microorganisms can prevent them from extinction and weaken the influence of drift, which can also increase the difficulty of directly detecting drift ( ). Both the iCAMP approach and the neutral model detected a significant proportion of deterministic assembly factors (e.g., homogeneous selection) ( ; ), which implies that deterministic processes also play an important role. Thus, even though stochastic processes dominate the potential diazotrophic β-diversity in alpine soils, deterministic processes should not be neglected.
Aridity affects co-occurrence patterns ( ). For example, the potential diazotrophic co-occurrence networks from semi-arid or arid habitats included more negative correlations than those from humid habitats ( ), which could indicate greater interspecific competition in high-aridity habitats. Such interspecific competition could be related to the limited C resources of arid habitats, which are characterized by sparse vegetation and low primary productivity due to water deficiency ( ). This finding is supported by a previous report of greater competition in bulk soil, when compared to rhizospheric soil, due to reduced C substrate availability ( ). Since N fixation is an energy-expensive process, the lower C availability of arid soils may have a more pronounced effect on potential diazotrophic competition than did the humid soil, with the strength of competition increasing with aridity ( , ). The distribution of microbial taxa depends on both their survival ability and their persistence ability after establishment in a new environment. Interspecific interactions also affect which species occur and how co-occurring species are organized as a community ( ). However, our results indicate significant correlations between co-occurrence network topology (i.e., NV, NE, APL, and MOD) and potential diazotrophic β-diversity (Fig. S7). More specifically, greater β-diversity in higher-aridity habitats was associated with greater vertex count and path lengths and reduced connectivity. These findings indicate that co-occurrence patterns are strongly correlated with potential diazotrophic biogeography. Some studies suggest that co-occurrence patterns are a deterministic process that regulates the intensity of intra-species and inter-species competition, generates niche partitioning, and limits community similarity, resulting in high β-diversity ( , ). However, our results suggest that the contribution of co-occurrence patterns to β-diversity in arid habitats is greater than that of deterministic processes, even community assembly ( ), indicating that it is inappropriate to include the role of co-occurrence patterns in deterministic processes. Additionally, there are studies indicating that co-occurrence patterns can impact later species colonization through environmental changes or preferential effects ( , , ). Other studies indicate that co-occurrence species participate in various interactions, such as quorum sensing, interference (toxin secretion), and developmental competition ( , ). The adaptability and population dynamics of individual species are influenced by the abundance changes in the species that directly interact with them ( ). This impact, amplified through interaction networks, may serve as a driving factor in shaping community diversity ( ). However, our understanding of how co-occurrence patterns, such as commensalism, mutualism, and parasitism, shape microbial communities remains limited ( ). This is primarily due to the inherent challenges associated with observing and documenting such patterns in microbial communities ( ).
According to the contemporary coexistence theory derived from macroecology, the community β-diversity pattern is shaped by a combination of factors, including the species pool, community assembly, and co-occurrence patterns ( , , ). However, in the present study, the species pool contributed little to the biogeographic patterns of the alpine soil potential diazotrophic communities ( ), which is inconsistent with our hypothesis (ii). These findings correspond with the results of Qian et al. ( ) and Xu et al. ( ), who reported that community assembly processes drive β-diversity at a local scale, rather than the species pool, but contrast with the results of Wang et al. ( ), who reported that the species pool explains more variations in potential diazotrophic β-diversity over vast biogeographic regions. Thus, the relative contribution of species pools to β-diversity may depend on the organism type or may be influenced by the interplay of multiple ecological mechanisms, such as environmental selection, dispersal limitation, and biological co-occurrence. At large scales, community assembly processes have a greater impact on β-diversity due to changes in spatial and environmental factors, and the importance of co-occurrence patterns increases with decreasing scale ( ). The increased effects of assembly processes and co-occurrence patterns could weaken the role of species pools. Even though both stochastic processes and co-occurrence patterns played significant roles in shaping potential diazotrophic communities in the present study, the relative importance of the stochastic processes and co-occurrence patterns changed with increasing aridity, with the contribution of assembly processes decreasing with increasing aridity and the contribution of co-occurrence patterns increasing ( ). This identified our hypothesis (iii). The resource utilization assumption supports that resource competition largely drives community diversification, especially when microbial taxa have similar resource requirements or niches in a resource-poor environment ( ). The poor supply of nutrients (e.g., C and N) in arid environments enhances interspecific resource competition, which is, in turn, commonly expected to promote diversity, as evidenced by the more negative correlation in arid microbial networks ( ). Co-occurring and competing taxa may stimulate community performance to improve resource use efficiency ( ). Soils harboring interspecific competition may contribute to the biogeographic pattern in natural ecosystems ( , ). Network topology indicated that Paenibacillus as keystone taxa were observed exclusively in arid diazotrophic networks (Fig. S6). When the networks were divided into modules, the diazotrophic keystone taxa showed fierce competition with the contacted members in the corresponding modules ( ). The presumed keystone taxon ( Paenibacillus ) exhibits competitive traits and advantages for utilizing limited resources more efficiently. It is noteworthy that cooperation or competition does not always correspond to “well” or “poor” ecosystem functioning because higher functionality was observed in a dry ecosystem where fierce microbial competition exists ( ).
Our study identified aridity-driven mechanisms that underlie spatial patterns of potential diazotrophic β-diversity in alpine ecosystems. Increasing aridity is associated with reduced β-diversity and low species turnover. Aridity affects the co-occurrence patterns and community assembly of potential diazotrophs by regulating soil and vegetation characteristics. The observed aridity-induced β-diversity patterns were largely affected by the community assembly processes (e.g., dispersal limitation and ecological drift) and co-occurrence patterns, rather than by the species pool, and the relative importance of the stochastic processes (i.e., assembly processes) and co-occurrence patterns decreased and increased, respectively, with increasing aridity. These results provide novel insights into the biogeographical patterns of functional microbes and provide a basis for predicting the response of alpine soil biodiversity and functions to climatic change. However, it is necessary to determine whether this formation mechanism of β-diversity exists at a larger research scale and in other ecosystems and to further explore the effect of this mechanism on soil functions.
The Methods comprise the following subsections: study sites; soil and vegetation sampling; soil analysis; microbial DNA extraction and real-time qPCR; amplicon sequencing and phylogenetic classification; ecological processes; co-occurrence network construction; and statistical analysis. Statistical analysis: The Chao1 index and total OTU richness were used to represent potential diazotrophic α-diversity and γ-diversity (species pool), respectively. The first axis of the PCA, which was based on the Bray-Curtis dissimilarity of potential diazotrophic OTUs, was defined as the potential diazotrophic β-diversity. Distance-decay relationships, which represent variation in community structure along a gradient in space or environment, were used to quantify changes in potential diazotrophic β-diversity over geographic distance ( ). Distance-decay relationship slopes, which represent spatial species turnover rates (steeper slopes indicate higher turnover rates), were estimated by the OLSLRM using the vegan package ( ), and the significance of differences in distance-decay relationships between habitats was evaluated using the F test. Nonmetric multidimensional scaling analysis was performed to evaluate the differences between the compositions of potential diazotrophic communities among the aridity habitats, followed by permutational multivariate analysis of variance tests ( P < 0.05) using the vegan package ( ). A post hoc Tukey's test was used to evaluate the significance of changes in the relative abundance of taxonomic groups along the aridity gradient using the stats package ( ). The relationships between α-diversity, β-diversity, and γ-diversity were estimated with the OLSLRM using the stats package ( ). To assess the effect of the potential diazotroph species pool on β-diversity, the expected β-diversity among communities was calculated using an assembly simulation of the communities, as described by Kraft et al. ( ). The association between co-occurrence network topologies and β-diversity was evaluated using the Spearman coefficient. Random forest analysis was performed to quantify the relative contributions of various predictors, including soil properties (e.g., pH, soil bulk density, and soil moisture), plant characteristics (aboveground biomass, belowground biomass, coverage, plant richness, and height), community assembly (βNRI1 and RC1), and co-occurrence networks (NE, NV, AD, CC, DEN, MOD, and APL), to community β-diversity. Path analysis diagrams using the PLS-PM were used to identify the direct and indirect effects of aridity, soil properties, and plant characteristics on potential diazotrophic β-diversity. Each latent variable was chosen according to the values of Cronbach's alpha, Dillon-Goldstein's rho, loadings, and cross-loadings, including aridity for climate factors; pH, soil organic carbon, total nitrogen, and SM for soil properties; aboveground biomass, belowground biomass, coverage, and plant richness for vegetation variables; and NV, APL, MOD, and DEN for co-occurrence patterns. Path coefficients represent the direction and strength of the linear relationships between the latent variables and the explained variability ( R 2 ). The goodness of fit was used to assess the predictive power of the established PLS-PM, with a goodness of fit of >0.6 considered acceptable. The PLS-PM was constructed using the "plspm" package ( http://www.gastonsanchez.com/PLS_Path_Modeling_with_R.pdf ) in R. All analyses were performed using R 4.2.0 ( ).
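The random forest and PLS-PM analyses above were run in R on the field data. As a rough, self-contained illustration of how the relative importance of predictors can be ranked, the sketch below fits a scikit-learn random forest to synthetic data using a hypothetical subset of the predictor names listed above; the numbers it prints are illustrative and are not the study's results.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 120                                  # illustrative sample size only
X = pd.DataFrame({                       # hypothetical subset of the predictors
    "pH": rng.normal(7.5, 0.8, n),
    "soil_moisture": rng.uniform(2, 30, n),
    "aboveground_biomass": rng.uniform(10, 300, n),
    "betaNRI1": rng.normal(0, 1, n),
    "RC1": rng.normal(0, 1, n),
    "MOD": rng.uniform(0.2, 0.8, n),
    "DEN": rng.uniform(0.01, 0.2, n),
})
# Synthetic response loosely driven by the assembly and network predictors
y = -0.5 * X["betaNRI1"] - 0.3 * X["MOD"] + 0.1 * X["pH"] + rng.normal(0, 0.3, n)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
importance = pd.Series(rf.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False).round(3))
```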
Field surveys were conducted at 60 sites spanning an aridity gradient (−0.2 to 1.0 aridity level) across the Tibetan Plateau (Fig. S1). Site aridity was represented by 1 − AI , where AI represents the ratio of precipitation to potential evapotranspiration and was obtained from the Global Aridity Index and Potential Evapotranspiration Climate database version 2 ( http://worldclim.org/ ). The elevation, mean annual precipitation, and mean annual temperature of the study sites ranged from 3,500 to 4,900 m, from 89 to 540 mm, and from 0.3°C to 7.6°C, respectively. The sites were arranged with intervals of 50 to 100 km, and to minimize the effects of human disturbance, the study sites were each located ≥50 km from human habitation and ≥1 km from major roads. In our study, the sites were grouped into humid (aridity < 0.35), semi-arid (0.35 < aridity < 0.8), and arid (aridity > 0.8) habitats based on the aridity level ( ), and the corresponding vegetation included alpine meadows (e.g., Kobresia , Stipa , Cleistogenes , and Astragalus species), alpine steppe (e.g., Stipa , Kobresia , Cleistogenes , and Oxytropis species), and alpine desert steppe (e.g., Stipa , Ajania , Draba , and Oxytropis species).
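For illustration, the aridity index and habitat classification described above can be expressed as a small helper function; the handling of values falling exactly on the 0.35 and 0.8 boundaries is an assumption, since the text gives strict inequalities.

```python
def aridity_class(precip_mm, pet_mm):
    """Site aridity (1 - P/PET) and habitat class using the thresholds above;
    the treatment of values exactly at 0.35 and 0.8 is assumed."""
    aridity = 1.0 - precip_mm / pet_mm
    if aridity < 0.35:
        label = "humid"
    elif aridity <= 0.8:
        label = "semi-arid"
    else:
        label = "arid"
    return round(aridity, 2), label

# Example: a site receiving 300 mm precipitation with 900 mm potential evapotranspiration
print(aridity_class(300, 900))   # (0.67, 'semi-arid')
```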
Soil and vegetation sampling were conducted in August 2020. Six 1 m × 1 m plots were established at each of the 60 sites (23 humid, 21 semi-arid, and 16 arid). In each plot, after recording the vegetation coverage (CO), the height, and the number of plant species richness (PR), the aboveground parts were clipped and dried to obtain the aboveground biomass (AGB), and the roots were washed with tap water and dried to obtain the belowground biomass (BGB). For soil sampling, soil cores were collected from five points (four corners and center) in each plot and then fully blended to produce a composite soil sample, and a total of 360 soils were collected. After roots and stones were removed, each of the composite soil samples was separated into two halves after passing through a 2-mm filter. One part of each soil sample was stored at –80°C for subsequent DNA extraction, and the second part of each soil sample was air-dried to facilitate the measurement of physiochemical properties.
Soil moisture (SM) was estimated by oven-drying the samples at 105°C for 24 h. Soil bulk density was then estimated using the soil cores (volume, 100 cm 3 ), and clay (<0.002 mm) content was analyzed using a laser particle size analyzer (Mastersizer 2000; Malvern Instruments, Malvern, UK). Soil pH was determined using a pH meter and a soil-to-water ratio of 5:2 (vol/wt). Total nitrogen was measured using the Kjeldahl digestion method, and soil NH 4 + -N and NO 3 − -N contents were analyzed by a segmented flow autoanalyzer system (AutAnalyel; Bran+Luebbe GmbH, Norderstedt, Germany) after being extracted with 2 mol/L KCl (1:10 wt/vol) ( ). Dissolved organic nitrogen, dissolved organic carbon, phosphorus, and soil organic carbon were determined following the method of Wang et al. ( ). Soil available phosphorus was measured by molybdate ascorbic acid and a UV spectrophotometer (Camspec, Cambridge, UK). Available potassium was determined using a flame photometer after extraction with 1.0 M ammonium acetate (CH 3 COONH 4 ) ( ).
Microbial DNA was extracted from each of the composite soil samples (0.5 g) using a FastDNA SPIN Kit, according to the instructions of the manufacturer (MP Biomedicals, Cleveland, USA), and the quality and concentration of each of the resulting DNA samples were measured using a NanoDrop 2000 spectrometer (Thermo Scientific, Wilmington, DE, USA). The nif H gene was analyzed by high-throughput qPCR on an ABI Prism 7500 Real-Time qPCR system (Applied Biosystems, Foster City, CA, USA) using the primer pairs nif H-F (5′-AAAGGYGGWATCGGYAARTCCACCAC-3′) and nif H-R (5′-TTGTTSGCSGCRTACATSGCCATCAT-3′) according to the procedures described by Zhang et al. ( ). The details of amplification are shown in Appendix S1 in the supplemental material.
After DNA extraction, the primers nif H-F/ nif H-R, carrying a 12-bp barcode to identify the different samples, were used to amplify the nif H gene sequences. The PCR program was as follows: 95°C for 3 min; 35 cycles of 95°C for 30 s and annealing at 55°C for 30 s; followed by extension at 72°C for 10 min. The PCR products were separated on 2.0% agarose gels and purified using the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, USA). Purified amplicons were combined at equimolar concentrations and paired-end sequenced (2 × 300 bp) using an Illumina MiSeq platform (Illumina, San Diego, CA, USA) following standard protocols. The resulting sequences were processed using USEARCH v11 to merge paired-end sequences, eliminate primer sequences, and filter out low-quality sequences (quality score < 20, containing ambiguous nucleotides, or not matching the primer). The filtered sequences were then clustered into OTUs at 97% sequence similarity using UPARSE, and taxonomic identities were assigned to the OTUs using the RDP classifier (version 2.2; https://sourceforge.net/projects/rdp-classifier/files/rdp-classifier/rdp_classifier_2.2.zip/download ). Before statistical analysis, OTU tables were rarefied to an even number of sequences per sample. The details of sequence quality control, PCR amplification, and taxonomy assignment are described in Appendix S1.
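As a minimal illustration of the final rarefaction step (the upstream merging, filtering, and clustering were done with USEARCH/UPARSE and are not reproduced here), the sketch below subsamples an OTU count vector to an even depth without replacement; the counts and depth shown are hypothetical.

```python
import numpy as np

def rarefy(counts, depth, seed=0):
    """Subsample an OTU count vector to an even sequencing depth without
    replacement, as is done before diversity analyses."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    if counts.sum() < depth:
        raise ValueError("sample has fewer reads than the target depth")
    reads = np.repeat(np.arange(counts.size), counts)   # one label per read
    keep = rng.choice(reads, size=depth, replace=False)
    return np.bincount(keep, minlength=counts.size)

print(rarefy([120, 30, 0, 7, 843], depth=500))   # read counts for five OTUs
```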
A recently proposed assembly approach, entitled infer community assembly mechanisms by phylogenetic bin-based null model analysis (iCAMP), was used to evaluate the contribution of ecological processes of the alpine soil potential diazotrophic community assembly ( ). OTUs were first divided into bins based on their phylogenetic relationships created by using FastTree ( ). The relative contributions of heterogeneous selection, homogeneous selection, homogenizing dispersal, drift, and dispersal limitation in each bin were measured using the within-bin beta net relatedness index (βNRI) and modified Raup-Crick metric (RC Bray ). Within each bin, significant deviations (βNRI > +1.96 or βNRI < –1.96) were interpreted as the dominance of heterogeneous and homogeneous selection, respectively. The remaining pairwise comparisons with |βNRI| ≤ +1.96 were divided by RC Bray . As opposed to those with RC Bray > +0.95, which were considered dispersal limitation, pairwise comparisons with RC Bray < –0.95 were viewed as homogenizing dispersal. The remaining uncategorized pairwise comparisons were used to estimate the relative importance of ecological drift. To explain the effect of the assembly process on β-diversity, the migration rate and environmental adaptability of the potential diazotrophic community were estimated using a neutral community model and Levin’s niche breadth index ( ). The first axes of the PCA based on βNRI (βNRI1) and RC Bray (RC1) were used to represent the community assembly.
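The decision rule described above can be summarized in a few lines. Note that in iCAMP these assignments are made per phylogenetic bin and then weighted by relative abundance, which the simplified sketch below omits; it only applies the stated thresholds to a single pairwise comparison.

```python
def assembly_process(beta_nri, rc_bray):
    """Assign a pairwise comparison to an ecological process using the
    betaNRI and RC_Bray thresholds described above."""
    if beta_nri > 1.96:
        return "heterogeneous selection"
    if beta_nri < -1.96:
        return "homogeneous selection"
    if rc_bray > 0.95:
        return "dispersal limitation"
    if rc_bray < -0.95:
        return "homogenizing dispersal"
    return "drift"

for pair in [(2.3, 0.10), (-0.4, 0.98), (0.7, -0.99), (0.2, 0.30)]:
    print(pair, "->", assembly_process(*pair))
```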
Co-occurrence networks of potential diazotrophs in humid, semi-arid, and arid habitats were constructed using the SparCC method ( ). Rare OTUs (<20 sequencing reads) were eliminated. Robust and significant Spearman's correlations ( ρ > 0.3, P < 0.05) were retained to construct the co-occurrence networks. A set of metrics (NE, NV, and positive/negative edge ratio) was calculated to describe the network topologies. The vertices in the networks represent OTUs, the edges represent significant Spearman's associations between vertices (i.e., OTUs), and connected vertices indicate co-occurrence across samples. Subsequently, subnetworks were extracted, and NE, NV, AD, CC, MOD, DEN, and APL were calculated to describe the topology of each subnetwork. Based on their within-module connectivity (Zi) and among-module connectivity (Pi) scores, vertices were classified as peripherals (Zi < 2.5, Pi < 0.62), connectors (Zi < 2.5, Pi > 0.62), network hubs (Zi > 2.5, Pi > 0.62), or module hubs (Zi > 2.5, Pi < 0.62) ( ), and vertices with high Zi or Pi values (i.e., module hubs, connectors, and network hubs) were regarded as keystone taxa. An interactive Gephi platform ( http://gephi.github.io/ ) was used to visualize the networks. The first PCA axis of the standardized module expression data was taken as the network's module eigengene ( ), and the relationships between the module eigengene, soil properties, plant characteristics, assembly processes, and β-diversity were evaluated based on their Spearman's coefficients.
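Applying the Zi/Pi cut-offs given above to classify vertex roles reduces to a simple rule, sketched below; computing Zi and Pi themselves requires the module partition of the network and is not shown here.

```python
def node_role(zi, pi):
    """Topological role of a vertex from its within-module degree (Zi) and
    among-module connectivity (Pi), using the cut-offs given above."""
    if zi > 2.5:
        return "network hub" if pi > 0.62 else "module hub"
    return "connector" if pi > 0.62 else "peripheral"

print(node_role(3.1, 0.70))   # network hub
print(node_role(1.2, 0.05))   # peripheral
```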
Correlation between oxygen reserve index monitoring and blood gas oxygen values during anesthesia in robotic total prostatectomy surgery | a6bf2d63-8c00-4ff6-aa71-49e353982302 | 11770949 | Surgical Procedures, Operative[mh] | In recent years, laparoscopic interventions with a robotic approach have been more prevalently preferred . After the FDA (Food and Drug Administration) approved it for prostate surgery in 2001, the da Vinci Surgical System (Intuitive Inc. Sunnyvale, CA, USA) became the leading technology in urology over the next twenty years. The continuous development of alternative platforms, such as the Hugo™ Robot-Assisted Surgery RAS system, is an important step toward making robotic surgery more widely accessible . Radical prostatectomy is the most commonly performed procedure in urology using a minimally invasive approach . Robotic-assisted radical prostatectomy (RARP) is preferred over open surgery due to factors such as its minimally invasive approach, shorter postoperative recovery times, and lower rates of blood transfusion requirement . The main treatment modality in prostate cancer is radical prostatectomy. The purpose of oncological surgery is to ensure the excision of the tumor while providing the best possible functional results . Many studies have shown that RARP reduces mortality and morbidity rates . To utilize the effects of gravity to move the abdominal viscera away from the surgical site during RARP, the Trendelenburg position is preferred. The implementation of RARP with the steep Trendelenburg lithotomy position and the insufflation of the abdomen (pneumoperitoneum) with carbon dioxide (CO 2 ) may lead to cardiological, pulmonary, and neurohumoral side effects during anesthesia . The increase in blood volume causes physiological stress in the right ventricle, and the workload of the heart increases. Lung volumes decrease by 20%, and an imbalance occurs in ventilation/perfusion (V/Q). Respiratory mechanics are disrupted. With the increase in intraabdominal pressure and the cephalic displacement of the diaphragm, pulmonary compliance, vital capacity, and functional residual capacity (FRC) decrease. In volume-controlled ventilation, to maintain volume per minute, the peak airway pressure and plateau pressure rise . In patients with low preoperative respiratory reserves, it becomes difficult to maintain normocarbia and normal acid-base status. Because of compression atelectasis, V/Q incompatibility develops, dead space increases, pulmonary compliance decreases, and all these factors can cause hypoxia . Pulmonary edema can develop through the absorption of crystalloid fluids . Plateau pressure is recovered by bringing the patient to the supine position after the operation, but it remains slightly above the baseline value before pneumoperitoneum and the steep Trendelenburg position . The main reasons for partial pressure of carbon dioxide (PaCO 2 ) to increase include peritoneal CO 2 absorption, increased dead space and metabolism, inadequate ventilation, subcutaneous emphysema, and/or CO 2 embolism . As a result of hypercarbia, acidosis, tachycardia, arrhythmias, and increased intracranial pressure are observed. Therefore, to simultaneously maintain normocarbia and low airway pressures, ventilator parameters must be adjusted. End-tidal CO 2 (ETCO 2 ) pressure may not be compatible with PaCO 2 . For this reason, it is needed to monitor PaCO 2 in arterial blood gases . 
Although arterial blood gas measurement is an effective method, as demonstrated in various studies, its use for continuous monitoring is limited because it is invasive. The patient's oxygenation may already have changed by the time the results of an arterial blood gas measurement arrive. The monitoring of oxygenation using a saturation probe, while non-invasive, has limited usefulness because it responds late to decreasing partial pressure of oxygen (PaO 2 ) in the blood and cannot reflect hyperoxic PaO 2 values. While oxygen inhalation is given to prevent hypoxia during anesthesia, hyperoxia can develop. Hyperoxia can lead to an increase in oxygen radicals and oxidative stress. The negative effects of hyperoxia have been shown in children, newborns, and the elderly . It is associated with acute pulmonary injury and atelectasis . It is known that prolonged exposure to an FiO 2 of 1.0 can lead to resorption atelectasis and trigger an inflammatory response in the lungs. Additionally, it may cause coronary and peripheral vasoconstriction, a decrease in cardiac output, or direct damage through oxidative stress . Because arterial blood gas measurements for monitoring hyperoxia are invasive and oxygen saturation values cannot reveal hyperoxia, the Oxygen Reserve Index (ORI) was developed as another measurement method allowing the perioperative monitoring of oxygenation. ORI is a measurement method that uses non-invasive hemoglobin sensors . The measurement of this novel variable in patients who are receiving supplementary oxygen has become possible through the ability of multi-wavelength pulse co-oximetry to analyze changes in both arterial and venous pulsatile blood perfusion based on light passing through a finger. When oxygen is provided, PaO 2 rises above 100 mmHg, and peripheral oxygen saturation (SpO 2 ) is maximized to a value close to 100%. However, as PaO 2 rises toward approximately 200 mmHg, the venous oxygen saturation (SvO 2 ) at the measurement site continues to increase until it stabilizes (at a saturation of approximately 80%). The change in light absorption over this PaO 2 range, analyzed by combining the Fick and oxygen content equations, constitutes the basis of ORI calculations. Thus, ORI is a unitless index that can take values between 0.00 and 1.00; it is a relative indicator of changes in PaO 2 in a moderately hyperoxic range (approximately 100–200 mmHg), and it is intended to be used in patients who are given additional oxygen . A drop in the ORI value can provide an early warning of impending desaturation events . ORI monitoring is recommended in settings with the potential for atelectasis development, such as obesity surgery, laparoscopic interventions, the Trendelenburg position, and single-lung ventilation . During anesthesia induction, it is known that ORI increases owing to preoxygenation, and after intubation, when the patient is positioned in the Trendelenburg position, ORI fluctuates in a downward direction. There are studies indicating that ORI is a useful reference for predicting oxygenation and that ORI measurement facilitates early interventions, such as adjusting PEEP levels and performing recruitment maneuvers . In our study, we aimed to examine arterial blood gas measurements and ORI values during RARP performed in the Trendelenburg position and to present the relationship between them.
As a non-invasive measure that can predict a shift toward hyperoxia when PaO 2 rises during oxygen inhalation, ORI may reduce the number of arterial blood gas measurements required. Because the Trendelenburg position disrupts respiratory mechanics, we wanted to determine whether ORI, which is easier to obtain than arterial blood gas measurements, allows safe monitoring against hyperoxia.
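ORI itself is computed by a proprietary multi-wavelength co-oximetry algorithm and cannot be reproduced from PaO 2 alone. Purely to illustrate the operating range described in the introduction (an index near 0 at PaO 2 of roughly 100 mmHg, rising toward 1 as PaO 2 approaches roughly 200 mmHg), a toy linear mapping might look like the following; this is a didactic sketch, not the device's calculation, and the linearity and the exact 100/200 mmHg bounds are assumptions.

```python
def toy_ori(pao2_mmhg, lower=100.0, upper=200.0):
    """Toy 0-1 index rising linearly across the moderately hyperoxic range;
    illustrative only, NOT the proprietary ORI computation."""
    frac = (pao2_mmhg - lower) / (upper - lower)
    return max(0.0, min(1.0, frac))

for pao2 in (80, 120, 150, 210):
    print(pao2, "->", round(toy_ori(pao2), 2))
```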
After obtaining ethics committee approval (decision date: 28.06.2024, decision number: 390), the study was carried out with adult male patients over the age of 18 who underwent robotic-assisted radical prostatectomy (RARP) operations at a university hospital. The inclusion criteria were as follows: operations expected to last longer than 2 h, patients with a BMI below 35 kg/m 2 , patients who could undergo ORI monitoring, patients who had invasive arterial monitoring, and patients classified as American Society of Anesthesiologists (ASA) physical classes 1, 2, or 3. The exclusion criteria were as follows: male patients scheduled for robotic radical prostatectomy expected to last less than 2 h, patients with a BMI above 35 (BMI > 35 kg/m 2 ), patients who could not undergo ORI monitoring, patients with conditions that could cause hemodynamic instability and respiratory distress, and patients classified as ASA 4. After pre-anesthesia evaluations, in the operating room, five-lead electrocardiography (ECG), SpO 2 measurement, and non-invasive blood pressure monitoring were performed in each patient. After peripheral venous cannulation, 0.05 mg/kg midazolam (iv) was given, and invasive hemodynamic monitoring was performed by radial artery cannulation. Preoxygenation was provided to the patients at a rate of 6 L/min. For anesthesia induction, the patients were given propofol at 1–2 mg/kg, fentanyl at 1 mcg/kg, ketamine at 1–1.5 mg/kg, and rocuronium at 0.6 mg/kg (iv); they were then intubated using cuffed endotracheal tubes and connected to a mechanical ventilator. A 50% oxygen-air mixture was given during anesthesia maintenance. Considering airway pressures and ensuring that ETCO 2 values would be 30–35 mmHg, volume-controlled ventilation was provided with a tidal volume of 5–8 ml/kg, a respiratory rate of 12–14/min, and 5 mmHg positive end-expiratory pressure (PEEP). For anesthesia maintenance, in addition to an intravenous remifentanil infusion and inhalation of 50% O 2 -air and sevoflurane, intermittent iv bolus doses of rocuronium were used. After general anesthesia induction, because the patient was going to be placed in a lithotomy and steep Trendelenburg position during RARP, he was fastened to the operating table using chest straps and soft shoulder supports to prevent him from sliding. The body parts of the patient that could be exposed to compression, such as the elbows, axillae, shoulders, and back, were supported with soft pads. During robotic surgery, muscle relaxants were administered every 35–40 min, taking into account the patient's muscle relaxant metabolism. This approach was necessary to ensure adequate muscle relaxation throughout the procedure, as the robotic arms require the patient to remain still and immobile. Because of the prolonged surgery and repeated doses of muscle relaxants, the patients were extubated with sugammadex (4 mg/kg) in the intensive care unit after they had warmed up sufficiently and cleared the relaxant. No pulmonary complications occurred in any of the patients. Recruitment maneuvers were not routinely performed on every patient. For ORI monitoring, an ORI pulse oximeter was placed on the patient's index finger. The moment when ORI values were first read was considered the baseline, and arterial blood gas and ORI values were recorded simultaneously at the baseline (T1), 30 min later (T2), 1 h later (T3), 3 h later (T4), and 5 h later (T5). The correlations between the simultaneously recorded ORI and arterial blood gas values were analyzed.
Statistical analysis

The collected data were analyzed using SPSS (Statistical Package for the Social Sciences) version 26 (IBM Corp, USA). The normality of the variables was tested using visual methods (histogram and probability plots) and analytical methods (Kolmogorov-Smirnov/Shapiro-Wilk tests). The results are presented as mean ± standard error values. For the data with PaO 2 < 240 mmHg (n = 90), correlation analyses were performed using Pearson's correlation coefficient, and a simple linear regression analysis was performed. The threshold of PaO 2 < 240 mmHg was chosen according to a previous study that investigated the relationship between ORI and PaO 2 [Applegate RL 2nd, Dorotta IL, Wells B, Juma D, Applegate PM. The relationship between ORI values and values of arterial partial pressure of oxygen during surgery. Anesth Analg. 2016;123:626–33. 10.1213/ANE.0000000000001262]. Additionally, 95% prediction intervals were determined for the data with PaO 2 < 240 mmHg (n = 90). To obtain the optimal cut-off ORI value to detect PaO 2 ≥ 150 mmHg, a receiver operating characteristic (ROC) curve analysis was performed. To confirm the predictability and diagnostic ability of the ROC curve, the area under the curve (AUC) and its 95% confidence interval (95% CI) were also determined. To assess the trending ability of ORI, a four-quadrant plot analysis was performed. For this analysis, all 120 datasets, that is, 90 changes (5 changes per case) in PaO 2 (ΔPaO 2 ) and ORI (ΔORI), were used.
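For illustration only, the correlation, regression and cut-off analyses described above could be reproduced along the following lines in R with the pROC package; the study itself used SPSS, and the data frame `d` with columns `ori` and `pao2` is a hypothetical placeholder, not the authors' actual dataset or code.

```r
# Minimal sketch (assumed data frame `d`: one row per paired measurement,
# numeric columns `ori` and `pao2`, with PaO2 in mmHg)
library(pROC)

sub <- d[d$pao2 < 240, ]   # analyses restricted to PaO2 < 240 mmHg

# Pearson correlation and simple linear regression, with 95% prediction intervals
cor.test(sub$ori, sub$pao2, method = "pearson")
fit <- lm(pao2 ~ ori, data = sub)
summary(fit)                                            # slope, r^2, p-value
head(predict(fit, interval = "prediction", level = 0.95))

# ROC analysis: optimal ORI cut-off for detecting hyperoxemia (PaO2 >= 150 mmHg)
d$hyperox <- as.integer(d$pao2 >= 150)
roc_obj <- roc(d$hyperox, d$ori, ci = TRUE)             # AUC with 95% CI
coords(roc_obj, "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))
```

The Youden-based `coords()` call returns the threshold together with its sensitivity and specificity, which corresponds to the cut-off reported in the Results.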
The sample of the study included 24 male patients. The mean age of the patients was 63.30 ± 7.74 years, their mean BMI was 26.64 ± 2.84 kg/m 2 , and their mean duration of operation was 351.52 ± 48.72 min (Table ). The ASA scores were as follows: 66.6% (n = 16) of the patients were ASA 2, and 33.3% (n = 8) were ASA 3. The values of ORI, pO 2 , pH, and pCO 2 obtained at different measurement times are presented in Table . The lowest mean ORI value was 0.26 ± 0.07 at T2, while the highest mean ORI value was measured as 0.49 ± 0.09 at T5. The mean ORI value across all measurements was 0.36 (median: 0.28, SD: 0.3694). The lowest and highest mean PO 2 values were 135.60 ± 8.35 mmHg at T2 and 168.91 ± 19.92 mmHg at T5. Among all measured PO 2 values, the lowest was 58.40 mmHg, whereas the highest was 483 mmHg. The mean PO 2 value across all measurements was 154.53 mmHg (SD: 66.60). The lowest mean pH value was 7.31 ± 0.02 at T5, whereas the highest PCO 2 value was also measured at T5 (Table ). Towards the end of the operation, the patients' pH levels decreased, while their CO 2 , ORI, and PO 2 levels increased.

Relationship between ORI and PaO 2

Figure shows the relationship between the ORI and PaO 2 values of all 100 datasets, and Fig. shows the scatter diagram of ORI values obtained when PaO 2 < 240 mmHg (n = 90). The results of the simple linear regression analysis are shown in Table .
Cut-off ORI value for hyperoxemia

Figure shows the ROC curve that was used to obtain the optimal cut-off ORI value to detect PaO 2 ≥ 150 mmHg. The AUC was 0.901 (95% CI: 0.821–0.981), and the cut-off value obtained from the ROC curve (cut-point ORI) was 0.220 (sensitivity: 0.826, specificity: 0.771).
In this study, we reported the relationship between PaO 2 values in arterial blood gas measurements and ORI values in patients who underwent RARP in the Trendelenburg position. The mean arterial blood gas PaO 2 value of the patients was 154.5352 (median: 141, min: 58.40, max: 483.00), while their mean ORI value was 0.3676 (median: 0.28, min: 0, max: 1). It can be stated that in patients given 50% oxygen in the Trendelenburg position, a relatively safe range was achieved by keeping PaO 2 at a mean value of 154 and ORI at a mean value of 0.36 (Table ). In all measurements except for T2, increases in PaO 2 and those in ORI were observed simultaneously. In our study, ORI decreased at T2 (0.26 ± 0.07), while PaO 2 also dropped to 135.60 ± 8.35. T1 was the time point at which ORI values were first read, and it reflected the synchronization of the baseline blood gas values and ORI values. The lower ORI and PaO 2 values at T2 compared to other time points can be explained by the intubation process. In a similar study, ORI values were found to identify momentary changes in PO 2 values during intubation sooner than saturation readings . In that study, during rapid sequence intubation, the change in ORI values was observed before the decrease in oxygen saturation; hence, the simultaneous monitoring of ORI values and saturation readings was recommended. In another similar study conducted on obese patients, the ORI values at the end of preoxygenation were not significantly different from those at the beginning of intubation (0.49 ± 0.18 vs. 0.41 ± 0.09), but they slightly decreased once ventilation resumed (0.36 ± 0.12). In contrast, the ORI values of normal BMI patients decreased between the end of preoxygenation and the beginning of intubation (0.67 ± 0.25 vs. 0.57 ± 0.26), and further decreased once ventilation resumed (0.43 ± 0.18). The ORI values of obese patients were lower at both the end of preoxygenation and the beginning of intubation compared to normal BMI patients . In our study, the average BMI of patients was 26.64 ± 2.84, which is close to the normal range. We were unable to find any studies on ORI that involved constant oxygen inhalation. In this study, we aimed to present the relationship between oxygen and ORI by using a constant oxygen value. In future studies, FiO 2 values can be optimally adjusted with reference to ORI by taking into account the correlation between PaO 2 and ORI when oxygen is kept constant. For instance, in the study conducted by Yoshida et al., FiO 2 values were changed so that ORI would be 0.5, 0.2 and 0, and correlations of these values with blood gas measurements were examined . A PaO 2 value of 150 mmHg has been suggested as an acceptable target to identify hyperoxia in many studies. In our study, for the identification of a target PaO 2 value of ≥ 150 mmHg, the AUC was 0.901 (95% CI: 0.821–0.981) and the optimal ORI cut-off value obtained from the ROC curve was 0.22 (sensitivity: 0.826, specificity: 0.771) (Fig. ). It seems that keeping ORI below 0.22 in cases where blood gas measurements are not made would prevent hyperoxia. While precautions taken against hypoxia under general anesthesia are important, FiO 2 values that are kept high can result in hyperoxia. ORI measurements can be helpful in the determination of suitable FiO 2 values to prevent hyperoxia. In cases of hyperoxia, conventional oxygen saturation monitoring may not provide clear information about PaO 2 .
When saturation is at 100%, PaO 2 may have a value higher than 128 mmHg. Saturation would never exceed 100% despite further elevations in PaO 2 values. This demonstrates that saturation monitoring is inadequate in the prevention of hyperoxia. ORI can be used along with saturation monitoring to predict hyperoxia. In a similar study, in a ROC analysis conducted to identify PaO 2 ≥ 150 mmHg, Yoshida et al. found the cut-off point of ORI to be 0.21 (sensitivity: 0.950, specificity: 0.755), which was very close to the value found in our study. While the results of the linear regression analysis for ORI and PaO 2 (PaO 2 < 240 mmHg) in our study were [simple linear regression, n = 90; r 2 = 0.505, p < 0.001], in the study conducted by Yoshida et al., the result was r 2 = 0.706. In other studies supporting this relationship, a strong relationship was identified between ORI and PaO 2 at PaO 2 values below 240 mmHg. Our study and other studies have demonstrated a significant connection between ORI and PO 2 values in the context of their simultaneous interpretation at PO 2 values below 240 mmHg. Regarding the variation between ORI and its matching PaO 2 , Applegate et al. also reported that there was no correlation between ORI and PaO 2 when PaO 2 was 240 mmHg or higher (r 2 = 0.0016). Because ORI has low sensitivity to PaO 2 in cases of severe hyperoxia, blood gas analyses would be needed. While ORI is not an alternative to blood gas analyses, if it is greater than zero in robotic or other surgical procedures performed in the Trendelenburg position, it may be considered that the patient is not hypoxic. In cases where respiration is compromised, such as when general anesthesia leads to atelectasis and the Trendelenburg position compromises lung capacity, practitioners tend to increase oxygen inhalation levels to prevent potential hypoxemia and hypoxia. With the help of ORI monitoring, if ORI values are greater than 0, they can assume that FiO 2 values are sufficient. FiO 2 can then be adjusted based on PaO 2 values by checking blood gases in the next step. Although ORI is not an alternative to blood gas measurements, it can reduce the frequency of blood gas measurements. However, the need for an appropriate monitor and probe for ORI monitoring can increase the cost and limit its usage. The limitations of this study included the fact that our sample size was small, and there were variations in some parameters of the patients such as hemoglobin levels, body temperature, tissue perfusion, age, and BMI. The fact that a power analysis was not conducted in our study constitutes another limitation. There is a need for new studies with a larger number of patients in multicenter settings.
Although it is not an alternative to blood gas measurements, ORI can be recommended for predicting mild hyperoxemia and keeping oxygenation within a safe range in situations that compromise respiration, such as the Trendelenburg position. With ORI monitoring, unnecessary FiO 2 elevations and the development of hyperoxia can be prevented to a certain extent. In the future, the use of ORI monitoring in bariatric surgery, robot-assisted surgeries, procedures requiring single-lung ventilation, and other operations performed in the Trendelenburg position could contribute to the identification of more effective and safer treatment methods, enabling the administration of sufficient oxygen without causing hyperoxia.
Effects of patent foramen ovale in migraine: a metabolomics‐based study | acfcf5e5-80f9-4d8c-9624-604f0c2ecac5 | 11826071 | Biochemistry[mh] | A patent foramen ovale (PFO) is a deformity of the atrial septum present in ∼20% of the general population (Calvert et al., ). It allows some unoxygenated venous blood and small molecules, such as paradoxical microemboli and neuroactive substances, to directly enter the systemic circulation via the intracardiac right‐to‐left shunt (RLS) (Kutty et al., ). These bloodborne substances, avoiding pulmonary degradation through PFO, can lead to dysfunction of the cerebral vasculature and cortical spreading depression (CSD) (Bigal et al., ). Thus, PFO is considered a risk factor of many neurological disorders and described as a ‘back door to the brain’ (Latson, ). Migraine, with a prevalence of ∼20%, is one of the most common disabling disorders worldwide (Lipton et al., ). Migraine, particularly migraine with aura, may be related to PFO. It is reported that PFO has a prevalence of ∼40% in migraineurs, and up to 60% in migraineurs with aura, though some investigations showed it is similar in individuals without migraine and in migraineurs without aura (Anzola et al., ; Rundek et al., ). Furthermore, PFO increased the morbidity risk of migraine without aura by 1.71‐fold [95% confidence interval (CI) 1.19–2.47] (Tang et al., ), and migraine with aura by 7.78‐fold (95% CI 2.53–29.30) (Schwerzmann et al., ). The overall estimated odds ratio for the association between PFO and migraine was 5.13 (Schwedt et al., ). Moreover, a significant reduction in the frequency and severity of migraine attacks owing to PFO closure has been reported (Vigna et al., ). Previous studies have suggested a correlation between PFO and migraine; however, the reason for this correlation remains unclear. There are several hypotheses regarding PFO‐induced migraine. First, hypoxaemia can occur when haemodynamic or anatomical changes predispose to increased intra‐atrial RLS (Tobis et al., ). Normobaric hypoxia is an effective trigger of migraine attacks including aura (Frank et al., ). Second, small neuroactive molecules that circumvent pulmonary metabolism are involved in the modulation of regional cerebral blood flow (Lau et al., ). Abnormal cerebral blood flow plays an important role in the neurobiology of migraine (Youssef et al., ). Based on these findings, we hypothesized that some specific metabolites might bypass PFO into the brain leading to migraine. Metabolomics is a high‐throughput technology widely used for the simultaneous analysis of a large number of molecules (Aroke & Powell‐Roach, ). Metabolomic studies have shown that glucose, amino acid metabolism and small high‐density lipoprotein subspecies are altered in the blood of patients with migraine compared with healthy controls (Harder et al., ; Onderwater et al., ). Amino acid metabolites and peptides also show substantial alterations in the brains of migraine mice (Carreira et al., ). On the other hand, metabolomic studies have revealed that PFO can alter the metabolism of serotonin (5‐hydroxytryptamine; 5‐HT), pro‐inflammatory bradykinins and neurotensin, with a proven aetiopathogenic relationship between PFO and irritable bowel syndrome (Alvarez‐Fernandez et al., ). This has also been supported by metabolic differences in purine, nicotinamide, tetradecanedioic acid and bile acid between the pulmonary vein and pulmonary artery (Guo et al., ). 
To elucidate possible mechanisms behind the effect of PFO on migraine, we designed a project combining a clinical study and metabolomics.
Ethical approval

This study was approved by the Ethics Committee of West China Hospital of Sichuan University (No. 202058) and registered at the Chinese Clinical Trial Register (ChiCTR2000031591, http://www.chictr.org.cn/showproj.aspx?proj=51929 ) in accordance with the Declaration of Helsinki. All participants provided written informed consent (parental informed consent for minors). The animal experiment was approved by the Institutional Animal Care and Use Committee of Sichuan University (No. 20211503A) and adhered to the ARRIVE guidelines. All procedures used in this study are compliant with the ethical principles of The Journal of Physiology.
Research overview

We first conducted an observational clinical study of migraine and compared the clinical and biochemical features of migraineurs with and without PFO (case-finding study). Next, we followed up patients with PFO who underwent PFO closure, and compared the characteristics of migraine attacks with those of patients who did not undergo closure. We compared the pre- and postoperative features of migraineurs with PFO closure operations (closure study). We used untargeted aqueous metabolomics to compare the pre- and postoperative plasma of migraineurs with PFO (metabolomics study) to explore the mechanisms of their clinical and biochemical differences. To further explore the metabolic shifts in the brain, mice with and without PFO were identified (PFO animal model study) and then metabolic differences were tested in the plasma and brain using untargeted aqueous metabolomics (metabolomics study). To validate the metabolic alterations in the blood and brain of PFO patients, targeted LC-MS/MS was used to compare the pre- and postoperative plasma of migraineurs, and an enzyme-linked immunosorbent assay (ELISA) was performed on the brain tissues of mice with and without PFO. Finally, desorption electrospray ionization mass imaging (DESI-MSI) was used to localize metabolic differences in the brain (metabolomics study).
Subjects, setting and clinical evaluations

We conducted a prospective, observational, single-centre study of patients with migraine who were consecutively recruited from the outpatient clinic of the Department of Neurology, West China Hospital (WCH), between February 2020 and October 2023. The inclusion criteria were as follows: (1) diagnosis of migraine by two neurologists independently according to the criteria of the International Classification of Headache Disorders, 3rd edition (ICHD-3); (2) aged between 14 and 55 years; and (3) consented to participate in the study. The exclusion criteria were as follows: (1) history of head trauma with the loss of consciousness, or abnormal neuroimaging; (2) hypertension or cardiovascular diseases; (3) coexistence of active infection, distinct respiratory, gastrointestinal, haematological, urinary, autoimmunity or endocrine/metabolic diseases and syndromes; (4) pregnant or breast-feeding women; (5) incomplete contrast-transthoracic echocardiography (cTTE); and (6) underwent PFO closure before taking part in this study. Clinical data recorded from patient interviews included age, sex, educational level, body mass index (BMI), smoking, alcohol consumption, regular coffee consumption, the type of migraine (with or without aura), approximate mean headache frequency per month (within the last year), the intensity of headaches measured as the average score on a visual analogue scale (VAS), duration of migraine in years (from onset until interview), history of medication overuse headache (MOH) and family history of headache obtained from self-reports. MOH was classified by the MOH criteria (code 8.2) according to the ICHD-3. Educational level was classified as low (≤12 years) or high (>12 years). Smoking was defined as having at least one cigarette per day for more than 1 year. Alcohol consumption was defined as having at least one drink per month for more than half a year. Regular coffee consumption was defined as drinking coffee at least once a week for more than half a year. Blood samples were obtained from the right cubital vein between 08.00 and 11.00 h after overnight fasting. The patients did not use any prophylactics for at least 4 weeks and were at least 1 day headache-free and medication-free before biochemical and blood routine examinations. Routine blood analysis was done via an automatic blood analyser (Sysmex XN, Japan). A Roche Cobas c702 (Rotkreuz, Switzerland) was used for the analysis of biochemical indicators including fasting glucose, total cholesterol, triglyceride (TG), high-density lipoprotein (HDL), low-density lipoprotein (LDL), total bilirubin, bile acid, urea, cystatin-C (cys-C), sodium, potassium, calcium and phosphorus.
PFO detection

Migraineurs underwent cTTE after clinical evaluations. First, TTE was performed by 1–5 or 3–8 MHz multiplane transducers on a Philips IE33 ultrasound system. Patients were excluded if other cardiac diseases were identified by two experienced sonographers during the TTE examination. Second, micro-bubble contrast medium generated by mixing 1 ml of blood and 1 ml of air with 8 ml of saline was injected into the antecubital veins (Silvestry et al., ). Inter-atrial shunt was assessed at rest, during the Valsalva manoeuvre and coughing. PFO was diagnosed if at least three microbubbles appeared in the left atrium, either spontaneously or after provocative manoeuvres, within three cardiac cycles after complete opacification of the right atrium (Mas et al., ). The degree of RLS was quantified based on the number of detected microbubbles per frame in the left atrium: Grade 1, fewer than 10 microbubbles; Grade 2, 11–30 microbubbles; and Grade 3, over 30 microbubbles (Silvestry et al., ). The video records of each patient were reviewed by two sonographers and a consensus diagnosis was reached.
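As a small illustration of the shunt-grading rule above, the mapping from microbubble counts to RLS grade could be encoded as follows; the function name is hypothetical, and grouping a count of exactly 10 with Grade 1 is an assumption, since the text specifies "fewer than 10" and "11–30".

```r
# Hypothetical helper: microbubbles per frame in the left atrium -> RLS grade
# (Grade 1: <10, Grade 2: 11-30, Grade 3: >30; a count of exactly 10 is treated as Grade 1)
grade_rls <- function(microbubbles_per_frame) {
  cut(microbubbles_per_frame,
      breaks = c(-Inf, 10, 30, Inf),
      labels = c("Grade 1", "Grade 2", "Grade 3"))
}

grade_rls(c(4, 12, 45))   # -> Grade 1, Grade 2, Grade 3
```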
Follow-up

Migraineurs from the cohort with large inter-atrial shunts (RLS = Grade 3) suitable for PFO closure were continuously traced via hospitalization records in the Department of Cardiology, WCH. Migraineurs willing to undergo PFO closure were evaluated for clinical indications of transcatheter PFO closure by experienced cardiologists according to the consensus among Chinese experts (CSo, ). Those who completed percutaneous transcatheter interventional surgery for PFO closure received anti-platelet therapy (aspirin 100 mg/day for 6 months in combination with clopidogrel 75 mg/day for the first 3 months) strictly after PFO closure to avoid thrombosis. Patients were assessed for adverse effects of PFO closure (including chest pain, atrial fibrillation, paroxysmal supraventricular tachycardia, thrombosis, aortic dissection, out-of-position occluder and other related problems) during follow-up in the Department of Cardiology, WCH. Biochemical and blood routine examinations were also performed 12 months postoperatively. Headache follow-up evaluations, including the frequency of migraine attacks per month, VAS score and use of headache remedies, consisted of outpatient visits with neurologists and telephone assessments administered by neurologists every 3 months for 1 year. A ≥50% reduction in attacks 1 year after PFO closure was considered a significant improvement. Migraineurs with large RLS who were unwilling to receive PFO closure were also followed up through telephone or outpatient visits 1 year after the first assessment. The average frequency of migraine attacks per month within the year was also recorded. The proportion of migraineurs with decreased frequency was compared between patients with and without PFO closure.
Sample collection

To explore metabolic differences after PFO closure, 5 ml of blood was also collected in EDTA-containing collection tubes (Yuli, Jiangsu, China) 1 week before surgery and 12 months after surgery. It was also collected among migraineurs with large PFO who did not have PFO closure at their outpatient clinic visit and at the 1 year follow-up outpatient visit. Patients were at least 1 day headache-free and 1 week medication-free before blood collection for metabolomics. The overnight-fasting venous blood samples were processed within 30 min and plasma was collected after centrifugation (1,600 g for 15 min at 4°C), and then frozen at −80°C.
ELISA

Small pieces of brain tissue were weighed and mechanically homogenized in 9× volume of normal saline on an ice bath to obtain homogenates for biochemical ELISA. A commercial kit for glutathione (GSH)/oxidized glutathione (GSSG) (NJJC BIO A061-1, Nanjing, China) was used to measure the concentration of total GSH and GSSG in mouse brains. Assays were performed following the manufacturer's protocol.
The incidence of PFO in 129 T2/SvEms mice has been reported as ∼75% and the blood of PFO‐positive mice can pass from the right to left atrium through PFO after pressurization of the right atrium (Kirk et al., ). Breeding pairs of 129 T2/SvEms mice were purchased from the Jackson Laboratory (Bar Harbor, ME, USA), and 24 offspring mice aged 6–8 weeks were used for the experiment, with weights ranging from 15 to 20 g. All mice were group‐housed (four per cage) with food and water under a 12 h dark–light cycle (20.00–08.00 h). Anaesthesia was induced with 3% isoflurane (at a flow rate of 1 l/min medical oxygen) and maintained with 1–2% isoflurane in the oxygen (RWD Life Science, Shenzhen, China). The depth of anaesthesia was checked by pinch and corneal reflexes. The blood sample was collected from the retro‐orbital sinus under isoflurane anaesthesia via a cone mask. Mice were then immediately decapitated, and the brains were quickly extracted and stored at −80°C until further metabolomics analysis. Centrifugation and storage of mouse plasma were performed consistently with the methodology used for human plasma. Whole hearts were dissected and fixed in formalin for 24–36 h before being embedded in wax. Continuous wax sections of the hearts containing fossa ovalis (FO) were stained with haematoxylin and eosin (H&E) according to a previous study (Elliott et al., ) to identify the presence or absence of PFO. After histological sectioning of the heart, 129 T2/SvEms mice were divided into PFO‐positive or PFO‐negative groups, and the plasma and brain tissues of the corresponding mice were used in subsequent experiments.
Desorption electrospray ionization mass imaging

Frozen tissue sections from different regions of mouse brains were mounted on glass microscope slides and stored at −80°C prior to analysis. A quadrupole time-of-flight tandem mass spectrometer (Q-TOF-MS, Synapt G2-Si, Waters) coupled with a DESI source (Waters) was applied to the mass imaging analysis of the tissue sections. Specifically, acquisition was performed in negative mode with a mass range from 50 to 1200 m/z and the source voltage was −4.5 kV. The spray solvent consisted of 80% acetonitrile and 20% water spiked with 0.1% leucine-enkephalin (Waters) for real-time mass calibration. The setup parameters of the DESI source were optimized as follows: inlet-to-surface distance, 0.5 mm; spray-incident angle, 60°; and spray-to-inlet distance, 5 mm. For electrospray nebulization, the flow rate of solvent was 3 µl/min, and the sheath gas (N 2 ) flow was ∼72 psi. Finally, the tissue sections were scanned at a rate of 160 µm/s on a 2D moving stage in horizontal rows with a spatial resolution of 80 µm, which resulted in a 0.5 s scan time for each acquisition. The raw data were then processed using HDI (v1.5, Waters) to obtain the intensity distribution of each detected mass in tissue sections, which allowed the transformation of mass spectral files (raw data) to images.
Aqueous metabolites in plasma and tissue samples were extracted as previously described (A et al., ; Lu et al., ). Briefly, 50 µl of patient or mouse plasma was mixed with 250 µl spiked methanol (the NSK‐A stable isotope‐labelled internal standards kit from Cambridge Isotopes Laboratories, Andover, MA, USA), and vortexed at 200 g for 30 min at 4°C. The mixed solvent was centrifuged at 17,000 g for 20 min at 4°C. Finally, 200 µl of supernatant was dried in a vacuum concentrator at 30°C. For tissue sample extraction, exactly 10 mg of brain tissue was homogenized in 200 µl of spiked 80% methanol. Then, 800 µl 80% methanol (precooled to −80°C) was added and vortexed at 1500 rpm for 10 min at 4°C. The mixture was centrifuged at 13,300 rpm for 20 min at 4°C. Finally, 700 µl supernatant was transferred and dried in a vacuum concentrator at 30°C. The extracted residuals of either plasma or tissue were reconstituted in 200 µl mixed solvents consisting of 20% solvent A (water) and 80% solvent B (acetonitrile/water, 90:10, v/v), both containing 10 m m ammonium formate and 0.15% formic acid, prior to LC‐MS analysis. LC‐MS analysis was performed on an Ultimate 3000 ultra‐performance liquid chromatograph coupled with a Q Exactive Plus Q‐Orbitrap HRMS (Thermo Fisher Scientific, Waltham, MA, USA). A BEH amide column (2.1 × 100 mm, 1.7 µm; Waters, Milford, MA, USA) was applied to the chromatographic separation at a flow rate of 0.3 ml/min maintained at 40°C. The mobile phase consisted of the above‐mentioned solvent A and solvent B. The linear‐gradient elution was as follows: 0–2 min, 100% B; 2–9 min from 100% B to 85% B; 9–14 min from 85% B to 50% B; 14–17 min, 50% B for column cleaning; 17–25 min, 100% B for column equilibration. The injection volume was 2 µl and the solvent for needle wash was 0.15% formic acid in methanol/water (50:50, v/v). An untargeted MS analysis was performed under data‐dependent acquisition (DDA) mode. The acquisition parameters were set according to the following specifications: mass range, 60–900 m/z ; TopN, 8; intensity threshold to trigger MS/MS acquisition, 2e 4 ; dynamic exclusion, 5 s; stepped NCE at 20, 30 and 40 eV. The source parameters were as follows: sheath gas at 35 arb, auxiliary gas at 10 arb, spray voltage at +3.2 kV in positive mode, capillary temperature at 320°C, auxiliary gas heater temperature at 350°C and s‐lens RF level at 60%.
Targeted metabolites were extracted using methanol. Briefly, 500 µl methanol was spiked into 100 µl plasma of patients, and vortexed at 4°C and 200 g for 20 min. Samples were then centrifuged at 17,000 g for 10 min at 4°C. Finally, 500 µl supernatant was transferred to a new 1.5 ml tube and dried in a vacuum concentrator at 30°C. Prior to LC‐MS/MS analysis, residues were reconstructed with 200 µl mixed solution of methanol/water (50:50, v/v). Sample acquisition was carried out on an AB SCIEX 6500+ triple quadrupole mass spectrometer coupled with an HPLC system. The ion source parameters were as follows: ion spray voltage, 4000 V; temperature, 650°C; gas 1, 50 psi; gas 2, 40 psi; collision gas, 8 units; curtain gas, 35 psi. The MRM transition was as follows: tryptophan, m / z 205 → 146; 5‐HT, m / z 177 → 160; 5‐hydroxyindoleacetic acid (5‐HIAA), m / z 192 → 146; GSH, m / z 306 → 143; GSSG, m / z 611 → 306. Chromatographic separation was performed using a Waters ACQUITY HSS T3 column (100 Å column, 2.1 × 100 mm, 1.7 µm) at a flow rate of 0.3 ml/min maintained at 40°C. Mobile phase A was water solution containing 0.1% (v/v) formic acid, and mobile phase B was methanol. The elution gradient was as follows: 0.0–1.0 min, 10% B; 1.0–2.0 min, 10–20% B; 2.0–6.0 min, 20–50% B; 6.0–6.5 min, 50–95% B; 6.5–8.0 min, 95% B; 8.0–8.5 min, 95–10% B; 8.5–12.0 min, 10% B. The injector volume was 1 µl.
Binary and ordinal scales are presented as frequencies and percentages, and continuous data are presented as mean and standard deviation (SD). Differences between migraineurs with and without PFO were evaluated using the chi‐square test for categorical variables, Wilcoxon rank sum tests for ordinal variables, and t test for measured variable means with an appropriate test of the normal assumptions required. Multivariate logistic regression analyses including all variables (baseline, headache and laboratory variables) were carried out. To compare migraineurs with PFO closure to those without PFO closure, Student's t test, Wilcoxon rank sum test, chi‐squared test ( n > 5) or Fisher's exact test ( n ≤ 5) was performed. For the PFO closure data, pre‐ and postoperative data sets of patients undergoing PFO closure were analysed using a paired‐sample t test. The Bonferroni correction was used for paired comparisons. All statistical analyses were conducted using R (v4.0.1). A P ‐value (two‐tailed) of <0.05 was considered significant. Qualification and quantitative analyses of untargeted aqueous metabolomics were performed using Progenesis QI (v2.4.6; Waters). The identified compounds with relative intensities (peak areas) were exported as metabolite datasets for statistical analysis. All metabolomic data analyses were performed using R (v4.0.2), including data pre‐processing, univariate statistical analysis (hypothesis test and fold‐change calculation) and multivariate statistical analysis. Specifically, data pre‐processing was implemented including batch correction, imputation of missing values, normalization and standardization. Batch correction was based on the LOESS algorithm, and the imputation of missing values was based on the KNN ( k ‐nearest neighbours) algorithm. The median metabolite intensities in each sample were used for normalization, and standardization was based on the Pareto algorithm. In addition, principal component analysis (PCA) was carried out to check the data distributions and remove outliers. Then, a supervised model, orthogonal partial least squares discriminant analysis (OPLS‐DA), was carried out to establish a classifier model for the further selection of differential metabolites. A permutation test was applied to assess whether the OPLS‐DA models were overfitted. Based on the variable importance in projection (VIP) of each metabolite, differential metabolites that contributed significantly to the model differences were selected. Subsequently, differential metabolites were identified among the selected latent variables by integrated screening based on P ‐values and fold changes. To identify significantly altered metabolites, P ‐values were calculated using the paired t test (for metabolites whose intensities followed a normal distribution) or the Kruskal–Wallis test (for metabolites whose intensities did not follow a normal distribution). P ‐values ≤0.05 were considered statistically significant, and P ‐values >0.05 to <0.2 were considered a trend in untargeted metabolism analysis. In targeted metabolic validation, P ‐values <0.05 were considered significant. Fold changes were also calculated to reveal the degree and direction of metabolic variations. The score plot and boxplot were created by the ggplot2 package, and column graphs of differential metabolites were plotted by GraphPad Prism (v8.0.2). Heatmaps were plotted using the chiplot web tool ( https://www.chiplot.online/ ). 
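A minimal sketch of the preprocessing and differential-metabolite screening described above is given below in R. The matrix `X` (samples in rows, metabolites in columns), the factor `group`, the pairing order of pre- and post-closure samples, and the package choices (impute for KNN imputation, ropls for OPLS-DA) are all assumptions for illustration, not the authors' exact pipeline.

```r
# Illustrative sketch only. Assumes:
#   X     - numeric matrix, samples in rows, metabolites in columns
#   group - factor with levels "pre" and "post", ordered so that the i-th "pre"
#           sample is paired with the i-th "post" sample
library(impute)  # Bioconductor: KNN imputation
library(ropls)   # Bioconductor: OPLS-DA

# Impute missing values (impute.knn expects features in rows, hence the transposes)
X_imp <- t(impute.knn(t(as.matrix(X)))$data)

# Normalize each sample by its median intensity, then Pareto-scale each metabolite
X_norm <- sweep(X_imp, 1, apply(X_imp, 1, median), "/")
X_par  <- scale(X_norm, center = TRUE, scale = sqrt(apply(X_norm, 2, sd)))

# Unsupervised check of the data distribution and outliers
pca <- prcomp(X_par)

# Supervised OPLS-DA with a permutation test, then VIP per metabolite
mod <- opls(X_par, group, predI = 1, orthoI = NA, permI = 200)
vip <- getVipVN(mod)

# Univariate screening: paired t test and fold change (pre vs. post)
pre  <- which(group == "pre")
post <- which(group == "post")
pval <- apply(X_norm, 2, function(m) t.test(m[pre], m[post], paired = TRUE)$p.value)
fc   <- colMeans(X_norm[pre, , drop = FALSE]) / colMeans(X_norm[post, , drop = FALSE])

# Candidate differential metabolites: high VIP and p <= 0.05; fold change gives direction
hits <- data.frame(vip = vip, p = pval, fold_change = fc)[vip > 1 & pval <= 0.05, ]
```

A non-parametric test (e.g., Kruskal-Wallis) would be substituted for the t test for metabolites whose intensities are not normally distributed, as stated in the text.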
Pathway analysis of differential metabolites was implemented by the Metaboanalyst 5.0 platform. Correlation analysis between clinical indicators and metabolites was based on canonical correlation analysis (CCA). Specifically, correlation coefficients between the clinical and metabolite data matrix were calculated using Spearman's rank correlation test. The metabolite data were normalized by the sum intensity of each metabolite in all samples to eliminate the influence of dimensions. For correlation coefficients between the change of the metabolites and clinical indicators, the ratio (PFO non‐closure to PFO closure) was used for calculation. In addition, the association between mouse model and patients was also calculated by ratio (PFO non‐closure to PFO closure, PFO‐negative to PFO‐positive mouse). Correlation heatmaps were generated via an online tool ( https://cloud.majorbio.com/page/tools.html ).
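The clinical-metabolite correlation step could be sketched as follows; `clin` (one row per sample, numeric clinical indicators in columns) and `met` (metabolite intensities, same row order) are hypothetical objects, and this reproduces only the Spearman correlation matrix used for the heatmaps, not the full canonical correlation analysis.

```r
# Illustrative sketch only: sum-normalize each metabolite across samples, then
# compute Spearman correlations with each clinical indicator.
met_norm <- sweep(as.matrix(met), 2, colSums(met, na.rm = TRUE), "/")

cor_mat <- cor(as.matrix(clin), met_norm,
               method = "spearman", use = "pairwise.complete.obs")
# cor_mat[i, j]: Spearman rho between clinical indicator i and metabolite j,
# which can then be displayed as a correlation heatmap.
```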
Localization of metabolic alterations in the brain

To further investigate the distributions of the differential metabolites in the brain, we conducted DESI-MSI on different cross-sections of the mouse brain, including the frontal region, hippocampus layer and occipital region (schematic sections of these regions are shown in Fig. ). DESI-MSI demonstrated that 5-HT was elevated in the PFO-positive mice in the posterior head (occipital region and hippocampus layer), while there was no difference in the frontal region (Fig. ). Meanwhile, 5-HIAL, the degradation metabolite of 5-HT, was significantly increased in the posterior cortex of PFO-negative mice compared with that of PFO-positive mice (Fig. ). The ratio of 5-HIAL/5-HT was also higher in the posterior head of PFO-negative mice than that of PFO-positive mice (Fig. ). DESI-MSI also illustrated that levels of GSH and ARA were mainly elevated in the posterior structure of the PFO-positive models (Fig. ). GSH was also highly expressed in the frontal region of PFO-positive mice (Fig. ).
Fig. shows the flow of participants in this study. A total of 371 migraineurs (226 with PFO and 145 without PFO) were included in the analyses. The mean age of migraineurs with PFO was 34.19 (9.86) years and migraineurs without PFO had a mean age of 35.68 (9.66) years. There were no significant differences in sex, BMI, education level, or consumption of cigarettes, alcohol or coffee. Regarding headache characteristics, a higher proportion of migraineurs with PFO had aura than those without PFO (18.6% vs . 5.5%, P = 0.001, Table ). There were no significant differences in frequency, age of headache onset, intensity, family history, proportion of chronic migraine or MOH between migraineurs with and without PFO. Additionally, migraineurs with PFO showed significantly lower levels of cys‐C (0.78 ± 0.09 vs . 0.82 ± 0.11 mg/l, P < 0.001, Table ), calcium (2.33 ± 0.10 vs . 2.37 ± 0.09 mmol/l, P = 0.001, Table ) and potassium (4.15 ± 0.26 vs . 4.21 ± 0.33 mmol/l, P = 0.048, Table ) compared with those without PFO in the biochemical examination. Multivariate regression analysis revealed that patients with PFO had significantly more migraine with aura than those without PFO [adjusted odds ratio (OR a ) = 4.64, 95% confidence interval (CI): 1.91–11.23, P < 0.001] (Fig. ). The association between PFO and lower levels of cys‐C (OR a = 0.02, 95% CI: 0.00–0.33, P = 0.007) and calcium (OR a = 0.03, 95% CI: 0.00–0.79, P = 0.036) also remained significant in the regression model (details in Fig. ). However, the difference in potassium levels was not significant ( P = 0.211) after regression analysis (Fig. ). There were 56.6% (128/226) migraineurs with large PFO (Grade 3) among all migraineurs with PFO (Fig. ). Of the 97 migraineurs with large PFO who did not undergo percutaneous PFO closure at WCH, 61 finished 1 year of follow‐up. The other 31 migraineurs with large shunts underwent percutaneous PFO closure at the Department of Cardiology, WCH (Fig. ). We excluded four patients who were followed up for less than 1 year, six individuals who were lost to follow‐up, one patient who was diagnosed with lung cancer 11 months after PFO closure and one patient who had Hashimoto's thyroiditis with hyperthyroidism 6 months after PFO closure. Finally, we followed up 19 patients (two men and 17 women) who completed percutaneous PFO closure. None of the patients with PFO closure experienced any postoperative complications during the hospitalization and follow‐up. None of them developed MOH or other types of headaches, or took preventive treatments during the follow‐up. The mean age (SD) at operation of these patients was 35.33 (8.83) years, and four patients (21.05%) had aura. During 1 year of follow‐up, more patients with PFO closure reported a dramatic decline than those without PFO closure (63.2% vs . 31.3%, P = 0.026; Table ). Of the patients with PFO closure, 78.9% (15/19) experienced a decreased frequency of migraine attacks, whereas this occurred in only 45.9% (28/61) of those without PFO closure ( P = 0.024; Table ). Moreover, 13 patients with PFO closure achieved significant improvement 1 year after PFO closure. Details of the changes of frequency and severity (VAS score) in patients with and without PFO closure during the follow‐up are shown in Fig. . We then focused on those patients who underwent PFO closure and compared their clinical features before and after surgery. Table shows that the frequency of migraine attacks per month non‐significantly decreased after PFO closure ( P = 0.061). 
However, VAS was significantly decreased after PFO closure ( P < 0.001, Table ) and this result survived Bonferroni correction ( P adj = 0.010). Meanwhile, there was an improvement in cys‐C ( P = 0.007, P adj = 0.147, Table ) and calcium ( P = 0.011, P adj = 0.231, Table ) levels after PFO closure, though these differences did not pass the conservative Bonferroni correction.
To explore the potential underlying molecular mechanisms, we focused on migraineurs with PFO closure. Paired plasma samples from 10 patients who underwent PFO closure were included in the metabolic study. All of them exhibited a reduction of more than 50% in migraine attacks. Untargeted metabolomics identified 914 compounds in total from the patient plasma, including 530 in the positive mode and 384 in the negative mode. OPLS‐DA modelling, a typical supervised multivariate analysis, showed an obvious separation between patients before and after PFO closure surgery (Fig. ). Ten‐fold cross‐validation was performed to confirm the OPLS‐DA model with good robustness. Then, based on a comprehensive consideration of VIP in the OPLS‐DA model, fold change and P ‐value, 56 differential metabolites were identified and these are listed in Table . The heatmap in Fig. shows the up‐ and down‐regulation of the differential metabolites. Finally, pathway enrichment analysis based on the differential metabolites revealed that tryptophan metabolism was significantly down‐regulated after PFO closure. Meanwhile, sphingolipid metabolism and biosynthesis of unsaturated fatty acids were up‐regulated after surgery (Fig. ). To verify the metabolite shifts and provide clues for the specific effects of PFO on the brain, plasma and brain tissues from mouse models were also analysed by untargeted metabolomics under the same conditions after the identification of PFO by heart section (Fig. , upper panel, PFO‐positive; lower panel, PFO‐negative). After peak picking and database matching, 802 (positive mode = 615, negative mode = 187) and 941 (positive mode = 631, negative mode = 310) compounds were identified in the plasma and tissue analyses, respectively. Using the same approach, 110 and 53 differential metabolites from plasma and tissue, respectively, were selected for subsequent enrichment analysis (Tables and ). Pathway analysis indicated that GSH, cysteine and tryptophan metabolism pathways were enriched in both the plasma and brain tissue of PFO mice (Fig. for plasma and Fig. for brain tissue).
Based on the results of the above metabolomics analyses, differential metabolites and enriched pathways shared among the three analysed groups were selected (Fig. ). First, we observed that tryptophan metabolism pathways were enriched in both patients and mice. In detail, the metabolite of 5‐HT [5‐hydroxyindoleacetaldehyde (5‐HIAL)] was elevated significantly after PFO closure. Meanwhile, the precursor of 5‐HT [5‐ hydroxytryptophan (5‐HTP)] showed a decreased trend after PFO closure (Fig. ). The PFO‐positive mouse model also revealed increased 5‐HTP in both plasma and brain, and decreased 5‐HIAL in the brain, although this was not significant (Fig. ). Collectively, these results suggested the 5‐HT regulation was potentially associated with PFO closure. We also found that GSH, a marker of redox status, tended to be lower in the post‐operative patient plasma than in the pre‐operative plasma though the difference was not significant (Fig. ). However, levels of GSH in the plasma and brain tissue of PFO‐negative mice were significantly lower than those in PFO‐positive mice (Fig. ). In the ELISA validation of mouse brains, the GSH/GSSG ratio was significantly lower in PFO‐negative mice, indicating a higher redox state in PFO mice (Fig. ). Additionally, in concordance with findings of cys‐C (a cysteine proteinase inhibitor) in PFO closure, PFO‐positive mice showed a lower trend of cysteine and higher trend of gamma‐glutamyl‐cysteine (γ‐Glu‐Cys) than PFO‐negative mice (Fig. ). There was also a decreased trend of γ‐Glu‐Cys in the plasma of patients after PFO closure (Fig. ). Furthermore, metabolites associated with redox reactions including 8‐hydroxyguanine, arachidonic acid (ARA) and 15‐deoxy‐delta‐12,14‐prostaglandin J2 (15‐d‐PGJ2) also showed a higher trend in PFO‐positive mice (Fig. ). Fig. also shows that the metabolite alterations in patients after PFO closure correlated with those in mice (PFO negative to positive), indicating that the differential metabolites in the tryptophan and GSH metabolism pathways can be used to distinguish PFO mice.
Given the limitations of untargeted assays on quantitative analyses, we then performed targeted aqueous metabolomics. To further verify the alteration of metabolites caused by PFO closure, we also performed a self‐controlled comparison in the PFO closure group (pre‐operation and 1 year post‐operation) and the non‐closure group (baseline and 1 year follow‐up). There was a significant reduction of 5‐HT after PFO closure (left panel in Fig. ). There was no change in 5‐HT after 1 year of follow‐up in patients without PFO closure (right panel in Fig. ). Although the level of 5‐HIAA showed no considerable differences in both PFO closure and non‐closure groups during follow‐up (Fig. ), the ratio of 5‐HIAA to 5‐HT (reflecting tryptophan metabolism) increased notably after PFO closure, and also the ratio did not change in patients without closure during follow‐up (Fig. ). In addition, there were no differences in levels of GSH (Fig. ) and GSSG (Fig. ) in patients with or without PFO closure during follow‐up. However, the ratio of GSH to GSSG was significantly higher in the pre‐operative plasma than in post‐operative plasma (left panel in Fig. ), and it did not change in patients without PFO closure after 1 year of follow‐up (right panel in Fig. ). Collectively, PFO closure could increase the metabolism of 5‐HT and decrease redox activity. Our study also explored the correlation between 5‐HT, 5‐HIAA/5‐HT, GSH/GSSG and clinical indicators. Overall, in all patients with PFO (PFO with closure and non‐closure), no migraine indicators were associated with 5‐HT, 5‐HIAA/5‐HT or GSH/GSSG, though aura showed a mild positive correlation with GSH (Fig. ). In patients who underwent PFO closure (pre‐ and post‐operative dataset), there was a strong positive correlation between calcium and 5‐HIAA/5‐HT (Fig. ). Similarly, the frequency of migraine was positively correlated with 5‐HT and negatively correlated with 5‐HIAA (Fig. ). In terms of clinical and metabolic changes after PFO closure (post‐operative vs . pre‐operative), the decreases of VAS score and frequency were moderately correlated with an increase of 5‐HIAA/5‐HT and a decrease of 5‐HT (Fig. ). However, there was no obvious correlation between GSH/GSSG and the changes of frequency or VAS (Fig. ).
This study addressed the clinical differences and metabolic changes after PFO closure. We showed that: (1) PFO was independently associated with more aura and lower levels of cys-C and calcium; (2) migraineurs with PFO had headache improvement along with an increase of cys-C and calcium after PFO closure; (3) 5-HT and GSH metabolism might be associated with PFOs and the shift of the 5-HT and GSH metabolism pathways might reflect the migraine changes after PFO closure; and (4) there was more 5-HT and a higher redox level, mainly in the posterior head, among those with PFOs.

Potential biochemical biomarkers of migraineurs with PFO

PFO is a residual but transitory communication between the right and left atria that occurs in approximately one-fifth of the population (Giblett et al., ), and its prevalence is much higher in migraineurs (Nozari et al., ). Percutaneous closure of a PFO is considered a safe and curative surgery, and migraineurs with aura can show a great reduction and even complete migraine cessation following PFO closure (Mojadidi et al., ; Qi et al., ; Zhang et al., ). It has been hypothesized that PFO closure can allow small molecules associated with neurological diseases into the pulmonary circulation (Kutty et al., ); however, the substances bypassing PFO and avoiding pulmonary metabolism and associated mechanisms are still unclear. In our study, we found in the biochemical tests that the calcium level was associated with PFO independently and significantly altered after PFO closure. Calcium exists in the blood in three forms: bound to proteins, complexed with anions and in ionized form. Ionized calcium, accounting for about half of the total calcium, is the biologically active form in many disorders (Al-Hakeim et al., ). Aberrant calcium levels (high or low) can lead to neuropsychiatric disorders (Al-Dujaili et al., ). Although a potential correlation between serum calcium levels and migraine has not been systematically examined by large-scale epidemiological studies (Yin et al., ), it has been shown that abnormal regulation of calcium concentrations can alter neurotransmitter release and other cellular functions, which may explain the pathophysiological mechanisms in neuropathic pain and migraine (Montagna, ). S100B protein, a calcium-binding protein, is significantly increased during migraine attacks, suggesting that abnormal calcium levels are associated with astrocyte damage and blood–brain barrier (BBB) dysfunction (Teepker et al., ). In addition, some calcium channel blockers can be used to treat migraine headaches, revealing a close relationship between calcium and migraine (Bethesda, ). Similarly, we noted that the level of cys-C, a cysteine proteinase inhibitor, was also significantly lower in migraineurs with PFO than in those without PFO, and levels were elevated after PFO closure. It has been reported that cys-C levels are independently associated with migraine without aura, and that cys-C may be a potential biomarker for migraine (AkdaG & Uca, ). Nevertheless, to our knowledge, the level of cys-C has never been explored in migraineurs with PFO who have more aura. Cys-C is a neuroendocrine polypeptide that can be damaged by oxidative stress and is considered to play a role in many vascular diseases and various neural diseases (Amin et al., ; Tanriverdi et al., ).
Lower levels of cys‐C were reported in patients in the early stage of Alzheimer's disease than in controls, which indicated that cys‐C was connected to early neurological injury (Simonsen et al., ). Meanwhile, the reduction of cys‐C levels in patients with multiple sclerosis, amyotrophic lateral sclerosis and Creutzfeldt–Jakob disease also suggested that cys‐C had a role in modulating neuro‐metabolism (Nagai et al., ; Sanchez et al., ). Collectively, the lower levels of calcium and cys‐C might be related to PFO in migraine, and we further found that 5‐HT and redox metabolism might partially explain the changes in patients with PFO by metabolomics.
Previous studies found that the synthesis of 5‐HT increases in the brains of migraine patients, especially those with aura (Kalaycioglu et al., ). When 5‐HT accumulates at a high concentration in the brain, it might cause vasoconstriction, leading to neuronal signs of aura and then inducing a migraine attack (Frederiksen et al., ). Additionally, as an inflammatory mediator, 5‐HT could bind to 5‐HT receptors (5‐HTRs) to activate calcium responses causing chronic inflammatory pain and vascular contractions (Samaddar et al., ; Selli & Tosun, ). However, 5‐HT in the blood could be metabolized by monoamine oxidase (MAO) when passing through the lung, and mainly converted into 5‐HIAA and further into 5‐MIAA as well as sulphates or glucuronide conjugates (Jinsmaa et al., ). These metabolites of 5‐HT could exert neuroprotective effects against hypoxia–ischaemia brain damage and oxidative stress (Lee et al., ; Luo et al., ). Nevertheless, PFO allowed parts of the venous blood containing 5‐HT to avoid pulmonary circulation by RLS, which leads to longer retention of 5‐HT in the blood and brain (Kumar et al., ). Recent preclinical studies have shown that 5‐HT may be involved in pain facilitation (Bardin, ). Our results also showed that the metabolites of 5‐HT were elevated after PFO closure, and the increase of 5‐HT metabolism was correlated with the relief of migraine, suggesting improved clearance of 5‐HT in plasma by the pulmonary circulation after PFO closure, which might explain the effect of PFO closure in the treatment of migraine. Our study also demonstrated that PFO closure elevated calcium levels, which positively correlated with 5‐HIAA/5‐HT. It has been reported that 5‐HT could produce a strong calcium mobilization response (Haas et al., ). 5‐HT would induce an increase in cytosolic calcium coming from an influx of extracellular calcium in neurons (Genet et al., ). Furthermore, external calcium can modulate the activity of 5‐HTRs (Thompson & Lummis, ). High 5‐HT levels along with high S100B levels are also closely related to impaired neurological function (Jin et al., ). On the other hand, plasma cys‐C was increased with a decline of 5‐HT in the rats injected with Aβ, which reduced pain perception (Morgese et al., ). Cystatin produced by the CNS and crossing the BBB to the vascular space might bind endogenous MAO inhibitors to indirectly enhance the function of MAO and play a role in decreasing the concentration of 5‐HT (Amin et al., ; Medvedev et al., ). Collectively, the existence of PFO had certain links to a weakened 5‐HT degradation metabolism, and the improved clearance of 5‐HT by the lung after PFO closure might be one reason for the alleviation of migraine symptoms.
PFO‐mediated hypoxaemia occurs when deoxygenated venous blood from the right atrium mixes with oxygenated arterial blood (Mojadidi et al., ). Moreover, the lung is an important organ for the metabolism of reactive oxygen species (ROS) (Panus et al., ), while the transient opening of RLS caused by PFO might give ROS an opportunity to avoid pulmonary metabolism. GSH, the most abundant endogenous antioxidant, is an effective molecule for neutralizing ROS and often serves as a biomarker for a high redox status (Aquilano et al., ; Wu et al., ). Our results showed that aura had a mild positive correlation with GSH and that GSH was highly expressed in the brains of PFO patients, which might suggest there was a higher redox status in PFO patients. GSH would be further oxidized to GSSG and then take part in protein cross‐linking (PSSP) which would be finally degraded by cysteine protease (Giustarini et al., ; Sun et al., ). GSH/GSSG was elevated, suggesting adaptive up‐regulation of enzymes involved in GSH synthesis and recycling in response to increased oxidative stress. The disturbance of GSH/GSSG balance can also regulate the activity of cystatin (Katunuma, ). It has been proven that cys‐C is involved in regulating the oxidative state of neurons (Nishiyama et al., ). Higher levels of cys‐C and lower levels of antioxidant enzymes were found in rodents with a reduction of inflammatory and oxidative biomarkers in previous studies (Al et al., ). On the other hand, inhibiting the expression of cytoplasmic cys‐C could reduce damage caused by oxidative stress to balance redox reactions (Shen et al., ). Cys‐C could protect neurons from hypoxia by binding with neuroglobin (Wakasugi et al., ). Additionally, corticosteroids involved in suppressing the redox reaction could also activate MAO and elevate cys‐C without any decrease in the glomerular filtration rate (Risch & Huber, ; Zhu et al., ). Moreover, there is a correlation between calcium homeostasis and oxidative stress. Calcium omission could induce intense oxidative stress within cells (Reed et al., ). Calcium could also reverse the lead‐induced effects on antioxidant enzymes which are major contributing factors to neurotoxicity (Prasanthi et al., ), and vitamin D supplementation can also enhance brain energy homeostasis and modulate the redox state (Briones & Darwish, ). In our metabolomics analysis, GSH/GSSG, ARA, 15d‐PGJ2 and 8‐hydroxyguanine, which could reflect oxidative conditions (Charles et al., ; Lin et al., ; Wu et al., ), were also much lower in the brains of PFO‐negative mice than in those of PFO‐positive mice. In patients with PFO closure, we only found GSH/GSSG went down after PFO closure, whereas there was no correlation between GSH/GSSG and migraine frequency or severity. Collectively, we hypothesized that a higher redox status might explain the aura in migraineurs with PFO but not symptom relief in PFO closure.
To observe and localize the differential metabolites in the brain, we analysed mouse tissue by DESI‐MSI. The imaging results demonstrated significant variations in 5‐HT‐related metabolites and redox‐related metabolites across distinct brain regions, providing additional support for untargeted metabolomics analysis. Our animal samples showed that PFO‐related metabolites were mainly differentially expressed in the posterior head, supporting the impact of PFO on the posterior head to some extent. Previous clinical studies revealed that PFO had an effect on posterior circulation, which is one of the characteristics of PFO‐induced stroke (He et al., ). A clinical imaging study also demonstrated that PFO was associated with white matter lesions in patients with migraine, especially in those with occipital lesions and visual aura (Signoriello et al., ). PFO‐induced CSD often occurs initially in the posterior region of the head, which is an important pathogenic mechanism underlying migraine attacks (Nozari et al., ). Vasoactive substances bypassing PFO could also cause CSD associated with occipital cortex dysfunction, leading to visually evoked potential prolongation, visual field sensitivity reduction and visual aura, among others (Calic et al., ; Nozari et al., ; Shatillo et al., ). Furthermore, intermittent hypoxia can induce oxidative stress responses in the occipital region and increase levels of redox‐associated proteins that protect the brain in neurological chronic diseases with episodic manifestations (Dong et al., ). A sufficient blood oxygen supply is also important for shortening the duration of CSD and improving the local redox state of the brain in migraineurs (Takano et al., ; Wang et al., ). Collectively, it was suggested that the posterior region of the brain was the main area responsible for PFO‐induced migraine, and that PFO closure might ameliorate the accumulation of 5‐HT and redox state in the posterior head. Our study has some limitations. First, cohort study designs have inherent bias and the potential effects of some confounders, such as diet, hormonal influence and genetic heterogeneity, were extremely difficult to control. Second, the sample size of migraineurs who choose PFO closure was relatively small, and patients with more severe headache (based on VAS) chose to undergo PFO closure and completed the follow‐up in our study, which may limit the generalizability of the results to patients with moderate headache. These results should be further investigated in larger PFO closure studies. Third, although we demonstrated metabolic differences were mainly located in the posterior head, which was in accordance with previous studies, the sources of differential metabolites in the brain should be explored in future studies. Fourth, qualitative and quantitative analysis from mass spectra may result in the loss of information to some extent (Collins et al., ; Wieder et al., ). Fifth, blood gases and haemodynamics would be beneficial for understanding the physiological characteristics of patients before and after PFO closure, which should be explored in a future study. In addition, the development of PFO is influenced by many factors, which could affect the discrepancy between PFO‐positive and PFO‐negative mice. In future studies, PFO closure should be performed in large animals to study the effects of PFO on the brain. Despite the limitations of this study, our findings lay pivotal groundwork for future exploration of the mechanism underlying PFO‐associated migraine. 
Furthermore, the integration of clinical interventions, animal models, biochemical indicators, untargeted metabolomics, targeted metabolomics and DESI-MS imaging promises a more comprehensive understanding of PFO closure in migraine and the identification of biomarkers of potential therapeutic value.
In this study, we integrated metabolomics analyses of samples from patients and mouse models to explore the correlations between PFO and migraine and the potential mechanisms underlying the mitigation of migraine symptoms after PFO closure. Our findings suggested that aura and levels of cys-C and calcium could be biomarkers of migraineurs with PFO, and such patients should be recommended for further assessment by cardiologists. Clearance of 5-HT and deoxygenated blood by the pulmonary circulation might be the primary reason for the improvement of migraine symptoms after PFO closure. More in-depth research focused on the mechanism connecting PFO and migraine is needed, which may help in the evaluation and treatment of PFOs.
The authors have declared no conflicts of interest.
B.S.D., Y.S.T., W.L.L., R.Q.Y. and H.L. contributed to collecting clinical data and plasma of patients with migraine under the supervision of L.C. Y.J.L. and Y.C.C. contributed to screening of PFO. B.S.D. contributed to collecting samples of mice and extracting metabolites. B.S.D. and S.M.J. contributed to analysing clinical data. L.Z. and L.L.G. contributed to doing untargeted aqueous metabolomics under the supervision of M.G. X.L. and G.L. contributed to targeted LC‐MS/MS validation under the supervision of M.G. W.Z. and X.L. contributed to compound identification and metabolomics data analysis. B.S.D. and X.L. contributed to interpretation of data and drafted the work. B.S.D., X.L., A.J.P., M.G. and L.C. revised the manuscript critically. All authors approved the version to be published.
This research was supported by the National Natural Science Foundation of China (82271500) and West China Hospital of Sichuan University (2021HXFH066).
|
Patient-Centered Care for Adolescents and Young Adults with an Uncertain or Poor Cancer Prognosis: A Secondary Analysis of What Is Needed According to Patients, Caregivers, and Healthcare Providers | edf77efe-08d2-4013-b13d-76fc65c24c7d | 11854352 | Patient-Centered Care[mh] | Over the years, the healthcare system has evolved, with an emphasis on the individual and what is meaningful to them. Patient-centered care (PCC) implies that “an individuals’ values and preferences are elicited and once expressed, it guides all aspects of their healthcare and supports their realistic health and life goals” . A literature review by Scholl and colleagues identified 15 domains as components of PCC ( ). These domains encompass principles (i.e., fundamental propositions, which lay the foundation of PCC), enablers (i.e., elements, which foster PCC), and activities (i.e., specific PCC behavior) essential for providing tailored care to patients. They address the characteristics of healthcare professionals (HCPs), holistic patient perspectives, effective communication between patients and physicians, adequate support, patient and informal caregiver involvement, and the necessary conditions of the healthcare system . While PCC has the potential to enhance the quality of healthcare, implementing it can be challenging. As it focuses on a specific individual, tailoring is required to fit a specific situation. This puts pressure on an already overburdened system, where time is limited and the workload is high. Currently, there is not an optimal comprehensive care pathway for addressing all psychosocial needs. It can be difficult to determine who is responsible for taking care of certain issues, and long-term care is not always well coordinated or available . A good example of PCC is the care provided to adolescents and young adults (AYAs—those diagnosed with cancer for the first time between 18 and 39 years old in the Netherlands). These patients report age-related issues, including difficulties in establishing an identity, impaired self-esteem, social isolation, issues with fertility, and financial hardship. Age-specific care programs have been developed to meet patients’ unique needs in order to provide appropriate care tailored to the developmental phase of an AYA patient, including a focus on psychosocial aspects . Dutch AYA care is nurse-led and utilizes specific educational modules for HCPs . Many AYA cancer patients report similar priorities during treatment: being able to live a normal life, accomplishing developmental milestones, and spending time with those they care about . For the majority of AYAs, age-specific care is provided during curative treatment or survivorship. However, 15 to 20 percent of AYA patients live with an uncertain or poor cancer prognosis (UPCP)—that is, those with advanced cancer, without a reasonable hope of cure, who will likely die prematurely from their disease but do not face an immediate threat of premature death. Because this group is a small subset within AYA oncology, little is known about what these patients perceive as important in their healthcare . Current programs do not specifically focus on the needs of this recently defined heterogeneous subgroup of AYA cancer patients. Our recent studies show that the uncertainty experienced by these patients results in distinct disease trajectories and coping mechanisms when compared to other patients, potentially leading to unique care needs . 
Moreover, HCPs also encounter difficulties when caring for this group, and informal caregivers report various challenges in their daily lives due to their role as caregiver . Although optimal PCC requires a holistic approach, no research has combined the perspectives of the AYAs with a UPCP, their informal caregivers, and HCPs. Since the literature on AYAs with a UPCP is scarce, a qualitative approach can be used for the in-depth exploration of the unique experiences and needs associated with PCC of patients, informal caregivers, and HCPs, in order to further enhance the patient-centered care model for this specific AYA group. This paper provides a secondary analysis of the input from the earlier studies conducted on AYA patients with UPCP, informal caregivers, and their HCPs [ , , ], and uses this to expand Scholl's model of PCC for this specific patient group. The aim was to synthesize insights from patients, informal caregivers, and their HCPs to provide an overview of the preferences and needs related to delivering and receiving care for this unique patient group and those around them. These data can be used to provide practical guidance for current healthcare, highlighting specific considerations for AYA patients alongside general factors essential for effective care. The focus is on which of the preferences and needs reported by the three stakeholders are specific to AYA patients, but this paper also considers generic factors that may still be important in providing appropriate and sufficient care.
2.1. Sample and Procedure
In this study, data from the INVAYA-study were used, in which interviews were conducted with AYAs with a UPCP, their informal caregivers, and HCPs. The INVAYA-study aimed to gain insight into and understand the experiences of the three subgroups through in-depth interviews. The methodology used to conduct the interviews with AYA patients, their informal caregivers, and HCPs was reported in previous articles [ , , ]. Patients were signed up via their healthcare provider and were invited by the researcher (VB, psychologist) to participate. Patients were allowed to invite their informal caregivers, who were contacted by the researcher after 1–2 weeks. HCPs were invited via purposive sampling. Interviews were planned and conducted either face to face or via Microsoft Teams due to the COVID-19 pandemic. The interview guides for all three stakeholder groups are reported in the ( , and ). In total, 46 AYA patients with a UPCP participated in the interviews. Subsequently, 39 informal caregivers were interviewed, including 13 partners, 12 parents, 7 friends, and 7 siblings. Lastly, 49 HCPs with different specializations were interviewed. The interviews with patients and informal caregivers primarily focused on their healthcare needs and the impact of the disease. The interviews with HCPs mainly addressed the challenges they face while caring for AYA patients with a UPCP. This study was approved by the Institutional Review Board of the Antoni van Leeuwenhoek hospital in Amsterdam, the Netherlands (IRBd20-205).
2.2. Data Analysis
For the primary analysis of the interview transcripts, elements of the grounded theory of Corbin and Strauss were used . For both the primary and secondary analyses, reflexive thematic analysis by Braun and Clarke was used . In the primary analysis, data from the interview transcripts concerning the impact of the disease, healthcare experiences, and challenges in caring for these patients were coded by two reviewers (MR, VB) [ , , ]. Additionally, to achieve the aim of this article, a secondary analysis of these codes was conducted to identify those that were related to healthcare. These were then categorized as either specific to (or having greater impact at the age of) AYA patients or as more generic themes applicable to all (cancer) patients (MR, WG). This was discussed until consensus was reached. Afterwards, codes were categorized into themes, and all themes were divided among the fifteen domains of Scholl's model. Themes could be added to more than one domain if they were related to multiple aspects of PCC. This method aligns with the Framework Analysis by Ritchie and Spencer . QSR NVIVO was used to conduct the qualitative analysis . Descriptive analyses were conducted using SPSS v29.0.
The sociodemographic characteristics of AYA patients with a UPCP, their informal caregivers, and HCPs are reported in . Codes that were age-specific among AYA patients with UPCP, informal caregivers, and their HCPs are presented in . The associated quotes are shown in . General codes are reported in . Some topics in the tables are negatively phrased. These topics have not yet been sufficiently implemented in the current system of care to ensure good PCC. Once they are properly applied, they function as part of the category to which they belong (principle, enabler, or activity).
3.1. AYA-Specific Care Needs
PCC among AYAs with a UPCP should focus on addressing all impacted domains of life without reluctance (e.g., using soft or hard drugs, sexuality or fertility), as these topics are often overlooked or left undiscussed by HCPs due to the poor prognosis. Furthermore, they want tailored information and support, which is an important factor of PCC. Both of these should be applicable to their situation: premature mortality and a young age. AYA patients with a UPCP report a need for clear communication from their HCPs regarding their disease, prognosis and end-of-life care. This can help them to make age-related life decisions. PCC for informal caregivers should focus on receiving appropriate information regarding the specific issues and situation of the AYA patient. This information can help informal caregivers to understand the AYAs, give them insight into how to support the AYAs adequately, and enable them to provide emotional support. Furthermore, this can help them to support the patients with making decisions. Regarding care for themselves, partners of the AYAs want to be able to discuss fertility with an HCP but also receive information on how to cope with the disease and treatment as an informal caregiver. HCPs are motivated to provide the best care for these patients but face multiple challenges when trying to apply PCC. They perceive AYAs with a UPCP as a challenging group and would like specific training on how to provide the best possible care. Furthermore, they express a need for emotional support, as they empathize with this group and find it burdensome to take care of young patients who might die prematurely. HCPs struggle to balance asking enough questions and providing enough support without belittling the AYA. The tailored information (e.g., life expectancy and possibilities to start a new study, job, or hobby) requested by patients and informal caregivers is not always available, making it impossible for HCPs to provide this information and advice. Sensitive or personal topics can be difficult to discuss or do not seem appropriate due to the poor prognosis, leading to some of them not being brought up by HCPs. In addition, providing PCC can be challenging when managing the dynamics of different stakeholders in the room.
3.2. General Care Needs
Aside from the age-specific preferences, several general themes emerged from the interviews ( ). The impact of the general topics could be more extensive for AYA patients with a UPCP; however, these needs may also exist among other cancer patients, such as AYAs treated with curative intent and older adult patients. Nevertheless, it is important to address these topics, as they do seem important to AYA patients with a UPCP. HCPs who invest time, are familiar with the AYAs' situation, are empathetic, can be trusted, and adopt a holistic approach are important for PCC in AYA patients with a UPCP. To receive appropriate care, patients adjust their behavior accordingly to ensure HCPs will maintain their efforts. Patients want the opportunity to discuss alternative treatment options, as well as the end of life and their prognosis, at a time that is appropriate to them. An essential aspect of PCC is knowing who to approach in case of questions or the need for additional support. Furthermore, they would like information on which support is available to them. Shared decision-making is another important aspect. Lastly, it is important that HCPs ask specific questions that are relevant and tailored to the AYAs' situation, rather than asking general questions. Informal caregivers expect knowledge and empathy from HCPs. PCC should focus on the quality of life and the possibilities of the patients. Informal caregivers do not want to draw too much attention to themselves. They want to know what support is available and who to turn to in case of any questions. It is important to have regular check-ins by HCPs regarding their own well-being and to ensure they feel acknowledged and supported. Additionally, they want information on how to support the patients. When necessary, they would like their own psychological trajectory, independent of the patient. They want the end of life to be discussed, but only when unavoidable. When HCPs provide PCC, they mention that being unable to give patients any certainty is challenging. Balancing being open versus maintaining a professional distance is difficult, and they also struggle with knowing what needs to be addressed. They feel they lack the knowledge that is expected of them (e.g., estimating whether a patient's reaction is normal or pathological, or information about alternative therapies). They feel unaware of available support, and it can be difficult to assess what is necessary. HCPs aim to avoid putting too much pressure on the patient and seek opportunities to empower them where possible. They also fear taking away all of a patient's hope, and can struggle to work with multiple HCPs, as it can be unclear which responsibilities each healthcare provider has.
This study highlights the distinct experiences and challenges faced by AYAs with a UPCP, their informal caregivers, and HCPs. We aim to guide adaptations in healthcare for AYAs, focusing on three key pillars: education, healthcare, and research ( ). By creating a "self-learning healthcare system"—where these pillars continuously give each other input to generate development—a solid foundation for these three stakeholders can be established, ensuring their voices are heard and their needs are addressed. To meet the needs of AYAs with a UPCP and their informal caregivers in clinical practice, three steps are essential: (1) to ensure HCPs are aware of these patients' and informal caregivers' potential needs, are confident in conducting need assessments, provide tailored care, and have knowledge regarding available services and referral pathways; (2) to integrate AYAs and their informal caregivers into the current care model and allocate appropriate time for them; (3) to empower AYAs to actively express their healthcare needs and preferences to receive sufficient care . The first key pillar is education, aimed at equipping HCPs to identify AYA patients with a UPCP, understand key discussion topics, and know how to provide the best care for these patients and their informal caregivers. This can be achieved by developing an e-learning module focusing on patients' unique challenges: being young and dealing with premature death, loss, uncertainty (e.g., which milestones can one accomplish), and feeling alone within the healthcare setting and among peers. HCPs should learn to initiate discussions of topics like fertility, sexuality, and premature death, and consider how to discuss life decisions and wishes with patients. Building trust, sharing decision-making, and learning how to make appropriate referrals are essential for PCC in AYAs with a UPCP and their informal caregivers. These factors are important among all cancer patients; however, among AYAs with a UPCP, they are crucial. Having a relationship built on trust can guide patients through the uncertainty and the healthcare system, with which most of them are unfamiliar . With proper training, HCPs can become more confident in their own abilities, lessening their burden and making this type of healthcare more manageable . To effectively integrate care for AYAs with a UPCP, as mentioned in pillar two, we can learn from the Dutch AYA care model and its care pathway, designed by the Dutch AYA Care Network . This provides HCPs with a checklist for essential questions and actions. However, it requires adaptations for AYA patients with a UPCP, starting with a tailored checklist used to define this group. Then, these patients should be referred to multiple disciplines for additional support ( ). It can be challenging for a healthcare provider and an AYA to talk about the end of life, which can complicate the referral to a palliative care team . However, this is necessary to obtain a holistic understanding of one's needs and quality of life aspects, which should then guide the medical trajectory, rather than solely focusing on the remaining time left. Preferably, all AYAs with a UPCP should be referred at some time to a psychologist and/or palliative care physician specialized in AYA patients. A similar AYA-specific palliative care model, developed by the Princess Margaret Cancer Centre, has shown promising results, including improvements in symptom management and end-of-life planning .
To implement an effective referral network, it is crucial to clearly define the responsibilities of each healthcare provider. Since AYA patients may not always be aware of the available support, HCPs should inform them about and refer them to the appropriate resources, including emotional and psychological support such as Acceptance and Commitment Therapy (ACT) or Managing Cancer and Living Meaningfully (CALM) therapy [ , , , ]. Additionally, practical support should address needs such as employment, finance, diet, exercise, childcare, sexual health and fertility, complementary care, epilepsy management, and support for informal caregivers . Since most AYAs with a UPCP experience an erratic disease trajectory, additional support may not always be necessary, indicating the need for prompt referral when issues arise. According to pillar three, the support for these patients should not only focus on attempting to resolve all difficulties: this patient group is struggling with issues that may not be resolvable, or that are appropriate in this abnormal situation. Empowering patients or reinforcing their positive characteristics may also be an appropriate method to adequately support them. A peer support network can help to normalize emotions, offer advice, and provide mutual support. Furthermore, support for informal caregivers is limited and often requires a psychiatric diagnosis in the Netherlands, while more guidance and accessible support are needed. We have to learn from other diseases and initiatives to explore the possibilities within the Dutch healthcare system. Clinical practice can generate topics for further research in this field. Currently, we are conducting research to longitudinally and exploratively examine what challenges AYAs with a UPCP are dealing with and what needs may exist, as they face a constantly changing disease trajectory. We aim to gain information on how these patients cope with their disease and their prognosis by including factors such as hope, meaning and purpose, and resilience. By gaining insight into the patients who are more prone to problems and establishing at which point these issues occur in this trajectory, appropriate support can be provided in a timely manner. This knowledge can complement the e-module, allowing the care for this group to be adjusted accordingly. This can also result in the healthcare system being less burdened: by gaining more insight into the most common issues and identifying the groups most affected by them, we can provide more targeted support. This ensures that actions are taken only where necessary, reducing the workload. Our research contributes to shaping AYA care in the Netherlands for patients with a UPCP, incorporating different perspectives to illuminate both preferences and barriers. However, this study has several limitations. Conducting a secondary analysis can lead to hindsight bias, but may also lead to the absence of relevant data because of adherence to codes analyzed in the previous studies. The interview scripts for all three stakeholders were different and specifically tailored, and were also collaboratively developed with input from stakeholders, as detailed in previous articles. For example, interviews with informal caregivers focused on the challenges of supporting someone with a UPCP. As a result, the code "essential characteristics" was not identified in their transcripts, leaving that section incomplete.
This deductive coding process, which derived codes directly from the data, underscores the need for further research into the preferences and needs of the stakeholders. Furthermore, the input of informal caregivers of AYAs with a UPCP on PCC was limited in this article, since the interviews focused more on their caregiver burden, and it is possible that their perspectives were not thoroughly integrated into the model. This highlights the need for research on the PCC needs of this unique group of informal caregivers. The sample used is large for qualitative research, so the views expressed can be considered well represented. A limitation of this study is that it is difficult to generalize the findings to non-Dutch healthcare settings. It is possible that patients living in a different type of society (individualistic versus collective) have different needs. It is advised to perform this study in other countries to examine the discrepancies. The literature shows that cultural differences exist regarding disclosure of diagnosis and prognosis, the implementation of traditional healthcare, decision-making [ , , ], discussing the end of life, and the involvement of informal caregivers . Cultural differences emphasize the need for an open attitude that approaches patients as individuals and takes a personalized approach, as highlighted by this study. Culture is an important aspect for the AYA population, and the role of loved ones within the disease trajectory is culturally dependent. Additionally, AYA patients span generations X, Y (millennials) and Z, who might have different needs regarding communication. It is important to examine these different needs and to use these insights when implementing patient-centered care .
To provide PCC for AYA patients with a UPCP, it is essential to make adaptations to the current AYA care pathway and focus specifically on AYAs with a UPCP, as this study has shown that these patients and their informal caregivers have unique and age-specific care needs, and HCPs report challenges in providing this specific care. When one's life expectancy is uncertain, we argue for a focus on quality of life that revolves around the activities or goals that are important to the individual, in order to provide direction amid the uncertainty rather than planning life solely based on the remaining time. The results of this study highlight that it is important to give informal caregivers more access to care by providing them with a contact person whom they can approach for additional support or questions. Furthermore, integrating PCC for AYAs with a UPCP into the current healthcare system seems to require educating HCPs about the information and communication needs of the AYAs and empowering the AYAs to express their needs and navigate the healthcare setting more effectively. Identifying AYA patients with a UPCP is the initial step to aligning with their healthcare needs, with a focus on holistic healthcare and issues arising in essential life domains. Finally, it is important to establish an effective referral network with defined responsibilities for each healthcare provider to optimize access, coordination, and continuity of care.
|
AI improves accuracy, agreement and efficiency of pathologists for Ki67 assessments in breast cancer | 8d370a27-094b-4f17-b739-1003ff2dbe46 | 10787826 | Anatomy[mh] | Ki-67 immunohistochemistry (IHC) serves as a reliable marker of cell proliferation and is widely used to evaluate the aggressiveness and prognosis of human tumors. Notably, Ki-67 has been adopted for prognostication in breast cancer, with elevated Ki-67 expression correlating with poorer prognosis , . The Ki-67 proliferation index (PI) in breast cancer is a measure of the percentage of tumor cells with nuclear immunoreactivity relative to the total number of malignant cells assessed . A meta-analysis of 64,196 patients revealed that higher Ki-67 PI values are associated with worse overall survival in breast cancer, with 25% being a cutoff of strong outcome prognostication . The monarchE committee reported that among patients with early-stage HR+, HER2− breast cancer, and nodal involvement, the addition of abemaciclib to hormone therapy significantly improves cancer-specific free survival and decreases the risk of disease recurrence – . For tumor stage 1 to 2, nodal stage 0 to 1, ER+/HER2− breast cancer, the International Ki-67 in Breast Cancer Working Group’s (IKWG) consensus in 2021 recommended using Ki-67 to aid in the decision-making of adjuvant chemotherapy only for cases with a very low (< 5%) or very high (> 30%) PI due to substantial inter-rater variability within this range , . The panelists of the St. Gallen International Consensus Conference in 2021 generally support this recommendation . The monarchE phase III clinical trial studied the impact of a high Ki-67 PI on disease recurrence in a cohort of patients with HR+/HER2− node-positive breast cancer with high-risk clinicopathological features (at least 4 positive lymph nodes, or 1 to 3 positive lymph nodes with either tumor size ≥ 5 cm or histological grade 3 disease). Their analyses demonstrated that a Ki-67 PI ≥ 20% in patients treated with endocrine therapy alone was associated with a significantly increased risk of recurrence within three years compared to patients with lower Ki-67 expression , . Following this, the American Food and Drug Administration and Health Canada approved the use of abemaciclib (CDK4/6 inhibitor) for patients with HR+/HER2− high-risk early breast cancer and a Ki-67 PI of ≥ 20% . In a recently published landmark study based on 500 patients, it was demonstrated that a PI score threshold of < 13.25% derived from Ki-67 slides effectively identified women with luminal A breast cancer who could be safely treated without local breast radiation therapy. This underscores the clinical significance of Ki-67 as a marker with significant promise in guiding management decisions for breast cancer patients. The current gold standard for quantifying Ki-67 PI is to manually evaluate at least 500 malignant cells based on IKWG recommendations , . However, this method is labor-intensive, time-consuming, and prone to poor inter-rater reproducibility and errors , . As a result, it is hard to standardize and use Ki-67 for clinical assessments. As shown in the recommendations from the IKWG , , the assessment by the pathologist is most reliable for PI values below 5% and above 30% (the 5 to 30% range is subject to the most interpretation variability). The Canadian Association of Pathologists recommends that a second pathologist assess PIs in this range, or use a computer assessment tool to improve robustness . 
Considering that this range is critical for treatment decisions, its reliability must be improved. The recent emergence of digital pathology and high-performance AI algorithms offers the possibility that automated PI scoring can overcome these challenges by counting cells accurately and efficiently. Several AI-based Ki-67 assessment tools have been developed – , and their advantages are becoming increasingly evident. Several comparative studies have reported the role of AI-assisted assessments of Ki-67 PI in breast cancer , , . These studies demonstrated that AI-aided assessment of Ki-67 could achieve a lower mean error and a smaller error standard deviation ; however, the impact on inter-rater agreement is less clear. Additionally, while these studies have encompassed broad PI ranges from 0 to 100%, the effect of AI assistance in the clinically crucial 5 to 30% PI interval has not yet been studied. Herein, we conducted a large-scale, international study that analyzed the effects of AI assistance on key aspects of pathologists' work, including accuracy, inter-rater agreement, and turnaround time (TAT) in the context of Ki-67 scoring for breast cancer. Our focus was on assessing these metrics within the 5 to 30% PI range to better understand the implications and usability of AI-assisted Ki-67 evaluations. Additionally, we gathered insights into pathologists' perspectives, trust levels, and readiness to adopt AI technologies, highlighting the importance of user acceptance. This study provides a strong foundation for understanding the future impact and potential of AI tools for Ki-67 scoring in the daily routine of pathologists.
Ethics approval for the study was obtained from the Toronto Metropolitan University Research Ethics Board (REB 2022-154). All experiments were performed in accordance with the Tri-Council Policy Statement 2 for the ethical conduct of research involving humans. Participants voluntarily consented to participate and to share contact information if they wished.
A subset of ten tissue microarrays (TMAs) from the Toronto–British Columbia trial, which was composed of node-negative patients above the age of 50 years with invasive breast cancer , was used for this study . TMAs were constructed using a 0.6 mm tumor core procured from formalin-fixed, paraffin-embedded specimens. TMA sections, with a thickness of 0.5 μm, were stained using a 1:500 dilution of SP6 (ThermoFisher Scientific, Waltham, MA, USA)—a Ki-67 antibody—and counterstained with hematoxylin. The study incorporated ten TMAs with high tumor cellularity, averaging 2093 neoplastic cells per TMA, with a PI range of 7 to 28% . This range, which poses a challenge for pathologists, encompasses the clinically relevant PI cutoffs identified in prior studies , , .
A deep learning-based AI tool for IHC quantification, UV-Net, developed by Toronto Metropolitan University, was used in the study . This tool detects neoplastic cells in IHC-stained tissue and differentiates Ki-67 positive from Ki-67 negative tumor cells. Its underlying architecture, a modified U-Net, includes additional connections for densely packed nuclei and replaces the standard 3 × 3 convolutional layers with 'V-Blocks'. These V-Block connections maintain high-resolution nuclear features for precise differentiation between nuclei; each V-Block inputs n channels and outputs 2n channels, forming a 'V' shape across four successive stages. The AI tool was trained using 256 × 256 RGB patches of whole slide images (WSIs) from St. Michael's Hospital and the open-source "Deepslides" dataset, acquired with ×20 Aperio AT Turbo and ×40 Aperio ScanScope scanners, respectively. Images were annotated with single-pixel centroid markers distinguishing Ki-67 positive and Ki-67 negative tumor nuclei , defining positive nuclei as any brown color above the background, following the IKWG's recommendations , . Single-pixel markers were extended into circular areas using a Gaussian function; this allocated the highest value to the center of each nucleus, incorporated more contextual information, and improved the efficiency of the training process. A Huber loss function was used to regress and predict the nuclear centroids. For a given image, the AI tool generates an automated Ki-67 positive and Ki-67 negative overlay (Fig. ), providing an accessible visual interpretation along with the automated PI calculation. The generalizability of UV-Net was previously validated on multi-institutional datasets from 5 institutions , including breast cancer WSIs and TMAs. UV-Net consistently outperformed other architectures across all image variations, registering an average F1-score of 0.83 on expertly annotated data. In comparison, alternative architectures achieved scores between 0.74 and 0.79. The images on which UV-Net was trained differed from those used in this study, originating from datasets with different scanners and institutions. None of the pathologists involved in this study participated in annotating the training or validation datasets.
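To make the labelling scheme described above more concrete, the sketch below shows one way single-pixel centroid annotations can be expanded into Gaussian regression targets. It is a minimal illustration only: the patch size matches the 256 × 256 patches mentioned above, but the function name, the `sigma` value, and the idea of building one such map per class (Ki-67 positive and Ki-67 negative) are assumptions for illustration, not UV-Net's actual implementation details.

```python
import numpy as np

def gaussian_target(shape, centroids, sigma=3.0):
    """Expand single-pixel nucleus centroids into Gaussian 'blobs'.

    shape     : (height, width) of the training patch, e.g. (256, 256)
    centroids : list of (row, col) nucleus centre coordinates for one class
    sigma     : spread of the Gaussian in pixels (illustrative value)
    """
    target = np.zeros(shape, dtype=np.float32)
    rows, cols = np.indices(shape)
    for r, c in centroids:
        blob = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
        target = np.maximum(target, blob)  # value peaks at 1 on each centroid
    return target

# Example: two annotated Ki-67 positive nuclei in a 256 x 256 patch
positive_map = gaussian_target((256, 256), [(40, 60), (200, 128)])
```

In a training setup of this kind, the network output would be regressed against such maps (here with a Huber loss, as stated above), and detected nuclei could then be recovered as local maxima of the predicted maps.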
A cross-sectional study was performed using an anonymous, self-administered, and structured online survey developed using Qualtrics™, which included hyperlinks for viewing digitized TMAs on the cloud through PathcoreFlow™, a browser-based commercial image management solution and viewer for digital pathology . The AI tool for Ki-67 scoring was integrated into PathcoreFlow™ using an Application Programming Interface. The tool provided an overlay of the Ki-67 positive and negative nuclei and calculated PI scores (Fig. ). Participants were presented with a digital invasive breast cancer TMA stained for Ki-67 for each question and were asked to assign a Ki-67 score by entering a percentage value into Qualtrics™. Examples of questions with and without AI assistance are shown in Supplementary Figs. and . Each Ki-67 TMA was reviewed by respondents twice—once without AI assistance and once with AI assistance—resulting in a total of 20 assessment questions. Participants were not explicitly told to use the AI, but rather to observe the AI results and estimate their PI score. They were instructed to compute the Ki-67 PI by counting individual cells, using a denominator of 500 cells, and to regard any brown staining beyond the background as positive, in line with current guidelines , . They were also guided to spend approximately the same time they would during standard procedures, with no limit on the time for assessments. Each pathologist used a distinct viewer from a separate workstation. To minimize bias, the order of cases was randomized, ensuring that TMAs with AI assistance were not shown immediately before or after the same TMA without assistance. Additionally, the AI-assisted images were altered in orientation to look different from the unaided images. At the end of the study, participants were requested to provide their demographic information and respond to inquiries regarding their perspectives on AI.
Participants were recruited through the professional networks of the authors between September and November 2022. Contact channels included pathology associations, local pathology residency programs, pathologist colleagues, and social media platforms (LinkedIn, Twitter). Eligible participants were trained pathologists with experience in Ki-67 PI scoring. The study included all participants who provided consent and identified themselves as pathology specialists. There were no limitations based on gender, age, or employment status, and only those who finished the study were considered; in total, there were 116 completed responses. Spurious responses, defined as outliers with large PI errors (more than 20% on a single response), were excluded from the analysis (N = 26 participants). Consequently, the main analysis included 90 respondents, all experienced in using digital pathology. Demographic characteristics are described in Supplemental Table . The participants' median age fell within the 40 to 49 years range; however, the most common age group was 30 to 39 years, accounting for 34.4% of the respondents. While the median work experience fell within the 10 to 19 years range, the most prevalent work experience category was 0 to 9 years, representing 26.7% of the total. The majority of respondents were male, with many being retired clinical pathologists from North America. Among those currently working, most practice in academic health sciences centers.
The ground truth Ki-67 PI scores for the 10 TMAs were determined using the gold standard manual counting method, where any brown staining above the background level was deemed positive, following current guidelines , . Each TMA was divided into five rows and five columns, creating 400 × 400 pixel tiles, and annotations were made in each region. Nuclei were annotated at the center of each cell, with tumor cells marked as Ki-67 positive if any discernible brown staining above the background was observed and the cell border was visible; otherwise, they were marked as Ki-67 negative. In cases of overlapping tumor cells, each cell was marked individually if its borders were discernible. An anatomical pathology resident (N.N.J.N.) performed the manual annotations, which were verified by a breast pathologist (S.D.). Ground truth PI scores were calculated from these manual annotations. The ground truth PI scores of the ten cases ranged from 7 to 28%.
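Since the ground truth PI is simply the proportion of positive tumor nuclei among all annotated tumor nuclei, a small sketch of how it could be computed from tile-level counts may be useful; the data layout and numbers below are hypothetical, not the study's annotation files.

```python
def proliferation_index(tile_counts):
    """Compute a Ki-67 PI (%) from per-tile annotation counts.

    tile_counts : iterable of (n_positive, n_negative) tuples,
                  one per annotated tile of the TMA (hypothetical layout)
    """
    n_pos = sum(p for p, _ in tile_counts)
    n_neg = sum(n for _, n in tile_counts)
    total = n_pos + n_neg
    return 100.0 * n_pos / total if total else 0.0

# Example with made-up counts for three tiles
print(proliferation_index([(12, 73), (9, 88), (15, 64)]))  # ~13.8%
```

For instance, under the IKWG recommendation of assessing at least 500 malignant cells, 100 positive nuclei out of 500 assessed cells would give a PI of 20%, the risk-stratification threshold used in this study.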
Statistical analyses were performed to assess the PI scoring error, inter-rater agreement, and TAT among pathologists when using the AI tool, compared to a standard clinical workflow (i.e., without AI). The experiment involved two groups: a control group where pathologists evaluated Ki-67 PI using standard clinical methods, and an experimental group where the same pathologists used the AI tool to assist with Ki-67 PI assessment on the same TMAs. For each participant, two PI estimations and TATs were obtained per TMA, resulting in 900 paired assessments (90 pathologists × 10 cases). For every assessment, several metrics were recorded, including the clinician-estimated raw PI score, the PI error (the absolute difference between the estimated and ground truth PI), and TAT, which denotes the time taken to score the TMA. The paired Wilcoxon signed-rank test was used to compare the differences between the two groups, with significance determined based on the median values of the paired differences. This test was chosen due to the non-normal distribution of the data, as indicated by the Shapiro–Wilk test. All statistical analyses were two-sided, with significance set at p < 0.05. PI scores and PI errors were assessed with and without AI assistance, using continuous and binary values. PI scores and PI errors were first treated as continuous values and summarized by the mean and standard deviation. Box and bar plots were used to visually depict case-based and sub-demographic PI errors, respectively. PI scores and errors were additionally binarized and assessed using low-risk Ki-67 PI < 20%, and high-risk ≥ 20% stratification . The consistency of scoring among pathologists, with and without AI assistance, was examined using both continuous and binary metrics. For the continuous analysis, the Two-Way Random-Effects Model for single-rater consistency agreement was chosen to assess the inter-rater agreement using the Intraclass Correlation Coefficient (ICC) , . This model was selected since all cases were evaluated by all raters. The choice of the single-rater model stemmed from the clinical reliance on a singular clinician's decision for Ki-67 scores, rather than averaging scores from multiple clinicians . The ICC between the pathologists' PI scores and the ground truth PI was assessed twice: once with and once without AI assistance. Complementary to ICC, Krippendorff's α was calculated to measure inter-rater agreement and chosen for its adaptability in handling continuous data . Bland–Altman and linear regression plots of the PI scores were incorporated to supplement the measure of inter-rater agreement, with parameters such as Pearson's correlation coefficient, slope, offset, mean, and limits of agreement being considered. Using binarized PI scores (with scores ≥ 20% assigned a 1, and scores < 20% a 0), the percent agreement and Fleiss' Kappa were calculated for both groups. The TAT among pathologists was considered the time in seconds to perform the PI score estimation, starting from the moment they began examining the case to the point when the PI score was saved. Additionally, the percentage of time saved was computed using the formula: (total time saved/total time spent on conventional assessment) × 100%. Statistical analyses were performed using SPSS Version 28 (Armonk, NY, USA).
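The analyses above were run in SPSS; the sketch below only illustrates, in Python, how the paired comparison and the continuous agreement statistics described in this section could be reproduced. The file name, column names, and the choice of the SciPy, Pingouin and krippendorff packages are assumptions made for illustration, not part of the study's actual pipeline.

```python
import pandas as pd
import pingouin as pg
import krippendorff
from scipy.stats import wilcoxon

# Long-format ratings: one row per (rater, case, condition) -- hypothetical layout
ratings = pd.read_csv("ki67_ratings.csv")  # columns: rater, case, condition, pi_score, pi_error, tat

with_ai = ratings[ratings.condition == "ai"].sort_values(["rater", "case"])
without_ai = ratings[ratings.condition == "no_ai"].sort_values(["rater", "case"])

# Paired two-sided Wilcoxon signed-rank tests on PI error and turnaround time
print(wilcoxon(with_ai.pi_error.values, without_ai.pi_error.values))
print(wilcoxon(with_ai.tat.values, without_ai.tat.values))

# Single-rater ICC: Pingouin reports ICC1-ICC3k; select the row matching the chosen model
icc = pg.intraclass_corr(data=with_ai, targets="case", raters="rater", ratings="pi_score")
print(icc)

# Krippendorff's alpha on a (raters x cases) matrix of interval-level scores
score_matrix = with_ai.pivot(index="rater", columns="case", values="pi_score").values
print(krippendorff.alpha(reliability_data=score_matrix, level_of_measurement="interval"))

# Binary (>= 20%) agreement such as Fleiss' kappa could be computed analogously with
# statsmodels.stats.inter_rater.fleiss_kappa after aggregating per-case rater counts.
```

The same calls would be repeated on the without-AI subset so the two conditions can be compared side by side.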
The respondents' PI scores and PI errors per case and within ranges are shown in Table . Responses including outliers are shown in Supplementary Table . The overall mean PI error was found to be 2.1 (2.2) using the AI tool, and 5.9 (4.0) without the AI (difference of − 3.8%, 95% CI: −4.10% to −3.51%, p < 0.001). The AI tool significantly improved the accuracy of PI scoring. The PI error was plotted per case (Fig. A) and for each PI interval (Supplementary Fig. ). Cases 2 through 10 had significantly less error ( p < 0.001), and both the < 20% and ≥ 20% PI ranges had statistically significant decreases in error with AI ( p < 0.001). Furthermore, Fig. B, C, which display the PI error across various demographics, revealed that AI-aided scoring was superior across all pathologist age ranges and experience levels—indicating that despite variable background and training, AI improved PI accuracy for all groups of pathologists. Supplementary Fig. shows that AI-aided scoring was statistically superior ( p < 0.001) across all pathologist subdisciplines. The AI tool demonstrated high accuracy in the study, with a mean PI error rate of 0.6%, which ranged from 0.0 to 6.1%, as shown in Table . To quantify the increase of PI estimation accuracy when pathologists used the AI tool, Supplementary Fig. shows the difference in PI error for each case. This difference is calculated as the PI error for the estimated PI score with AI assistance minus the error without AI assistance, highlighting the extent to which the AI tool reduces error rates. Most pathologists experienced increased accuracy with the AI tool, as indicated by positive differences seen in Supplementary Fig. .
AI assistance led to a significant improvement in inter-observer reproducibility (with AI assistance: ICC = 0.92 [95% CI 0.85–0.98], Krippendorff’s α = 0.89 [95% CI 0.71–0.92], without AI assistance: ICC = 0.70 [95% CI 0.52–0.89], Krippendorff’s α = 0.65 [95% CI 0.41–0.72]). These statistics are visually depicted in Supplementary Fig. . Bland–Altman analyses (Fig. B, D) revealed that pathologists with AI assistance exhibited less bias (mean of 0.7 vs. − 2.7) and tighter limits of agreement (6.5 to − 5.1 vs. 10.2 to − 15.6) compared to the ground truth scores. Linear regression models (Fig. A, C) further support the notion that AI assistance improves inter-rater agreement (with AI assistance: y = 1.06x − 0.46, r = 0.92, SSE = 7792; without AI assistance: y = 0.64x + 3.62, r = 0.58, SSE = 33,992). After binarizing the pathologists' responses, with scores ≥ 20% assigned as 1 and scores < 20% as 0, the Fleiss’ Kappa values showed better agreement with AI assistance (with AI assistance: 0.86 [95% CI 0.85–0.86]; without AI assistance: 0.40 [95% CI 0.40–0.41]). Table shows that agreement levels are increased for every case when using AI, with some cases achieving 100% agreement.
A visual depiction of TATs for each case is provided in Fig. A. Table displays the mean response time, standard deviation, and time saved for each TMA case for PI scoring with and without the AI aid. Without AI assistance, pathologists required an average of 23.3 s to assess each TMA, with a median time of 7.5 s and an interquartile range (IQR) of 5.5 to 16.2 s. AI assistance led to a statistically significant increase ( p < 0.001) in efficiency where the average TAT per TMA reduced to 18.6 s, a median time of 6.4 s and a narrower IQR from 4.6 to 12.1 s. Figure B illustrates the TAT for each question, showing the progression of TAT across cases as they were presented to the pathologists. Due to initially high response times, likely caused by participants acclimating to the software and study setup, question 1 (Case 2 without aid and Case 7 with aid) was excluded from further analyses. For evaluations without AI, pathologists averaged 18.3 s per TMA, with a median time of 7.2 s and an IQR of 5.5 to 14.0 s. With AI support, the average TAT per TMA decreased to 16.8 s, the median time was 6.4 s, and the IQR narrowed to 4.7 to 11.6 s. The reduction in TAT was statistically significant among pathologists with experience ranging from 10 to 39 years (Fig. C) ( p < 0.001), and for pathology fellows, practicing and retired pathologists (Fig. D) ( p < 0.001). Supplemental Fig. shows the mean TAT with and without aid for various disciplines, where roles such as clinical and forensic pathologists were statistically faster ( p < 0.001). AI assistance resulted in an average reduction of 1.5 s per TMA [95% CI, −2.4 to −0.6 s, p < 0.001]. Supplementary Fig. displays a histogram of the distribution of the total percentage of time saved, calculated using the formula: (total time saved/total time spent on conventional assessment) × 100%. The mean percentage saving was 9.4%, with a median of 11.9%.
Pathologists’ opinions on the use of AI for Ki-67 assessment in breast cancer are summarized in Fig. . The majority of respondents considered the AI tool's suggestion, found it to be appropriate and agreed that this AI tool could improve accuracy, inter-rater agreement and TAT for Ki-67 assessments (Fig. A). Many respondents also agreed that they would personally implement and agree with the routine implementation of AI aid for Ki-67 assessments within the next decade (Fig. B, C).
Ki-67 serves as a crucial indicator for predicting cancer recurrence and survival among early-stage high-risk breast cancer patients , . It informs decisions regarding adjuvant chemotherapy and radiation therapy opt-out for Luminal A breast cancer patients . These clinical decisions often rely on PI scores between 5 and 30%; however, this range exhibits significant scoring variability among experts, making standardization and clinical application challenging , , . This inconsistency, combined with long assessment times using the current Ki-67 scoring system, has limited the broader clinical application of Ki-67, which, as a result, has not yet been integrated into all clinical workflows . AI technologies are being proposed to improve Ki-67 scoring accuracy, inter-rater agreement, and TAT. This study explores the influence of AI in these three areas by recruiting 90 pathologists to examine ten breast cancer TMAs with PIs in the range of 7 to 28%. Two previous studies aimed to quantify PI accuracy with and without AI , . One study demonstrated that AI-enhanced microscopes improved invasive breast cancer assessment accuracy . They had 30 pathologists use an AI microscope to evaluate 100 invasive ductal carcinoma IHC-stained whole slide images (WSIs), which provided tumor delineations and cell annotations. AI use resulted in a mean PI error reduction from 9.60 to 4.53. A similar study was conducted , where eight pathologists assessed 200 regions of interest using an AI tool. Pathologists identified hotspots on WSIs, after which the AI tool provided cell annotations for the clinician's review. The study found that this method significantly improved the accuracy of Ki-67 PI compared to traditional scoring (14.9 error without AI vs. 6.9 error with AI). Similarly, this study found that using AI assistance for PI scoring significantly ( p < 0.001) improved pathologists' accuracy, reducing both the PI error and its standard deviation across various demographics, including years of experience and specialties. This indicates that AI assistance leads to higher PI accuracy across all levels of pathologists' training, enabling professionals at every career stage to deliver more precise PI scores in the range critical for clinical decision-making. This improvement may help bridge experience gaps and is critical for PI scoring standardization. An underestimation trend, previously reported by , was also noted in this study, as shown by the PI correlation and Bland–Altman analysis (Fig. ). However, scoring with the support of AI improved PI accuracy for all cases and corrected this underestimation bias. This is exemplified by the scoring near the 20% cutoff, which simulates a clinical decision threshold. In conventional assessments, many pathologists selected the incorrect range (≥ 20% or < 20%), particularly for TMAs 7, 8, and 9, with ground truths of 19.8, 23.7, and 28.2, respectively. For instance, for TMA 8, 76.7% of respondents incorrectly estimated the score as < 20%. Errors like these would result in incorrect therapy decisions and poor patient outcomes. Fortunately, with AI assistance, the percentage of pathologists agreeing with the ground truth greatly improved, providing a strong incentive for the clinical use of AI tools in Ki-67 scoring. All cases showed a statistically significant PI error decrease with AI assistance, except for Case 1, with a ground truth PI score of 7.3% (p = 0.133).
This exception could be attributed to fewer Ki-67 positive cells requiring counting, which likely simplified the scoring process. In addition to accuracy, PI scoring agreement is critical to ensure that patients with similar disease phenotypes are delivered the proper therapeutic regimes. However, significant variability in Ki-67 scoring is widely recognized, even in established laboratories. A study led by , found reproducibility among eight labs was only moderately reliable with contributing factors such as subjective judgements related to PI scoring and tumor region selection. Standardizing scoring methods becomes imperative, as transferring Ki-67 PIs and cutoffs between laboratories would compromise analytical validity. In another study by , the variability in breast cancer biomarker assessments, including Ki-67, among pathology departments in Sweden was investigated. While positivity rates for HR and HER2 had low variability, there was substantial variation in Ki-67 scoring, where 66% of labs showed significant intra-laboratory variability. This variability could potentially affect the distribution of endocrine and HER2-targeted treatments, emphasizing the need for improved scoring methods to ensure consistent and dependable clinical decision-making. The study by , aimed to improve Ki67 scoring concordance with their AI-empowered microscope. They found a higher ICC of 0.930 (95% CI: 0.91–0.95) with AI, compared to 0.827 (95% CI: 0.79–0.87) without AI. Similarly , aimed to quantify the inter-rater agreement for WSIs with AI assistance across various clinical settings. The AI tool evaluated 72 Ki-67 breast cancer slides by annotating Ki-67 cells and providing PI scores. Ten pathologists from eight institutes reviewed the tool and input their potentially differing PI scores. When the scores were categorized using a PI cutoff of 20%, there was an 87.6% agreement between traditional and AI-assisted methods. Results also revealed a Krippendorff's α of 0.69 in conventional eyeballing quantification and 0.72 with AI assistance indicative of increased inter-rater agreement, however, these findings were not significant. In this study, we evaluated the scoring agreement with and without AI across 90 pathologists, representing one of the largest cohorts analyzed for this task. It was found that over the critical PI range of 7 to 28%, AI improved the inter-rater agreement, with superior ICC, Krippendorff’s α and Fleiss’ Kappa values compared to conventional assessments and higher correlation of PI estimates with the ground truth PI score. Additionally, there was a decrease in offset and variability, as shown in Fig. . These agreement metrics align with findings from earlier studies , and signify that AI tools can standardize Ki-67 scoring, enhance reproducibility and reduce the subjective differences seen with conventional assessments. Therefore, using an AI tool for Ki-67 scoring could lead to more robust assessments and consistent therapeutic decisions. AI applications have predominantly focused on automating the laborious tasks for pathologists, thereby freeing up time for high-level, critical decision-making, especially those related to more complex disease presentations – , . Some research into AI support tools in this field has demonstrated a notable decrease in TAT for pathologists. 
For instance, a study led by , which involved 20 pathologists analyzing 240 prostate biopsies, reported that an AI-based assistive tool significantly reduced TAT, with 13.5% less time spent on assisted reviews than on unassisted ones. Similarly, the study by , demonstrated a statistical improvement ( p < 0.05) in TATs when 24 raters counted breast mitotic figures in 140 high-power fields, with and without AI support, ultimately achieving a time saving of 27.8%. However, the study by , reported a longer TAT using an AI-empowered microscope in their study, which involved 100 invasive ductal carcinoma WSIs and 30 pathologists (11.6 s without AI vs. 23.8 s with AI). Our study found that AI support resulted in faster TATs (18.3 s without AI vs. 16.8 s with AI, p < 0.001), equating to a median time saving of 11.9%. Currently, our team only performs Ki-67 testing upon oncologists' requests, as routine Ki-67 assessment is not yet standard practice. This is partly due to the difficulties in standardizing Ki-67, compounded by pathologists' increasing workloads and concerns over burnout , . Pathologists' caseloads have grown in the past decade, from 109 to 116 annually in Canada and 92 to 132 in the U.S. . With the Canadian Cancer Society expecting 29,400 breast cancer cases in 2023 , routine Ki-67 assessments would significantly increase workloads. Therefore, the implementation of AI tools in this context could alleviate workload pressures by offering substantial time savings and supporting the clinical application of this important biomarker. The gold standard for assessing Ki-67 PI is manual counting , ; however, due to the labor-intensive nature of this method, many pathologists often resort to rough visual estimations , . As indicated in Table and Fig. , the shorter TATs suggest that respondents may have relied on visual estimations for Ki-67 scoring. Despite this, the TATs significantly improved ( p < 0.001) when using AI. This improvement was evident among experienced pathologists; however, some encountered longer TATs after integrating AI, possibly due to unfamiliarity with the AI tool or digital pathology viewing software. Although participants received a brief orientation and two initial examples, the novelty of the tool might have posed a learning curve. Addressing this challenge involves integrating the tool into regular practice and providing comprehensive training before its use. The perspectives of pathologists highlight a growing enthusiasm towards AI integration for Ki-67 evaluations for breast cancer. A significant 84% of participants agreed the AI’s recommendations were suitable for the task at hand. They recognized AI's ability to improve pathologists' accuracy (76%), enhance inter-rater consistency (82%), and reduce the TAT for Ki-67 evaluations (83%). Additionally, 49% expressed their intent to incorporate AI into their workflow, and 47% anticipated the routine implementation of AI within the next decade. An important observation is that many respondents who were hesitant about personally or routinely implementing AI in clinical practice were retired pathologists. In total, 83% of retired pathologists reported they would not currently implement AI personally or routinely, which is a stark contrast to only 15% of practicing pathologists who expressed the same reluctance. This positive outlook in the pathologist community supports the insights of this study and signals an increasing momentum for the widespread adoption of AI into digital pathology. 
The strength of this research is highlighted by the extensive and diverse participation of 90 pathologists, which contributes to the study's generalizability in real-world clinical contexts. Adding to the study's credibility is the focus on Ki-67 values around the critical 20% threshold, which is used for adjuvant therapy decisions. Moreover, the AI nuclei overlay addresses the transparency concerns often associated with AI-generated scores, thus improving clarity and comprehensibility for users. The ongoing discussion around 'explainable AI' highlights the importance of transparency in AI tools' outputs, a crucial factor for their acceptance and adoption . The outcomes of the study emphasize the positive outlook and readiness of pathologists to embrace AI in their workflow and serve to reinforce the growing need for the integration of AI into regular medical practice. The study has its limitations, one of which includes the potential unintentional inclusion of non-pathologists. The survey required respondents to confirm their status as pathologists through agreement before beginning; however, due to confidentiality limitations, no further verification was possible. In some instances, pathologists' scores deviated from the ground truth by more than 20%, with PI errors reaching up to 50%. Such large errors would render any PI score diagnostically irrelevant, as the variance exceeds the clinical threshold of 20%. These errors might be attributed to input errors or a lack of experience in Ki-67 assessments. Consequently, we used this threshold to filter out potentially erroneous responses. In total, 26 participants who logged responses exceeding the 20% error threshold were subsequently excluded from the study. For completeness, Supplementary Table discloses the PI scores and PI errors of all respondents, including outliers, where the data trends appear similar. The demographics of the study's participants reveal there was limited participation from currently practicing pathologists, representing 14.4% of respondents. This may be attributed to the time constraints faced by practicing pathologists. In future research, efforts will be made to include more practicing pathologists and to evaluate intra-observer variability. Additionally, while the survey provided specific guidelines for calculating the PI and applying Ki-67 positivity criteria, the accuracy and thoroughness of each pathologist's evaluations could not be verified. Lastly, the study deviated from standard practice by using TMAs instead of WSIs for Ki-67 clinical assessments. The rationale behind this choice was the expectation of more precise scoring with TMAs, as this eliminates the need to select high-power fields (a subjective process) and involves a lower number of cells to evaluate, leading to better consistency in visual estimations. Future research should focus on evaluating the accuracy achieved with AI assistance in identifying regions of interest and analyzing WSIs. This should also incorporate a broader range of cases and a wider PI range. Prospective studies involving solely practicing breast pathologists could also yield valuable insights into the real-world application of the AI tool and its impact on clinical decision-making. In conclusion, this study provides early insights into the potential of an AI tool in improving the accuracy, inter-rater agreement, and workflow efficiency of Ki-67 assessment in breast cancer. 
As AI tools become more widely adopted, ongoing evaluation and refinement will be essential to fully realize their potential and optimize patient care. Such tools are critical for robustly analyzing large datasets and effectively determining PI thresholds for treatment decisions.
Supplementary Information.
|
Enhancing dental education: integrating online learning in complete denture rehabilitation | 371f167e-3a2f-49f8-9528-9ac64cf3d863 | 11445855 | Dentistry[mh] | Complete denture rehabilitation (CDR) is a traditional prosthodontic treatment option for edentulous patients experiencing systemic, anatomic, or financial limitations . CDR is challenging for dental interns (fifth-year dental students) because of its patient-centered nature, the many factors influencing treatment planning, the need for technical precision, and the profound impact on patients’ functional and psychosocial well-being . Mastering CDR is crucial for dental interns because it not only meets the clinical needs of patients but also contributes to the intern’s professional growth, reputation, and overall success in their dental practice. It aligns to provide comprehensive and patient-centric care and prepare interns for the challenges and responsibilities of modern dentistry. Dental students in CDR navigate through a challenging and rewarding field that requires a comprehensive understanding of dental science and a commitment to ongoing education and skill development. Online learning refers to the application of information and communication technologies to support and enhance learning and teaching between students and teachers . Integrating online learning into CDR education during the internship year offers several advantages specific to this critical period’s unique challenges and requirements . First, interns often have demanding clinical schedules during their internship year. Online learning provides flexibility, allowing interns to access CDR education conveniently and avoiding conflicts with clinical responsibilities. It is a resilient alternative, allowing interns to continue their CDR education even when physical attendance is not feasible. The self-paced nature of online learning enables interns to revisit and review CDR concepts until mastery is achieved. This flexibility supports individualized learning and ensures that interns are well prepared for clinical applications. For example, Subramani et al. reported that 90.4% of preclinical students, and 80.4% of clinical students, used smartphones with various learning apps to enhance their learning online . Moreover, online learning opens doors to expertise worldwide, and online platforms enable the efficient use of educational resources. Online platforms facilitate the incorporation of diverse learning resources, such as videos, interactive simulations, and multimedia presentations. Interns can collaborate with peers in ways that are not constrained by the limitations of physical boundaries. Online learning platforms can accommodate various learning styles, catering to the diverse preferences of interns. Online learning eliminates costs associated with commuting, accommodations, and physical textbooks. This can be particularly advantageous for interns facing financial constraints during their internship year. Finally, in prosthodontics, CDR in particular, it is not easy to visualize and correlate theory with practice. Therefore, exceptional knowledge and training are necessary for students to master CDR skills. A study conducted by Gilmour et al. , and a national study conducted by Ali et al. , investigated the preparedness and confidence levels of undergraduate dental students in the United Kingdom. These studies provide insight into the challenges that dental interns may face during their training, especially in complex procedures such as CDR. 
Lack of clinical experience and traditional teaching methods can lead to students lacking the confidence to complete CDR. Familiarity with online learning platforms equips interns with the technological skills that are essential for modern dental practice. Online assessments and quizzes provide immediate feedback, allowing interns to gauge their understanding and identify areas for improvement. This timely feedback contributes to ongoing skill enhancement. Bahanan et al. evaluated dental students’ perceptions and overall experiences with e-learning and found that most students considered e-learning a positive experience . Furthermore, substantial progress has been made in educational methodologies, especially in the integration of virtual reality (VR) technologies and artificial intelligence (AI). These innovations are proving transformative in online education, especially in specialized areas such as CDR . Loka et al. studied the effect of reflective thinking on academic performance among undergraduate dental students . They recognized that self-directed learning is a vital principle promoted in health professions education, particularly with the increasing use of online learning methods. Furthermore, Linjawi et al. conducted a cohort study in Saudi Arabia to assess students’ perceptions, attitudes, and readiness toward online learning in dental education. The results indicated that the attitude and understanding of interns towards online education are crucial to its development and effectiveness . When online learning is incorporated into CDR education during the internship year, dental interns can benefit from a comprehensive and adaptable learning environment that addresses their unique needs, supports continuous skill development, and prepares them for successful clinical practice in CDR. Therefore, this study aims to evaluate dental interns’ background in CDR and assess their attitudes toward the online learning of CDR.
Participants and questionnaire
The study conducted a questionnaire-based online survey via the Universal Questionnaire Designer platform ( www.wjx.cn ) to assess dental internship students' backgrounds in and attitudes toward online learning of CDR. The survey comprised three parts and 20 structured questions, including students' online learning experiences, knowledge background about CDR, and attitudes toward online learning for CDR. The elements of the questionnaire are illustrated in Table . The study received ethical approval from the Academic Affairs Office of the West China School of Stomatology, Sichuan University (WCHSIRB-NR-2022-005). A total of 63 dental interns (19 male and 44 female undergraduate dental students) participated, and their privacy was safeguarded, with personally identifiable information kept confidential. The interns who participated in this survey were all fifth-year dental students who had just started their clinical internship. They had relevant theoretical knowledge but lacked practical clinical experience. The participants were required to respond to all the questions to ensure the completion of the electronic forms. Informed consent was obtained from all the participants involved in the study.
Data analysis
The data analysis involved descriptive statistics, with the findings presented as percentages. The response percentages were calculated based on the number of respondents for a specific response compared with the total number of answers to a question. This approach allowed for a comprehensive understanding of the participants' experiences, knowledge, and attitudes related to online learning of CDR.
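To make the descriptive approach above concrete, a minimal sketch of the percentage calculation is shown below; the item names and responses are hypothetical examples, not the survey data, and Python (pandas) is used only for illustration.

import pandas as pd

# Each row is one respondent; each column is one survey item (hypothetical labels).
responses = pd.DataFrame({
    "preferred_teaching_mode": ["online", "face-to-face", "face-to-face", "unsure", "online"],
    "participated_in_online_learning": ["yes", "yes", "no", "yes", "yes"],
})

# Response percentage = respondents giving a specific answer / total answers to that question.
for item in responses.columns:
    percentages = responses[item].value_counts(normalize=True).mul(100).round(2)
    print(item)
    print(percentages.to_string(), end="\n\n")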
Students' experience with online learning
In this survey, 63 undergraduate dental students participated, with a gender distribution of 19 males (30.2%) and 44 females (69.8%). The findings revealed that 22.22% of the students preferred online learning, whereas the majority (60.32%) favored traditional face-to-face teaching. In addition, 17.46% of the students expressed uncertainty about their preferences (Fig. a). The survey indicated a high participation rate in online learning, with 93.65% of the students engaging in online educational activities and only 6.35% who did not participate (Fig. b). In terms of the perceived necessity of online learning, 76.19% of the students believed that it is essential, whereas 6.35% held different opinions (Fig. c). Furthermore, 80.95% of the students were willing to participate in online learning, whereas only 4.76% strongly indicated an unwillingness (Fig. d). With respect to readiness for online learning of CDR, 71.42% of the students considered themselves prepared, whereas 12.70% felt unprepared for such learning (Fig. e).
Students' knowledge background about CDR
The evaluation of the students' knowledge background about CDR yielded noteworthy insights. Only 7.94% considered their knowledge of CDR to be good, with a substantial 63.49% rating it as average and 28.57% rating it as poor (Fig. a). With respect to confidence in clinical performance, a mere 11.1% expressed confidence, 65.08% lacked confidence, and 23.81% were uncertain (Fig. b). In terms of readiness for participation, 44.4% felt prepared, 28.57% believed they were not ready, and 26.98% were unsure (Fig. c). In terms of familiarity with the CDR treatment plan, 26.98% claimed to be familiar with it, an equivalent percentage did not know, and 46.03% were uncertain (Fig. d). Significant disparities in students' perceptions of appointment management for CDR patients existed, with 22.22% feeling confident, 46.03% feeling uncertain, and 31.75% having no idea (Fig. e). The level of communication confidence varied; 34.92% of the participants felt confident, 28.57% lacked confidence, and 36.51% were unsure (Fig. f). These findings indicate diverse levels of knowledge, confidence, and readiness, indicating potential areas for targeted educational interventions and support to enhance students' understanding and skills in this critical aspect of dentistry. In specific clinical tasks related to CDR, students exhibited varying levels of self-confidence and uncertainty. Notably, 30.16% believed that they could handle impression-taking independently, 38.10% thought that they could not, and 31.75% were unsure (Fig. a). Similarly, only 15.87% of the students felt confident in achieving occlusal relationships alone, whereas 50.79% believed they could not, and 33.33% were unsure (Fig. b). With respect to the selection of correct artificial teeth for patients, only 7.94% felt capable, 53.97% believed they could not, and 38.1% were unsure (Fig. c). In the CDR try-in stage, 31.75% thought that they could perform the task independently, while 46.03% believed they could not, and 22.22% were uncertain (Fig. d). In addition, 39.68% believed that they knew how to instruct patients on wearing complete dentures, while 30.16% did not, and 30.16% were unsure (Fig. e). Finally, approximately 31.75% thought they knew how to provide postoperative guidance, 36.51% did not, and 31.75% were unclear (Fig. f). These findings indicate the diverse self-perceptions and potential areas for targeted educational support in specific clinical competencies related to CDR.
Students' attitude about online learning of CDR
When we explored students' attitudes toward online learning for CDR, significant insights emerged. A substantial percentage (60.90%) of the students enjoyed participating in online CDR learning, with only 6.35% expressing dislike, and 31.7% being unaware (Fig. a). In addition, a majority (71.43%) expressed a strong desire to continue online learning for CDR, only 7.94% declined, and 20.63% remained undecided (Fig. b). When assessing attitudes toward online learning in general, 82.54% of the students believed it was helpful, 6.35% held a contrary view, and 11.11% were uncertain (Fig. c). These findings underscore a positive inclination toward online learning for CDR among students, suggesting its perceived effectiveness and acceptance within the academic context.
Ensuring that dental students master CDR during their internship is crucial because this stage is pivotal in the transition to clinical practice. The internship provides a vital opportunity for students to apply their theoretical knowledge in real-world settings, refine their technical skills, and gain practical experience. Effective mentoring, hands-on training, and exposure to a variety of cases are essential in building confidence and competence in CDR. Integrating both traditional and innovative teaching methods, including online learning tools, can enhance this learning process. The flexibility of online platforms, which are accessible through smart devices and apps, enables students to review lessons at their convenience, making them a viable choice. Despite challenges, some scholars predict that online learning for dentures will become mainstream by 2025 . Thus, the active promotion and participation in online learning for CDR have become imperative in the current circumstances. The original purpose of this study was to assess dental students’ knowledge and attitudes toward online learning in CDR, with the aim of enhancing their understanding and clinical practice. The data indicated high engagement in online learning among students, with a majority recognizing its necessity and planning future participation. This inclination may be attributed to the increased emphasis on digital education and the use of online platforms during the COVID-19 pandemic, as supported by Wang et al.’s findings on the widespread adoption of online courses in dental education . However, it is important to point out that the data showing that more students prefer face-to-face teaching do not conflict with the popularity of online learning. The prevalence of online learning is not a substitute for face-to-face learning but rather a necessary auxiliary means of learning. Through online learning, conflicts between time arrangement and low personal learning efficiency can be easily solved, and thus it is more suitable for students who wish to personalize intensive learning so they can fully master the relevant knowledge. Moreover, the low efficiency of teacher‒student interactions can also be effectively solved via traditional face-to-face learning. This study also revealed gaps in students’ readiness and confidence with respect to CDR. Although a significant proportion felt prepared for online CDR learning, only a small percentage rated their CDR knowledge as good, and even fewer felt confident in performing CDR clinically. This may be due to the complexity of oral rehabilitation with complete dentures, which requires extensive theoretical and practical knowledge . Nonetheless, the literature suggests that students’ practical abilities and confidence can improve significantly during their internship, emphasizing the importance of clinical experience . The survey also explored students’ familiarity with various aspects of CDR, such as treatment planning, appointment scheduling, patient communication, and specific procedural skills. The results revealed that students had limited confidence in these areas, highlighting the need for targeted improvements in online CDR education to address these gaps. This necessity is highlighted by the ongoing relevance of CDR, especially for older patient populations . Overall, this study highlights the potential of online learning to improve dental students’ proficiency in CDR while also identifying specific areas for educational focus. 
In higher education, the acceptance of online learning is primarily due to its time and cost efficiency. As physical and online classrooms increasingly merge, dental students must adapt to online learning environments . Prosthodontics, a comprehensive subject, necessitates internships for dental graduates to develop clinical, communication, and teamwork competencies . Internships are pivotal in cultivating patient-centered attitudes and behaviors, thus significantly enhancing students’ future clinical performance. Therefore, assessing students’ attitudes and performance is crucial in evaluating the success and value of online learning . Our survey revealed interns’ strong optimism about online learning for complete dentures: 61.90% enjoyed online learning, 71.43% were motivated to continue, and 82.54% found it beneficial for CDR (Fig. a-c). However, dental students still require hands-on training and opportunities to apply their skills clinically . We advocate early clinical exposure and active preclinical prosthodontic teaching methods. Sole reliance on the internship year for the acquisition of procedural skills is inadequate. Dental students across all specialties need efficient access to educational materials. Research has indicated that the effectiveness of online learning is on par with, or exceeds, that of face-to-face methods . Chang et al. reported a 5–10% improvement in learning efficiency when blended learning was used compared with traditional methods . The positive perception of online learning among students and lecturers suggests its potential integration into post-COVID-19 curricula . However, given that stomatology focuses on clinical practice, the lack of practical experience may impede the enhancement of clinical skills , necessitating further research to evaluate the effectiveness of online learning in such practical subjects. In the field of dental education, online learning can leverage advanced digital resources to significantly enrich the learning experience, in particular in CDR techniques. For example, the application of on-demand, enhanced videos equipped with real-time subtitles that capture the presenter’s dialogue, along with concise text bullet points and summary pages, offers a robust platform for augmenting knowledge acquisition, enhancing perceptual skills, and improving clinical performance in dentistry . In addition, the integration of custom-built simulation models for impression-taking and tooth arrangement exercises can substantially improve online learning outcomes for dental students, fostering practical skills in complete denture impression creation and tooth positioning . Furthermore, the adoption of multimedia learning applications, such as video demonstrations of artificial teeth placement and patient case studies, serves to uphold or even increase the quality of dental education, effectively bridging the gap left by the absence of face-to-face instruction . Finally, the deployment of AI-driven e-learning tools, such as the Generative Pre-trained Transformer 4 (GPT-4) model by OpenAI, exemplifies a forward-thinking approach to training . This model facilitates an immersive learning environment that allows students to engage in realistic diagnostic conversations with virtual patients, thereby honing their diagnostic capabilities in a controlled yet lifelike setting. 
Furthermore, virtual reality (VR) has emerged as a transformative force in dental education, heralding a new era characterized by immersive training environments and immediate feedback mechanisms . This innovation facilitates the acquisition of standardized skills among students, bridging the gap between theoretical knowledge and practical expertise. The advent of VR simulation-based pedagogy marks a significant shift in educational paradigms, encompassing both undergraduate and postgraduate realms . It acts as a complement—or, in certain contexts, an alternative—to conventional training methodologies across various dental specialties . Although VR simulators cannot entirely supplant traditional hands-on training, their utility and effectiveness in specific educational scenarios are undeniable. In particular, VR technologies, in conjunction with three-dimensional computer models and simulators, are proving to be invaluable assets in the comprehensive management of edentulous patients. Research conducted by Mansoory et al. highlights the efficacy and utility of VR in facilitating the learning process related to the neutral zone and teeth arrangement for edentulous patients, thereby fostering a dynamic, engaging, and successful educational experience . The integration of VR simulators with advanced technological frameworks, such as big data analytics, cloud computing, the proliferation of 5G networks, and deep learning algorithms, promises to further revolutionize preclinical dental training. Numerous dental colleges and universities have already embarked on integrating VR-based experimental teaching into their curricula, highlighting the feasibility and adaptability of such innovative teaching modalities . Educational researchers now have a responsibility to rigorously evaluate this novel online and VR-assisted teaching methodologies. Their objective is to ascertain the efficacy of these methods in comparison with traditional educational approaches, ensuring that the quality and efficiency of dental education are not only maintained but also significantly enhanced. Virtual reality (VR) technologies provide immersive and interactive experiences that are particularly beneficial for dental education, where hands-on practice is essential . In complete denture rehabilitation, VR can re-create clinical environments, enabling students to practice crucial techniques such as impression-taking, jaw relation recording, and denture fitting in a controlled and highly realistic setting. This immersive experience allows students to hone their motor skills and decision-making abilities in a safe environment where mistakes can be made without compromising patient safety. VR also enhances the visualization of complex anatomical structures and the interaction of dentures with oral tissues, deepening students’ understanding of denture design and function . This critical aspect is often challenging to master through traditional methods. Artificial intelligence (AI) complements VR by offering personalized learning experiences, assessing student performance in real time, and providing instant feedback . AI-driven platforms can identify areas where students struggle, such as achieving proper occlusion or understanding material properties, and offer tailored resources to address these gaps. AI also enhances the accessibility and scalability of education by adapting content to different learning styles and paces, which is particularly valuable in online education environments. 
Integrating VR and AI in complete denture rehabilitation education also offers new opportunities for dynamic and interactive assessments of practical skills and clinical decision making; however, adopting these technologies requires significant investment and careful planning to ensure effective integration into educational curricula . Despite these challenges, VR and AI hold promise for revolutionizing dental education by making it more immersive, personalized, and effective in preparing students for clinical practice.
The manuscript discusses the importance of CDR training for dental interns, emphasizing the role of online learning in enhancing education. This highlights the need for comprehensive understanding, skill development, and the integration of innovative teaching methods. The study explores dental students’ attitudes toward and knowledge of online learning of CDR, revealing not only engagement but also gaps in readiness and confidence. The results suggest the necessity of improving online CDR education. This study also highlights the ongoing relevance of CDR, in particular for older populations, and the need to integrate theoretical knowledge with innovative technologies such as virtual reality into dental education. The results suggest that a balanced approach to online and traditional learning is crucial for equipping future dental professionals with the necessary skills and confidence to succeed in |
Simulation as tool for evaluating and improving technical skills in laparoscopic gynecological surgery | da3aa090-4383-4a84-be26-f1db6bdc4aa2 | 6796391 | Gynaecology[mh] | Surgical training is one of the most important aspects in different medical specialties. Trainees can obtain competences in traditional and minimally invasive surgery only after years of practice and sacrifices. Traditionally, one of the main methods of surgical residency training remains an intensive internship in the operating room, although it presents several negative aspects including potential risks to the patient safety and the need for trainees to spend many hours in the operating theatre before achieving good results, with limited training opportunities due to the lack of time and to the many professional activities that must be performed . To overcome difficulties about this “learning-by-doing” approach, a lot of simulators and box trainers have been tested in the last decades to evaluate their effectiveness for surgery training, demonstrating their ability to improve technical skills, operative performance and coordination [ – ] and therefore simulation programs are now considered a key role in the surgical learning process. The American Board of Surgery in 2008 announced that, among the necessary requisites to complete general surgery residency in the United States, the Fundamentals of Laparoscopic Surgery (FSL) were mandatory; the goal of this program is to allow residents to learn and practice technical skills and then test them to ensure an appropriate required skills level . Also, the program of the OBGYN residency could benefit from training models. Several studies demonstrated the role of simulation to improve obstetric skills in specific clinical situation such as shoulder dystocia, vaginal delivery for breech presentation or vacuum extraction [ – ]. The aim of this study is to demonstrate the role of simulation in laparoscopy on improving technical skills in trainees in obstetrics and gynecology and as evaluation tool to self-assess own capacities with the use of a modified OSATs scale.
Thirty-three residents (post-graduate training year, PGY, 1–5) from the Division of Obstetrics and Gynecology of University of Pisa, Italy, performed five simulated surgical procedures and were evaluated by an external observer, an expert surgeon with extensive experience in minimally invasive surgery. All the procedures were selected to assess laparoscopic skills and presented a level of increasing difficulty using a high-fidelity simulation platform, the Simsei training system (© 2018 Applied Medical). The five tasks for each station were: 1) creating pneumoperitoneum and positioning trocars under vision, using a first entry kit that reproduced the characteristics of the abdominal wall; 2) moving six pegs on a platform, from right to left and from left to right, using dominant and non-dominant hand; 3) changing the shape of a rubber band on a platform with spikes using dominant and non-dominant hand; 4) cutting precisely a circle printed on a dual layer gauze which was laterally fixed with supports; 5) making a single stitch/knot on a silicon suture-pad that simulated tissues. Trainees performed all the procedures several times at different moments of the day, on their own. In the first phase, procedures were executed without any information about the "correct" type of execution. This phase was intended to make trainees confident with simulation. At the beginning, trainees received only indications concerning the exercises and then they started doing procedures. Each procedure was evaluated by four expert surgeons of our department, who were randomly assigned, and each trainee filled in a self-evaluation test. After that, the correct execution of each task was shown by expert surgeons to the trainees and, after 2 h of training, they repeated all the five procedures again and a new evaluation was performed. Procedures were executed in the same order, from exercise number one to exercise number five. Time used to perform the procedure was evaluated only in the fourth station. Faculty members, during the second examination, could not see previous results to avoid any influence in the judgment. Assessments were given using OSATS (Objective Structured Assessment of Technical Skills) (Table ) that were specifically adapted for each station to evaluate surgical abilities for each procedure. OSATS presented scores ranging from 1 to 5, with minimum and maximum scores of 3 and 15 for the first station, 4 and 20 for the second, third, and fourth stations, and 5 and 25 for the fifth station. Tasks were the same for young and senior trainees, and no differences on procedures were considered between trainees with higher experience in the surgical room or different sub-specialties. This aspect is pivotal to consider. The training program of the Department of Experimental and Clinical Medicine, Division of Obstetrics and Gynecology, University of Pisa, Italy, indeed provides a basic level of surgical competences for all the trainees during 5 years. Therefore, each resident, starting from the fourth year of residency, can decide which subspecialty to focus on.
For the statistical analysis, the Wilcoxon matched-pairs signed rank test was used to study time-operative outcomes in exercise 4. In addition, two-way ANOVA was performed followed by Holm-Sidak's multiple comparisons test. Values of p < 0.05 were considered significant (* p < 0.05; ** p < 0.01; *** p < 0.001).
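The statistical workflow described above can be sketched in Python as follows; the data frame layout, column names, and simulated values are assumptions for illustration only and do not reproduce the study data.

import numpy as np
import pandas as pd
from scipy.stats import wilcoxon
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_residents = 33

# One row per resident per session (before vs. after the 2-h training), hypothetical values.
df = pd.DataFrame({
    "resident": np.repeat(np.arange(n_residents), 2),
    "session": np.tile(["before", "after"], n_residents),
    "seniority": np.repeat(rng.choice(["junior", "senior"], n_residents), 2),
    "time_ex4_s": rng.normal(loc=350, scale=90, size=2 * n_residents),  # exercise 4 times (s)
})

# Paired Wilcoxon signed-rank test on exercise 4 execution times (before vs. after training).
before = df.loc[df["session"] == "before", "time_ex4_s"].to_numpy()
after = df.loc[df["session"] == "after", "time_ex4_s"].to_numpy()
print(wilcoxon(before, after))

# Two-way ANOVA (session x seniority), then a Holm-Sidak correction applied to the
# p-values of the planned pairwise comparisons (placeholder values here).
model = ols("time_ex4_s ~ C(session) * C(seniority)", data=df).fit()
print(anova_lm(model, typ=2))

pairwise_p = [0.012, 0.048, 0.20, 0.003]  # hypothetical raw p-values from pairwise tests
reject, p_adjusted, _, _ = multipletests(pairwise_p, alpha=0.05, method="holm-sidak")
print(p_adjusted, reject)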
Objective structured assessment of technical skills
The original OSATS model was developed in 1997 to evaluate general surgery residents performing operative tasks both on live anaesthetized animals and on bench models. The scoring system included three parts and was used for each task. It consisted of: 1) a task-specific check-list; 2) a seven-item global rating scale; 3) a pass/fail judgement . In our study, we partially modified the original OSATS model (Table ) to fit our type of simulation model. We organized 5 different tasks, which covered the basic skills to perform laparoscopic surgery. An external observer reported the competence of trainees with a score from 1 to 5, considering 5 as optimum. In task 1, we evaluated the competence of trainees on creating pneumoperitoneum and positioning trocars correctly under vision. Then, in the other tasks we evaluated further important aspects of mini-invasive surgery, such as instrument handling and synchronization between the hands (tasks 2 and 3), the careful handling of tissue and the cutting technique (task 4), and making stitches and knots with laparoscopic instruments (task 5). For each task, we considered time and motion of trainees. Time was evaluated with a general consideration of the time of execution of each exercise. Only for task 4, which is considered the summary of different expertise and laparoscopic skills, did we record the exact time (in seconds) required to complete the entire exercise.
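As a simple illustration of how such per-station totals behave, the sketch below sums hypothetical 1-5 item ratings for one station; the item names are invented for the example and are not the items of the modified scale.

# Hypothetical items for station 1 (three items, so totals range from 3 to 15 as in the text).
STATION_1_ITEMS = ["pneumoperitoneum_creation", "trocar_placement", "time_and_motion"]

def station_total(ratings):
    """Sum 1-5 ratings for the items of a station and check the expected range."""
    values = [ratings[item] for item in STATION_1_ITEMS]
    if any(v not in (1, 2, 3, 4, 5) for v in values):
        raise ValueError("each item must be rated from 1 to 5")
    return sum(values)

print(station_total({"pneumoperitoneum_creation": 3, "trocar_placement": 4, "time_and_motion": 2}))  # 9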
Simulation as tool to evaluate different competences of trainees
Main scores, obtained before the simulation training, show similar results for all the stations (Fig. ), with junior trainees (PGY 1–3) achieving worse scores in comparison to senior trainees (PGY 4–5). However, this trend was statistically significant for tasks 1–4 but not for task 5 (Exercise 1: 6.947 vs. 11.07, p = 0.0004; Ex. 2: 9.684 vs. 12.64, p = 0.0196; Ex. 3: 10.05 vs. 13.29, p = 0.0084; Ex. 4: 8.263 vs. 11.43, p = 0.0104; Ex. 5: 9.947 vs. 11.93, p = 0.2340).
The use of simulator improves technical skills in each exercise
Trainees improve their OSATS mean scores with the use of the simulator. Except for exercise 1, the use of the simulator improves the mean score of the other exercises (2–5) in a statistically significant way, as shown in Fig. (Exercise 1: 8.697 vs. 9.03, p = 0.6686; Ex. 2: 10.94 vs. 13.48, p = 0.00119; Ex. 3: 11.42 vs. 14.27, p = 0.00029; Ex. 4: 9.606 vs. 12.06, p = 0.00176; Ex. 5: 10.79 vs. 14.67, p < 0.00001).
Simulation improves technical skill especially in "naïve" residents
The improvement in technical skills is similar in all trainees, without statistically significant differences (exercise 1: mean score differences 1.3 vs 0.8; exercise 2: mean score differences 2.6 vs 1.2; exercise 3: mean score differences 3.1 vs 4.1; exercise 4: mean score differences 3.8 vs 1.8; exercise 5: mean score differences 2.8 vs 6), but it is worth noting that, after the training with the simulator, junior residents (PGY 1) present scores very close to those achieved by senior residents (PGY 5) before simulation, especially in exercises 2–4 (Fig. ).
Simulation reduces time of execution of the exercises
The time of execution for each task was evaluated exclusively in task number four. For this task, the time was measured for all 33 residents in both the first and second tests. There was a statistically significant reduction in time in the second test, demonstrating how practice affects execution times (Exercise 4a before simulation: mean time 393.5 ± 117.5; Exercise 4a after simulation: mean time 254.8 ± 76.85; p < 0.0001) (Fig. ). It is important to note how the time difference is statistically significant both in the group of residents who are dedicated to surgery and in the group of those who are not (mean time in surgery group: 335.4 vs. 193.6; p = 0.0002; mean time in no-surgery group: 436.2 vs. 299.8; p < 0.0001). The group with higher experience in surgery performed the entire exercise with shorter times than the no-surgery group, both in the first test and in the second test (mean time before simulation: 335.4 vs. 436.2; p = 0.0029; mean time after simulation: 193.6 vs. 299.8; p = 0.0025). Similar results were also obtained by comparing times of the trainees of first and fifth year, with statistically significant results: in both groups, there was a reduction in the execution time after the simulation (mean time before simulation: 466 vs. 305.4; p = 0.0019; mean time after simulation: 323.6 vs. 187; p = 0.0182), as shown in Fig. b.
Simulation as tool for self-assessment
For all the five tasks, both in the first and second tests, assessment and self-assessment were performed. The self-perception of what trainees do overlaps with the judgment of external observers (Fig. ). However, this does not occur for the last task, number five. In this task, the scores of the self-assessments are statistically lower than those of the evaluation by the external observers. This difference was present before and after simulation training (Exercise 1a: 8.697 vs. 8.303; p = 0.9921; Exercise 1b: 9.03 vs. 10.06; p = 0.8234; Ex 2a: 10.94 vs. 10.58; p = 0.9921; Ex 2b: 13.48 vs. 12.73; p = 0.9176; Ex. 3a: 11.42 vs. 11.06; p = 0.9921; Ex. 3b: 14.27 vs 13.42; p = 0.9050; Ex 4a: 9.606 vs. 9.485; p = 0.9921; Ex 4b: 12.06 vs. 11.88, p = 0.9921; Ex 5a: 10.79 vs. 7.303; p = 0.0001; Ex 5b: 14.82 vs. 9.364; p < 0.0001).
Institutional review board
This study did not require any approval by our IRB according to our national regulations on simulation studies; however, all the participants underwent the informed consent procedure, and participation in the research was completely voluntary. All results remained confidential, and complete anonymity for participants in the manuscript was ensured.
Laparoscopic training is an important aspect of the curriculum for general surgeons, and for gynecologists. This type of procedure requires a set of skills as coordinating instrumentation, cutting, knotting, 2D optics, depth perception which constitute the FSL. These capacities can be reached only with a long and challenging curve of learning in the operating room. And also after these efforts, validated methods for assessing laparoscopic skills remain debatable and there are not approved protocols for the standard use of simulation as a tool for evaluating and improving the surgical skills of the operators . Indeed, some authors have partially questioned the role of simulation as training compared to surgical experience. Comparing the operating times of experienced surgeons with young trainees, despite having similar scores in simulation, the first group presented completely different results in operating room . The aim of this study is not only to provide further data to support the routine use of simulation in the educational field, but, above all, the use of simulation as a useful evaluation tool by an external observer and as a self-assessment test. In this paper, we demonstrated the role of simulation to evaluate and to improve technical skills in OBGYN trainees. By using OSATS, we evaluated trainees’ training and their improvement, comparing the role of simulation on the learning process of no-expert operators and on meliorating technical skills depending on the prior experience in surgery. Furthermore, the judgment expressed by external observers corresponds to the trainee’s perceptions. This element is very important to stress. Trainees can self-evaluate their ability to perform FSL and to understand any improvements and this self-assessment corresponds to the judgment of the observer. Previous researches have demonstrated the usefulness and the validity of OSATS to evaluate residents’ training and their improvement. OSATS have become one of the most used tools for surgical skills assessment, thanks to its construct validity and inter-examiner reliability [ , , ]. Although OSATS were used in laboratory setting, their reliability has been repeatedly demonstrated even in the operating room, making it a very useful instrument to evaluate the operative skills in their entirety . In recent years, however, some authors have shown greater validity and reliability of the global rating scale than the other scores, using it also in a modified form . Also in this study, we used the global rating scale but not in its original form. For each task a specific score has been created, adding criteria like hands synchronization and removing others unnecessary for that exercise. The use of OSATS to evaluate technical skills in no-general surgery trainees as gynecologists is limited . Even if FSL are all basic technical skills, usually this part of the training is relatively considered. Most of the authors focus their attention on an objective scale for the assessment of specific procedures (hysterectomy, annexectomy, etc) . By using OSATS applied to simulation, we could evaluate different competences of trainees in each task. As expected, young trainees presented lower scores than seniors in statistically significant way. This did not happen for exercise 5, where trainees had to make stitches and knots on a silicon suture-pad. The most reasonable option of this result could be the difficulty of this exercise. 
In fact, senior trainees still presented better results but not in statistically significant manner. However, simulation has a pivotal role on ameliorating technical skills. In this study, we demonstrated the use of simulation improves, in a statistically significant way, specific surgical competences as handling instruments, using dominant and no-dominant hands, cutting, making single stitch/knot in a lab setting (Table ). After simulation, trainees significantly improve their technical skills in all the considered domains. However, it is worthy to note no differences were found in task 1, (introduction of trocars and Verres’s needle). This could be due to different reasons: the system we used for simulation did not perfectly match with reality. In this type of exercise, this lack of likelihood could lead to a difficult evaluation of the correct introduction of trocars and needle. At the same time, for this type of exercise the recognition of the instruments and its handling, outside the operating field, is also intuitive for trainees who have never performed laparoscopy in person. These elements could justify no significant differences before or after simulation. The benefit of simulation relapsed on junior (first year of residency) and senior (last year) trainees. In each task, competences resulted significantly improved and the powerful of this improvement is particularly important in junior trainees. After simulation, junior trainees presented competences, which are superimposable with those of senior trainees before simulation. This aspect deserves to be highlighted. After a few sessions of simulation, junior trainees reached the same level of basic technical skills on laparoscopic surgery, which senior trainees reached after at the least 2–3 years of work in surgical theater. Also, the execution time improves before and after the simulation training and this difference persists even when comparing junior and senior trainees. This data is in contrast with other papers already published on this issue but it should be emphasized that, in our study, the number of participants is a little bit higher. Obviously, we must consider this data with extreme caution even because recruited residents were only in Gynecology and Obstetrics. However, this data could support the use of simulation before any training in vivo, on the patient from the beginning of residency. The development of simulation in Medicine has often been hampered by misperception of reality that would reduce the simulation to a game for adults. In truth, several studies now show how simulation can help not only to acquire technical skills but also to improve the perception of one’s own abilities in that specific task [ , , ]. Several authors even consider mandatory the use of simulation before starting rotation in surgical room . In our study, we have shown that simulation can also be a useful tool for evaluating technical skills of the trainee by an external tutor. This evaluation, moreover, corresponds to the learner’s self-judgment in all the tasks observed except for exercise 5. The discrepancy in this exercise is not causal. This exercise requires high surgical skills and expert operators in clinical practice usually perform it. Evidently, based on this comparison, trainees consider badly their performance even if, from a purely technical point of view, their skills are good at the judgment of their tutors. This work presents some weaknesses. 
Evidently, in the learning process, per the Kirkpatrick’s Model, the third level is missing . This aspect is necessary to analyze and evaluate results of training and educational programs. In fact, this study did not evaluate the differences in the participant’s behavior at work after completing the program. From an experimental point of view, this study does not have an in vivo “after simulation” evaluation. This limitation of the study depends on several elements, including the decision shared by our Institute’s Ethical Committee, not to test the ability of young trainees in patients for pure research purposes. In our residency program, the approach to the operating room as operators is gradual and takes place in about 5 years. The activity of the first operator in major surgery, where an appropriate laparoscopic skill is required, is reserved for the last 2 years of the program and for those residents who want to specialize themselves in the surgical gynecology. However, in literature, skills acquired by simulation-based training are transferable to the operative setting [ – ]. The lack of direct access to the operating room by all the specialists limits an adequate training and simulation could help us in this as pointed out by some authors [ – ]. For this reason, we have routinely introduced simulation in our programs. However, it should be highlighted that the real purpose of the study is to use simulation as a tool for evaluation in safety. In our study, we demonstrated that simulation can be a useful tool to evaluate a trainee by a senior tutor and as self-evaluation.
Our data support the routine use of simulation in laparoscopy to evaluate and improve surgical skills in trainees. Indeed, trainees could use simulation to self-test their capacity before practicing on patients.
|
Wearable-derived short sleep duration is associated with higher C-reactive protein in a placebo-controlled vaccine trial among young adults | 15892782-1483-47e0-ab13-94da32807272 | 11947287 | Pathologic Processes[mh] | C-reactive protein (CRP) is a systemic and sensitive marker of inflammation and infection . The plasma levels of CRP increase rapidly with onset of an inflammatory stimulus and decrease quickly following the removal of the stimulus. Elevated CRP levels have been associated with inflammation, cancer, obesity, metabolic syndrome, and increased risk of cardiovascular diseases, including heart attack and stroke – . In addition, CRP was found to be elevated within the first 2 days after vaccinations and returned to baseline values within 1 week , . The typical inflammatory response to vaccine has been characterized by tracking changes in inflammatory markers like CRP . For example, Paine et al. found that administration of Salmonella Typhi vaccine significantly increased granulocytes and cytokines IL-6 and TNF-α with a peak after 6–8 h, and CRP with a peak after 24h . Posthouwer et al. found that 48 h after influenza vaccine, alone or with the pneumococcal vaccine administration, CRP was substantially elevated . It has been shown that a number of factors can affect such responses, for example cytokines can remain elevated for several weeks after vaccination in elderly , or have an insufficient increase in HIV-infected individuals , or individuals with major depressive disorder and trauma . Other factors, such as sleep, may also affect the immune response to vaccinations since sleep plays an essential role in physical and physiological functions. Inadequate sleep duration has been associated with an increased risk of mortality and a variety of adverse health outcomes, such as cardiovascular disease, abdominal obesity, hypertension, and mental disorders – . Other studies have shown the importance of appropriate sleep duration for proper immune system function. For instance, short sleep duration has been indicated as a factor in the susceptibility to infectious illnesses such as the common cold and COVID-19 , – . Recent research found that insufficient sleep was associated with decreased immunologic response to several types of vaccines and may impair the development of immunity to vaccinations . Furthermore, extreme sleep durations have been reported to lead to increased levels of inflammatory markers, particularly CRP – . A longitudinal analysis of sleep duration and biomarkers showed that longer sleep duration was associated with elevated CRP levels , . In contrast, other studies reported that subjects experiencing sleep deprivation exhibited raised CRP levels . In addition to duration, sleep quality has also been reported to influence immune response. Poor sleep continuity, disturbance of sleep architecture, and poor sleep quality have been found to correlate with susceptibility to infections and elevated CRP levels . The digital revolution has led to a proliferation of wearable technologies that can automatically detect and track sleep in real-time . Since the passive tracking of sleep metrics using wearables is more accessible and less intrusive compared to subjective sleep questionnaires and gold standard polysomnography (PSG), wearable devices are useful for monitoring sleep health in real-world settings – . 
Despite recent research exploring the potential and reliability of wearable-derived sleep metrics , the role of sleep metrics from wearables in public health remains unclear. We recently conducted a triple-blinded, randomized clinical trial which aimed at mimicking mild inflammatory responses in real infections using vaccinations . Trial participants were administered either vaccinations or placebo injections and were monitored over 4 weeks by blood samples, wearable devices, and questionnaires. Wearable devices were used to continuously monitor participant physiology (including during sleep). In terms of changes in blood biomarkers post-vaccination/placebo, we specifically found that CRP levels increased following administration of both Typhoid Vi Polysaccharide (Typhim Vi) and Pneumococcal Polysaccharide Vaccine (PPSV23) vaccines . In this retrospective study, we aimed to fill the knowledge gap regarding sleep duration and inflammatory responses triggered by vaccination or placebo administrations. Specifically, we focused on objectively measured sleep duration measured by wearable devices and investigated how these sleep metrics from wearables were associated with the inflammation biomarker CRP. We focused specifically on sleep duration and efficiency as sleep has been found to be accurately detected with the Oura Generation 2 smart ring (89% accuracy in sleep vs. awake estimation) , which was one of the wearable devices used in the clinical trial – . While measured by the Oura ring, we did not investigate metrics such as sleep quality since they are dependent on time spent in specific sleep stages and have more variability (61% agreement with PSG in multi-state categorization) and a precise assessment would require an assessment via polysomnography .
Summary of previously completed clinical trial
In this study, we investigated the participants who were enrolled in the Persistent Readiness Through Early Prediction (PREP) immunization study. PREP was a triple-blinded, randomized clinical trial conducted at Texas A&M University in 2022 . The details of the study design, participant selection, and data collection of the PREP study were published previously in Wang et al . Here, we briefly summarize aspects of the trial that are relevant to the current study.
Study population
A CONSORT diagram illustrating participant assessment for eligibility, randomization, and on site visits for clinical trials data used in the current study was published previously . Following a 14-day baseline period, healthy participants, aged 18–40 years, were randomly administered either a PPSV23, Typhim Vi, or placebo vaccination (NCT05346302). Only participants who were judged to be in satisfactory health based on medical history and physical examination, according to a physician's judgment, were included in the trial. Participants diagnosed with chronic diseases, such as diabetes and asthma, were excluded. Data was collected at various time points over 4 weeks from blood samples, wearable devices, and questionnaires. Following a 2-week baseline period, 210 healthy participants, aged 18–40 years, were administered either a PPSV23, Typhim Vi, or placebo vaccination. Pneumococcal Polysaccharide Vaccine (PPSV23) is a vaccine designed to protect against pneumococcal disease, which is caused by Streptococcus pneumoniae . Typhoid Vi Polysaccharide Vaccine (Typhim Vi) is designed to protect against Typhoid fever, which is a potentially life-threatening intestinal illness caused by Salmonella Typhi . Informed consent was obtained from the participants. The study was approved by the Texas A&M University Institutional Review Board (IRB) and Human Research Protection Official, as well as registered on clinicaltrials.gov (NCT05346302), in accordance with relevant guidelines and regulations. We included clinical trial participants in the current study if they had at least 3 nights of data tracked by a smart-ring wearable device (Oura ring) during the 14-day baseline period and high-sensitivity (hs)-CRP measurements available prior to interventions and 2 days post-intervention.
Assessment of sleep quality from wearable sleep metrics
To assess sleep metrics in the current study, we used data from the Oura ring (Generation 2), which continuously monitors physiological changes and sleep during the night. Derived features such as heart rate and heart rate variability (root mean square of the successive differences—RMSSD), as well as hypnogram, activity and readiness summaries for each night were provided by Oura in csv format. For each participant, we extracted daily sums of those derived features prior to vaccination and then calculated the average to obtain mean sleep duration during the 14-day baseline period. Wearable-derived Total Time in Bed (TIB) was defined as total time spent in bed throughout the night, summing the duration of Awake, Light, Rapid Eye Movement (REM), and Deep sleep stages recorded by the Oura ring. Wearable-derived Total Sleep Time (TST) was defined as total asleep time throughout the night, summing the duration of all sleep stages (Light, REM and Deep) recorded by the Oura ring. Wearable-derived sleep efficiency (SE) was calculated as the ratio of time spent asleep to time spent in bed.
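The derivation of TIB, TST, and SE from per-night stage durations can be sketched as below; the column names and example values are assumptions for illustration and do not correspond to the actual Oura export schema.

import pandas as pd

# One row per participant per baseline night; stage durations in seconds (hypothetical values).
nights = pd.DataFrame({
    "participant_id": [1, 1, 1, 2, 2, 2],
    "awake_sec": [1800, 2400, 1500, 3000, 2100, 2700],
    "light_sec": [14400, 13200, 15000, 12000, 12600, 11400],
    "rem_sec":   [5400, 6000, 4800, 4200, 4800, 3900],
    "deep_sec":  [4200, 3600, 4500, 3300, 3600, 3000],
})

# Total Sleep Time (TST) = Light + REM + Deep; Total Time in Bed (TIB) = TST + Awake.
nights["tst_h"] = (nights["light_sec"] + nights["rem_sec"] + nights["deep_sec"]) / 3600
nights["tib_h"] = nights["tst_h"] + nights["awake_sec"] / 3600
nights["sleep_efficiency"] = nights["tst_h"] / nights["tib_h"]

# Baseline value per participant = mean over the (>= 3) baseline nights.
baseline = nights.groupby("participant_id")[["tst_h", "tib_h", "sleep_efficiency"]].mean()
print(baseline)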
Questionnaire
The Pittsburgh Sleep Quality Index (PSQI), a self-reported questionnaire with 19 individual questions, was administered during PREP to estimate sleep quality and disturbance over the one-month time interval. The PSQI questionnaire was administered 7 days prior to vaccination or placebo injections to record the subjective sleep metrics at baseline. Participants were asked about their sleep habits, for example bedtime, getting up time, and sleeping troubles. Scoring of those questions yields 7 "component scores", each with a scale of 0–3, including subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbance, use of sleep medication, and daytime dysfunction . For example, question 4 was designed to capture subjective sleep duration by asking, " During the past month, how many hours of actual sleep did you get at night? ". The total of the 7 component scores generates a global sleep quality score ranging from 0 to 21. A high score suggests poor sleep quality. During the data collection, the daily questionnaire was collected via MyCAP, where participants were asked to self-report symptoms and any medical testing for infection that they had received .
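A minimal sketch of the PSQI global score calculation described above (seven component scores of 0–3 summed into a 0–21 global score) is shown below; the component keys are descriptive labels chosen for the example.

PSQI_COMPONENTS = [
    "subjective_sleep_quality", "sleep_latency", "sleep_duration",
    "habitual_sleep_efficiency", "sleep_disturbance", "use_of_sleep_medication",
    "daytime_dysfunction",
]

def psqi_global(components):
    """Sum the seven 0-3 component scores into the 0-21 global score (higher = poorer sleep)."""
    scores = [components[c] for c in PSQI_COMPONENTS]
    if any(s not in (0, 1, 2, 3) for s in scores):
        raise ValueError("each PSQI component score must be 0-3")
    return sum(scores)

example = dict.fromkeys(PSQI_COMPONENTS, 1)
print(psqi_global(example))  # 7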
During PREP, blood samples were drawn after overnight fasting for clinical biomarker measurement. Blood samples were collected on the same day as vaccination (prior to immunization), 2 days post-vaccination, and 7 days post-vaccination. Blood samples were sent promptly to CLIA-certified Quest Diagnostics for measurement of high-sensitivity C-reactive protein (hs-CRP). Additional blood was processed and stored at −80 °C for batch-wise analysis. Samples from participants whose results were returned as "out of range" by Quest Diagnostics were re-analyzed using the stored samples, which were thawed and prepared according to the standard operating procedure for the hs-CRP kit (part number: 05401607190) on the COBAS c111 (Roche Diagnostics, Indianapolis, IN).
In the current study, baseline wearable characteristics of the participants were compared across sleep duration groups using Fisher's exact tests for categorical variables and Kruskal–Wallis rank sum tests for continuous variables. To investigate post-vaccination associations between sleep duration and the incidence of hs-CRP elevation (change from baseline > 0 mg/L), wearable-derived TST was categorized into three groups: less than 6 h (<6 h), 6 to 7 h (≥6 h and ≤7 h), and more than 7 h (>7 h). Wearable-derived TIB was categorized by quartile. The level of hs-CRP elevation relative to baseline was calculated by subtracting the baseline hs-CRP level from the hs-CRP level measured on day 2 post-administration. An hs-CRP elevation event was defined as an increase in hs-CRP post-administration relative to baseline. Multivariable logistic regression was performed with hs-CRP elevation relative to baseline after administration as the dependent variable and sleep duration group as the predictor variable, adjusting for age, sex, body mass index (BMI), comorbidities, vaccination status (vaccination or placebo), and averaged daily steps. Multivariable linear regression was used when the change in hs-CRP relative to baseline was treated as a continuous dependent variable, with the same predictors in the models. The pre-specified confounders were factors known to potentially affect changes in CRP levels, based on literature review, domain knowledge, and preliminary univariate analysis. Participants were defined as having a comorbidity if they reported any medical condition included in the Charlson Comorbidity Index or any additional comorbidities; the detailed list of comorbidities is provided in Table S1 of the Supplementary Materials. The covariate averaged daily steps was estimated by averaging the daily step counts derived from the Oura ring over the 14-day baseline period prior to vaccination. Vaccination status was defined by whether the participant was administered a vaccination (PPSV23 or Typhim Vi) or placebo in the PREP study. Additionally, we fit a baseline model to evaluate whether hs-CRP elevation was attributable to inflammation induced by the vaccination/placebo injections; in the baseline model, elevated hs-CRP at baseline was defined as an hs-CRP level greater than 3 mg/L. All analyses were performed using Python software packages.
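The following sketch illustrates the core of this analysis (categorizing baseline TST, defining the hs-CRP elevation outcome, and fitting an adjusted logistic regression) using statsmodels; variable names and the simulated data frame are assumptions for illustration, not the study's actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated per-participant data; columns mirror the covariates named in the text.
rng = np.random.default_rng(0)
n = 188
df = pd.DataFrame({
    "tst_h": rng.normal(6.6, 0.8, n),
    "hscrp_baseline": rng.gamma(2.0, 0.8, n),
    "hscrp_day2": rng.gamma(2.2, 0.8, n),
    "age": rng.integers(18, 41, n),
    "male": rng.integers(0, 2, n),
    "bmi": rng.normal(25, 3, n),
    "comorbidity": rng.integers(0, 2, n),
    "vaccinated": rng.integers(0, 2, n),
    "daily_steps": rng.normal(8000, 2500, n),
})

# Outcome: any increase in hs-CRP from baseline to day 2 post-administration.
df["crp_elevated"] = (df["hscrp_day2"] - df["hscrp_baseline"] > 0).astype(int)

# Exposure: baseline TST in three groups (boundary handling is approximate here),
# with >7 h as the reference category.
df["tst_group"] = pd.cut(df["tst_h"], bins=[0, 6, 7, np.inf], labels=["<6h", "6-7h", ">7h"])

model = smf.logit(
    "crp_elevated ~ C(tst_group, Treatment(reference='>7h')) + age + male + bmi"
    " + comorbidity + vaccinated + daily_steps",
    data=df,
).fit(disp=False)
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```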
In a sensitivity analysis aimed at reducing potential effects of hs-CRP changes induced by other infections, we excluded 10 subjects who reported feeling ill and being tested for infections (e.g., COVID-19, influenza, and strep throat) in the daily survey (N = 179). When the cohort was restricted by excluding those subjects, none of the analyses changed substantively (results not shown). In a secondary sensitivity analysis, we restricted the study population to participants administered a vaccination (N = 127). Short wearable-derived TIB remained independently associated with hs-CRP elevation (1st quartile, OR = 5.17, 95% CI: 1.48–18.05; 2nd quartile, OR = 1.66, 95% CI: 0.55–5.01; 3rd quartile, 1 [reference]; 4th quartile, OR = 2.27, 95% CI: 0.77–6.72) (Table S9). However, short wearable-derived TST and sleep duration from the PSQI were not significantly associated with a higher incidence of elevated hs-CRP, likely due to insufficient sample size (not shown).
A total of 206 volunteers enrolled in and completed the study from February to December 2022. Participants were excluded from the analysis if they lacked either any measurements recorded by the Oura ring at baseline (N = 6) or hs-CRP measurements prior to vaccination and at 2 days post-vaccination (N = 12). This yielded a total of 188 participants for the final analysis (Fig. ). Participant baseline characteristics before intervention are displayed in Table S2. The median age was 23 years (interquartile range [IQR], 21 to 27 years), the median body mass index was 24.9 kg/m2 (IQR, 22.3 to 28.3 kg/m2), and the male-to-female ratio was 1. The median wearable-derived TST and TIB over the 14-day baseline prior to intervention were 6.60 h (IQR, 5.96 to 7.13 h) (Figure S1) and 7.79 h (IQR, 7.30 to 8.56 h), respectively. The median hs-CRP levels at baseline and 2 days post-intervention were 1.1 mg/L (IQR, 0.5 to 2.4 mg/L) and 1.8 mg/L (IQR, 0.7 to 3.3 mg/L), respectively. Overall, 58% of participants showed an increase in hs-CRP at 2 days post-intervention, with a median change of 0.1 mg/L (IQR, −0.1 to 1.0 mg/L) (Figure S1).
To investigate the relationship between objective and subjective sleep measurements and to inform the choice of sleep metrics for further analysis, we compared the objective wearable-derived TST from the Oura ring with the 7 component scores and the global sleep score from the subjective PSQI questionnaire (Table S3). Longer wearable-derived TST was significantly correlated with a lower self-reported sleep duration score (r = −0.341, p < 0.001) (Fig. A). Wearable-derived TST showed significant positive correlations with the sleep quality (r = 0.203; p = 0.005) and daytime dysfunction (r = 0.159; p = 0.030) scores from the PSQI responses (Fig. B, Fig. C). However, subjective sleep latency, habitual sleep efficiency, and the global sleep score exhibited non-significant positive correlations with wearable-derived sleep duration (all p > 0.05). The results were similar when comparing wearable-derived TIB with PSQI scores, except for significant correlations between TIB and the PSQI sleep latency (r = 0.155; p = 0.034) and disturbance (r = 0.213; p = 0.003) component scores (Table S3). However, no significant concordance was found between wearable-derived SE and PSQI scores (Table S3).
Participant baseline characteristics and changes in hs-CRP after intervention by wearable-derived TST group are displayed in Table . The distribution of wearable-derived TST was 26.1%, 44.7%, and 29.2% for <6 h, 6–7 h, and >7 h, respectively (Table ). We found a higher incidence of hs-CRP elevation after intervention (p = 0.031) (Fig. ) and a higher proportion of males (p < 0.001) in the shorter TST group (<6 h) compared with the other groups (Fig. A). In addition, longer sleepers tended to report comorbidities (p = 0.004) and psychological issues (p = 0.005) more often (Table , Fig. B, Fig. C). In a logistic regression model adjusting for potential confounders and using TST >7 h as the reference, those who slept less than 6 h had a higher incidence of hs-CRP elevation after vaccination or placebo injection than those who slept more than 7 h (<6 h: OR = 3.79, 95% CI: 1.45–9.91; 6–7 h: OR = 1.41, 95% CI: 0.66–3.02; >7 h: 1 [reference]), suggesting that shorter sleep duration was independently associated with a higher incidence of hs-CRP elevation after intervention (Table , Fig. ). The AUC of this logistic model for predicting hs-CRP elevation after intervention was 0.72 (Figure S2 A). Short sleep duration remained a significant predictor (p = 0.035) of increased hs-CRP levels in multivariable linear regression after adjusting for confounders (Table ). This significant association was no longer present when sleep duration was fit as a continuous variable. For reference, in the baseline model assessing the association between wearable-derived TST and hs-CRP elevation at baseline prior to vaccination, no significant difference in the odds of hs-CRP elevation was observed among the TST groups (Table S4, Figure S2 B).
The association between wearable-derived TIB and the incidence of hs-CRP elevation after intervention was also evaluated. Participant baseline characteristics and changes in hs-CRP after vaccination and placebo administration were compared across the wearable-derived TIB quartiles. The median wearable-derived TIB was 6.9, 7.5, 8.1, and 8.9 h for quartiles 1, 2, 3, and 4, respectively (Table S5). In an unadjusted analysis, increases in hs-CRP from baseline after vaccination and placebo administration differed across TIB quartiles (p = 0.02) (Table S5). In a logistic regression model adjusting for potential confounders and using the 3rd quartile of TIB as the reference, those who slept the least (1st quartile) had a higher incidence of hs-CRP elevation after vaccination than those who slept around 8.1 h (3rd quartile) (1st quartile, OR = 3.30, 95% CI: 1.25–8.73; 2nd quartile, OR = 1.37, 95% CI: 0.55–3.40; 3rd quartile, 1 [reference]; 4th quartile, OR = 1.12, 95% CI: 0.46–2.74), suggesting that shorter wearable-derived TIB was also independently associated with a higher incidence of hs-CRP elevation after vaccination/placebo administration (Table S6).
For comparison with the objective sleep duration results, we also assessed the association between PSQI sleep duration and the incidence of hs-CRP elevation after vaccination/placebo administration. Since only 3 individuals reported sleeping less than 5 h (component score = 3), we combined them with those who reported sleeping less than 6 h (component score = 2) into one group. The distribution of subjective sleep duration was 27.1%, 63.8%, and 9.1% for <6 h, 6–7 h, and >7 h, respectively (Table S6). Similarly, there was a higher incidence of hs-CRP elevation (p = 0.024) after vaccination/placebo administration in the shorter subjective sleep duration group (<6 h) compared with the other groups (Table S7). However, no significant differences between males and females were observed across the subjective sleep duration groups. In a logistic regression model adjusting for potential confounders and using a subjective sleep duration of more than 7 h as the reference, those who slept 6 h or less had a higher incidence of hs-CRP elevation after vaccination/placebo administration than those who slept more than 7 h (≤6 h: OR = 6.78, 95% CI: 1.30–35.4; 6–7 h: OR = 1.26, 95% CI: 0.62–2.56; >7 h: 1 [reference]). This suggests that subjectively assessed shorter sleep duration was also associated with a higher incidence of hs-CRP elevation after vaccination/placebo administration (Table S8).
In this study, we demonstrated how sleep metrics from commercially available wearables can be used in public health research, here within the context of the inflammatory response in a medium-sized clinical trial cohort of 188 healthy young adults over a 14-day baseline and a 2-day post-intervention window. First, we compared objectively recorded sleep metrics, including TST, SE, and TIB from the Oura ring, with subjective sleep scores from the PSQI. Our analysis indicated moderate correlations of wearable-derived TST and TIB with the PSQI sleep duration, sleep quality, and daytime dysfunction component scores. However, no significant correlations were observed between wearable-derived SE and PSQI scores. In addition, we identified both sex and health status as factors associated with wearable-derived TST and TIB. Furthermore, our findings revealed that shorter wearable-derived TST and TIB recorded by the Oura ring, as well as shorter subjective sleep duration from the PSQI at baseline, were associated with an increased incidence of hs-CRP elevation after vaccination or placebo injection, consistent with the correlation analysis showing a significant association between objective, wearable-derived sleep duration and subjective PSQI sleep duration. Importantly, this association was independent of known potential confounders, including age, sex, BMI, comorbidities, intervention (vaccination or placebo), and averaged daily steps. We also observed a significant inverse association between age and the incidence of hs-CRP elevation after intervention, which is expected given that aging is often associated with a diminished inflammatory response to vaccines due to immunosenescence. To our knowledge, this is the first report of an association between wearable-derived sleep duration and experimentally induced CRP elevation in healthy young adults. Given that vaccination or placebo injections can induce mild inflammation and that CRP is a sensitive inflammation biomarker, our results suggest that short sleep duration could represent a risk factor for inflammation triggered by such stimuli. We first explored the correlation between sleep objectively recorded by the Oura ring and subjective sleep assessments from the PSQI questionnaire. We observed a significant moderate correlation between Oura-recorded TST and TIB and the self-reported sleep duration sub-score of the PSQI, suggesting general agreement between the objective and subjective measurement of sleep duration, despite some discrepancies. This finding aligns with previous studies comparing objectively recorded sleep duration (using PSG, actigraphy, and Fitbit) with self-reported sleep duration. For example, Teo et al. reported a weak but significant correlation of 0.28 between wearable-derived sleep duration (Fitbit) and self-reported sleep duration. In addition, our analysis showed that wearable-derived sleep duration was significantly correlated with higher self-reported PSQI scores for sleep quality and daytime dysfunction, indicating that longer sleep duration was associated with poorer perceived sleep quality and greater daytime dysfunction. When comparing wearable-derived sleep efficiency (SE) with self-reported PSQI scores, no significant concordance was found. This finding is consistent with previous reports showing weak or no correlation between objectively measured and PSQI-derived habitual SE. One possible explanation for this discrepancy is that the distinction between subjective and objective measures might be more pronounced for sleep efficiency than for sleep duration.
While wearable devices like the Oura ring can track the ratio of time spent asleep to time spent in bed, this does not necessarily capture the full complexity of an individual's perception of the sleep experience, such as personal expectations, sensitivity, and psychological components. Our comparison between wearable-derived and PSQI-derived sleep metrics should inform investigators on the use of wearable-derived sleep measurements and help refine methodologies and improve the accuracy of sleep studies in future research. Our study protocol allowed us to investigate the relationship of wearable-derived sleep metrics with demographics, daily activities, and health status. We found that wearable-derived TST and TIB were significantly associated with sex and comorbidities, particularly psychological issues. Previous studies have reported that women tend to sleep longer than men across various study populations and age groups, attributed to a combination of physiological, biological, environmental, and social factors, which is consistent with our findings. Moreover, our univariate analysis revealed that young adults with health concerns had longer recorded TST and TIB, suggesting that they might exhibit altered sleep patterns. One possible explanation is that health conditions, such as stress and anxiety, could be closely linked with a variety of factors (e.g., poor sleep quality, dysregulation of the sleep–wake cycle, medication side effects), leading to increased sleep. These findings underscore the importance of considering possible moderators (e.g., sex, health factors) when assessing sleep behaviors and interventions. Although wearable-derived sleep duration has not previously been associated with vaccination-induced CRP levels, sleep duration has been associated with inflammatory biomarkers, including CRP, in various clinical studies with mixed findings. In a longitudinal study, shorter sleep and decreases in sleep duration were associated with higher levels of CRP over a 5-year period, even after adjusting for demographics and occupation. Both acute total and short-term partial sleep deprivation have been shown to raise basal levels of CRP in young healthy adults. However, a meta-analysis of cohort studies, including 72 studies with a total sample size of more than 50,000, showed that extremely long sleep duration, but not short sleep duration, was associated with higher levels of CRP and IL-6. Interestingly, another large-scale study in American adults found that CRP was elevated among extreme sleepers (>9 h or <5 h). However, other population-based studies of healthy men and women, as well as of young women, suggested that there was no association between sleep duration and inflammatory biomarkers. In our baseline model, no association was observed between sleep duration and elevated hs-CRP, using an hs-CRP cutoff of 3 mg/L, before intervention. However, we found that higher BMI was associated with an increased incidence of CRP elevation, consistent with previous findings that BMI is a significant factor in CRP elevation, even among young adults. A variety of microbially derived factors, such as muramyl peptides and endotoxin (lipopolysaccharide), and inflammatory mediators, such as the cytokine interleukin (IL)-1 and growth factors, have been identified as sleep-regulating factors. These factors allow the immune system to signal the brain and to interact with other substances involved in sleep regulation, such as neurotransmitters.
In our study of interventions in a placebo-controlled vaccine trial among healthy young adults, shorter sleep duration was associated with an increased incidence of hs-CRP elevation after intervention. One potential explanation is that a weakened immune system, caused by sleep deprivation, has a diminished ability to regulate inflammatory responses and an increased susceptibility to stimuli such as vaccinations or placebo injections. This could lead to more exaggerated inflammatory responses compared with individuals with adequate sleep, owing to the reduced efficiency of the immune system in counteracting threats. Previous vaccination studies have suggested that regular sleep supports the primary immune response to different vaccines. For instance, findings from a recent meta-analysis provide evidence that insufficient sleep (<5 h) around the time of vaccination reduces antibody responses, suggesting that maintaining a healthy sleep duration could improve vaccine effectiveness. Spiegel et al. reported that sleep restriction resulted in a diminished antibody response to influenza vaccination, with participants producing approximately half the antibody levels observed in the control group 10 days after inoculation. Cellular immunology studies of sleep loss also support changes in immune function that are relevant to host resistance. Sleep deprivation can markedly downregulate T cell production and shift the T-helper 1/T-helper 2 cytokine balance away from T-helper 1 predominance, as indicated by a lower interferon-γ/IL-4 production ratio. This is consistent with previous findings that individuals with insufficient sleep are more susceptible to viruses such as common cold viruses and coronaviruses. Another potential explanation is that the weakened immune system caused by sleep loss triggers the production of additional CRP to maintain and enhance antigen presentation after vaccination. A role for CRP in antigen presentation has been suggested because of its interaction with Fc-gamma receptor I, a high-affinity receptor for the Fc region of immunoglobulin, and its binding to various microorganisms. Evidence indicates that CRP plays a protective role against bacterial infections by activating the classical complement pathway and facilitating opsonization for phagocytosis. For example, Szalai et al. demonstrated that expression of human CRP in CRP-transgenic mice enhanced the long-term antibody response to Salmonella Typhimurium and decreased mortality. This effect was attributed to increased early clearance of salmonellae from the blood and reduced dissemination of bacteria to the liver and spleen in the early stage of inoculation. This study has several strengths. First, to our knowledge, this is the first study to investigate associations between wearable-derived sleep duration and the incidence of CRP elevation after vaccination and placebo administration in young adults. Second, we controlled for numerous potential confounders in the logistic regression models to decrease the possibility of residual confounding. Third, our study protocol allowed us to study how wearable-derived sleep relates to subjective sleep metrics and various factors, including demographics, health status, and lifestyle factors. This study also has several limitations. First, since we relied on wearable-derived sleep duration averaged over a 14-day baseline period, it is possible that participants did not consistently wear the devices during the night, potentially resulting in misclassification of sleep duration.
Second, residual confounding may remain despite adjustment for prespecified potential confounders. Third, the sample size was insufficient to further investigate the relationship within the intervention subgroup. Fourth, there was a limited number of extreme sleepers (<5 h or >9 h) in our study cohort. Finally, this study does not provide information about the mechanisms underlying the association between shorter sleep duration and an increased incidence of CRP elevation after such challenges. In summary, shorter wearable-derived TST and TIB, as well as shorter subjectively assessed sleep duration from the PSQI, were associated with a higher incidence of CRP elevation after vaccination and placebo administration in young adults, suggesting that sleep loss could be a risk factor for inflammation, particularly mild inflammation. In addition, our study highlights the potential and reliability of wearable-derived sleep metrics and provides novel insights into their applications in biomedical research and public health. Whether the CRP response associated with sleep deprivation represents a protective functional response of the immune system requires further study. Further studies are also needed to investigate the potential mechanisms linking shorter sleep duration with an increased incidence of CRP elevation after such challenges.
Supplementary Information.
Toothbrushing behavior over time: A correlational analysis of repeatedly assessed brushing performance | cf488285-d3d1-43bf-a0d3-207a9e3ca7b9 | 11658570 | Dentistry[mh] | Thorough oral hygiene is a preventive self-care behavior for maintaining oral health . However, the high prevalence of plaque-associated periodontal disease suggests that the brushing behavior of individuals is ineffective at ensuring oral cleanliness. Several studies have consistently shown residual plaque immediately after toothbrushing, particularly on gingival margin sections and inner teeth surfaces . Video-based observational studies indicate that this inadequate plaque removal might be a result of suboptimal oral hygiene skills . In these studies, brushing performance was assessed by the analysis of brushing time and distribution across teeth surfaces and sextants as well as brushing movements. The results revealed that many study participants did not brush at least one sextant and neglected the inner teeth surfaces; in some cases, the latter were omitted completely. Regarding brushing techniques, study participants frequently showed only horizontal brushing movements as opposed to applying more elaborate circular or vertical movements . These observational studies were of cross-sectional nature and provided limited insight into whether the toothbrushing behavior observed once is maintained over time. Survey studies indicate that toothbrushing is generally performed regularly as a part of daily routine , which indicates that the process has a habitual nature. Habits are established as a result of a repeated and patterned sequence of behavior that becomes routine and automatic over time . Children learn to brush their teeth at a very early age with adult support and supervision , and over time a toothbrushing habit is formed that is automatic and performed more-or-less subconsciously . This suggests that once the habit is established, it tends to remain unchanged over time. However, there is a paucity of empirical work on habit formation most of which focuses on specific aspects such as the cues that trigger the brushing behavior , the frequency of brushing, and the sequence of actions performed . In terms of toothbrushing performance itself, there is some data on the stability of specific aspects of the performance such as brushing duration or the brushing force. For example, it has been shown that children or students show little variation in brushing time, brushing force, or other brushing patterns, such as the surfaces of the teeth brushed or the brushing technique used . However, these studies are somewhat dated or based on only a small subsample of study participants. Furthermore, most involved only a few aspects of the toothbrushing behavior such as brushing duration or brushing force. Thus, it still remains uncertain, which further parts of the brushing performance are shown in a stable form or vary randomly from brushing to brushing event. This knowledge is required to alter aspects of brushing behavior and improve its effectiveness. The aim of the present study was to examine the degree to which behavioral aspects of toothbrushing performance remain stable over time or vary between brushing events. The hypothesis tested was that there is a high degree of concordance between two toothbrushing events assessed two weeks apart. This hypothesis is tested under two conditions: When study participants are instructed to perform to the best of their ability and as usual.
The study had two objectives and was registered with the German Clinical Trials Register ( www.drks.de ; ID: DRKS00017812; 2019). The present analysis focuses on the second objective, which involved assessing the stability of the behavioral parameters observed at two time points (T1 and T2) two weeks apart. The first objective was to analyze differences in brushing performance and in subjectively and objectively assessed oral cleanliness under two different brushing instructions; the respective analyses considered the data of T2 only and were published previously. The study was conducted according to the principles of the Declaration of Helsinki. The Ethics Committee of the Department of Medicine at Justus-Liebig-University in Giessen, Germany approved the study protocol (file no 254/18; 2019/01/23). Study participants were recruited from April 5, 2019 through July 17, 2019. The study participants received detailed information before the start of the examination and all participants provided written consent.
The study participants were university students from Giessen (Hesse, Germany), aged 18–35 years, who predominantly (for at least 2/3 of all brushing events) brushed their teeth using a manual toothbrush. Participants were recruited through the internal email distribution list of the university, which includes almost all students of the university, as well as through advertisements in a regional online magazine. Students were excluded if they: (1) were studying dentistry or human medicine, (2) had fixed orthodontic appliances, removable prostheses/dentures, dental jewelry, or oral piercings, (3) had a physical impairment that affected their oral hygiene behavior, (4) had used antibiotics within the three months prior to study entry, (5) had received dental prophylaxis within the four months prior to study entry, or (6) were pregnant. The sample size was calculated using the freely available power analysis program G*Power based on the first study objective (comparison of brushing to the best of one's ability vs. as usual) and resulted in a minimal sample size of n = 102 (see ). This sample size allowed for the detection of correlations of ρ > 0.72 with alpha = 0.05 and a power of 1 − β = 0.80.
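To illustrate how a detectable correlation of this magnitude can arise, the sketch below uses the Fisher z approximation for a one-sided test of H0: ρ ≤ 0.5 within one instruction group of roughly n = 51 (half of the total sample); this is an illustrative reconstruction under those assumptions, not the original G*Power computation.

```python
import math
from scipy.stats import norm

def min_detectable_rho(n: int, rho0: float = 0.5, alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest true correlation detectable against H0: rho <= rho0 (one-sided),
    using the Fisher z approximation with standard error 1/sqrt(n - 3)."""
    z0 = math.atanh(rho0)  # Fisher z of the null correlation
    shift = (norm.ppf(1 - alpha) + norm.ppf(power)) / math.sqrt(n - 3)
    return math.tanh(z0 + shift)

# Roughly 51 participants per brushing-instruction group (102 in total).
print(round(min_detectable_rho(51), 2))  # ~0.72
```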
The study was conducted at the Institute of Medical Psychology of the Justus-Liebig-University in Giessen, Germany. The procedures followed those previously described and provided in . Briefly, eligible students were informed of the study details and scheduled for two brushing sessions two weeks apart. The procedures at the two appointments (T1 and T2) were the same. While participants brushed their teeth at both T1 and T2, clinical data were not assessed at both appointments: disclosing of the teeth and assessment of dental plaque took place only at T2. To keep the two appointments as similar as possible and to prevent visible plaque staining at T1 from influencing brushing behavior at T2, a sham staining (using water presented as a fluorescent disclosing solution) and a simulated plaque assessment were conducted at T1. With regard to the first study objective (not the focus here), study participants were randomly assigned to two brushing conditions (brushing to the best of one's ability vs. brushing as usual). At both appointments, the study participants brushed their teeth according to their assigned brushing instruction. The same data assessors were used at both appointments and were blinded to the respective brushing condition. Any interaction with the study participants was conducted in a fully standardized manner. For both brushing sessions, the study participants were placed in front of a mobile washbasin and a computer tablet with a front camera mounted on a tripod. The tablet served as a mirror for the participants as well as a device for recording toothbrushing performance. In addition, toothbrushing was recorded by two side cameras mounted on the walls in case the tablet camera did not fully capture the brushing event. The participants were provided with a standard manual toothbrush (Elmex InterX short brush-head, medium; CP GABA, Hamburg, Germany) and toothpaste (Elmex; CP GABA). Dental floss (waxed and unwaxed; Elmex; CP GABA), super floss (Meridol Special-Floss; CP GABA), and interdental brushes (Elmex interdental brush sizes 2 and 4; CP GABA) were provided on a table beneath the basin. After receiving their brushing instructions, the participants were asked to begin brushing their teeth, and their brushing performance was recorded.
The video-based analysis of the brushing behavioral parameters was conducted according to the procedure described in previous studies. A detailed description is provided in the supplemental material. The videos were analyzed by independent calibrated observers using observational software (Interact 18; Mangold International; Arnsdorf, Germany). Videos from a previous study were used for calibration. The criterion for successful calibration was an intra-class correlation (ICC) ≥ 0.90 on five consecutively analyzed videos for each of the observed behavioral parameters. To ensure the reliability of the video analysis, an additional 10 videos of the study participants were analyzed in duplicate by two independent observers. These double codings returned a high agreement between the respective independent observers (ICC ≥ 0.90 for all observed behavioral parameters). The behavioral parameters included were: a) total tooth contact time (tct; length of time (seconds) that the brush touched the teeth), b) tct and proportional distribution of tct across tooth surfaces (occlusal, outer, and inner surfaces), c) tct of circular or horizontal brushing movements on outer surfaces and vertical and horizontal movements on inner surfaces, and d) an overall quality index for the distribution of tct across sextants and tooth surfaces (QIT-S; this index represents a rank-scaled measure describing the extent to which the sextants were brushed on the outer and inner surfaces).
The present data analysis was conducted to test the hypothesis that the described toothbrushing parameters observed at T1 and T2 are highly correlated, regardless of whether the study participants brushed to the best of their ability or as usual. According to Guilford , correlation coefficients between 0.70 and 0.90 indicate a high correlation, whereas correlations above 0.90 describe a very high correlation and a reliable relationship. A re-test correlation of r > 0.70 is sufficiently high to consider brushing behavior stable over time. In addition to the product-moment correlation coefficient (Pearson), rank correlation coefficients (Spearman) were calculated to account for potential outlier values. In line with the research hypothesis, the following hypothesis pair was independently statistically tested for each of the behavioral parameters under observation: H0: ρT1/T2 ≤ 0.5 and H1: ρT1/T2 > 0.5. The respective means, SDs, and effect sizes with 95% confidence intervals of the differences between the means at T1 and T2 were reported. Effect sizes were calculated for correlated data according to Dunlap et al. . Cohen stated that effect sizes of d ≥ |0.2|, |0.5|, and |0.8| are considered small, medium, and large, respectively. The relationships between T1 and T2 for the categorical data assessed by the QIT-S index were analyzed by chi²-tests. All statistical analyses were conducted with the statistical software package SPSS (IBM SPSS Statistics for Windows, Version 28, IBM, Armonk, New York, USA).
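As an illustration of these calculations (the study itself used SPSS), the following Python sketch computes the T1–T2 Pearson and Spearman correlations for one brushing parameter and a repeated-measures effect size following Dunlap et al., d = t · sqrt(2(1 − r)/n); the example data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 52  # roughly one brushing-instruction group

# Simulated tooth contact time (seconds) at T1 and a correlated value at T2.
tct_t1 = rng.normal(150, 40, n)
tct_t2 = 0.9 * tct_t1 + rng.normal(15, 15, n)

r_pearson, p_pearson = stats.pearsonr(tct_t1, tct_t2)
r_spearman, p_spearman = stats.spearmanr(tct_t1, tct_t2)

# Effect size for the T1-T2 mean difference in a correlated design (Dunlap et al., 1996).
t_stat, p_paired = stats.ttest_rel(tct_t2, tct_t1)
d = t_stat * np.sqrt(2 * (1 - r_pearson) / n)

print(f"Pearson r = {r_pearson:.2f} (p = {p_pearson:.3f}), Spearman rho = {r_spearman:.2f}")
print(f"Effect size of the T1-T2 difference: d = {d:.2f}")
```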
Of the 106 study participants who finished the study, one person in the best-brusher group was excluded from the analyses due to an unusual brushing behavior (tooth contact time exceeded 15 minutes and deviated from the mean value by more than four SD). The demographic and clinical data of the study sample are provided in . The descriptive data and results of the correlational analyses of the observed brushing parameters for each brushing group are shown in Tables and . Scatterplots for each behavioral parameter are provided in . The descriptive analyses of the mean differences between T1 and T2 revealed small or spurious effects according to the nomenclature of Cohen (all d < |0.24|). Under both brushing instructions, the correlation coefficients were similar in magnitude and varied between r = 0.72 and r = 0.93. Total brushing time showed the greatest correlations in both groups (r = 0.90 and r = 0.93). Correlations for the absolute time spent on the tooth surfaces (occlusal, inner, and outer surfaces) were r ≥ 0.83 in both groups. Similar values were observed for the percentage of time used for brushing the respective surfaces (all r ≥ 0.77). Furthermore, brushing movements were similar for the two groups, with correlations varying from r = 0.72 to r = 0.89. The lowest correlations were observed for the time during which horizontal brushing movements were used on the outer surfaces (r = 0.72 and r = 0.73). All reported correlation coefficients were statistically significant (p < 0.001). The results of the QIT-S are shown in . Significant relationships were found between T1 and T2 in the best-brusher group for both the outer surfaces (chi² = 16.65, p < 0.001) and the inner surfaces (chi² = 146.92, p < 0.001). Identical QIT-S values at T1 and T2 were found for the outer surfaces in 94% of individuals and for the inner surfaces in 42% of individuals. Minimal differences (one score point), still indicating similar QIT-S values, were found for the outer and inner surfaces in approximately 4% and 35% of the participants, respectively. In the as-usual brushing group, the relationship between T1 and T2 was also statistically significant (outer surfaces: chi² = 30.23, p < 0.001; inner surfaces: chi² = 114.01, p = 0.001). Identical QIT-S values at T1 and T2 were found in 76% and 34% of individuals for the outer and inner surfaces, respectively. Differences of one score point between T1 and T2 were found in 7.5% of individuals in this group for the outer surfaces and in 30% of individuals for the inner surfaces.
Previous studies observing toothbrushing behavior have been of a cross-sectional nature; therefore, the aim of the present study was to examine whether various aspects of this behavior are stable over time. The study participants brushed their teeth at two brushing sessions two weeks apart. One half of the students brushed their teeth to the best of their ability, while the other half brushed as usual. As toothbrushing is considered a routine and automated behavior , it was expected that its performance would be similar at both time points T1 and T2 (i.e., that a person's position in the group would remain stable over time). The analyses of the effect sizes of the comparisons between T1 and T2 showed that the average behavior at T2 differed only slightly from that at T1. A small increase in brushing time was observed in both groups (Tables and ), which may have been a carry-over effect of the procedures of the first brushing session. At T1, plaque measurement was simulated by (sham) staining after brushing. Although this sham procedure did not give the subjects any realistic feedback about plaque on their teeth, it may have encouraged them to make a greater effort to clean their teeth at T2, thereby increasing their brushing time. However, the differences revealed were small (d ≤ 0.23). Considering the correlations between the observed toothbrushing parameters, a high concordance of the brushing behavior was observed over time. The null hypothesis (H0: ρT1/T2 ≤ 0.5) was rejected for all parameters. In particular, the observed absolute time values were strongly correlated, indicating a stable behavior. This correlation was most pronounced for the total brushing time (i.e., tooth contact time) in both groups. Similarly, the absolute time values for brushing the occlusal, outer, and inner surfaces were highly stable over time. In addition, the proportional distribution of time across the tooth surfaces showed a close relationship between T1 and T2. The proportion of brushing time spent on the respective tooth surfaces was similar to that observed in prior studies of students and adults, with only minor variations . These results suggest that the way individuals brush their teeth is a rather stable behavior, which is even more evident when considering the brushing technique (i.e., the brushing movements used by the study participants). It has already been reported that, among the different aspects of toothbrushing, the applied brushing technique seems to be the one with the most pronounced habit when repeatedly observed . In previous observations, the majority of individuals were shown to apply circular and horizontal movements on the outer tooth surfaces and vertical and horizontal movements on the inner surfaces . Therefore, the present analyses focused on these brushing movements. Although somewhat less pronounced than for the other brushing parameters, a clear relationship was found between T1 and T2. Finally, the results for the QIT-S showed a concordance between T1 and T2. These results differed slightly depending on whether the outer or inner surfaces were considered and whether the study participants brushed their teeth to the best of their ability or as usual. Strong relationships were found for the outer surfaces in both groups. Although slightly less pronounced in the as-usual brushing group, the alterations of the QIT-S shown by some participants were small.
The results of the QIT-S for the inner surfaces were somewhat different, regardless of how the participants brushed their teeth. In contrast to the outer surfaces, the sextants on the inner surfaces were brushed for less time, and some participants did not brush the inner surfaces at all. This neglect of the inner surfaces has been observed in several observational studies and could explain the variation observed in this aspect of the brushing behavior. A strong behavioral habit for brushing the inner surfaces appears not to be established. Nevertheless, most of the participants showed values scattered around the diagonal, representing complete concordance. In summary, the data from the present study showed that the toothbrushing behavior exhibited during the two brushing sessions appears to be stable. This corroborates the habit assumption of toothbrushing behavior , which was expected since most people perform this behavior regularly on a daily basis from an early age. Over time, it becomes a routine behavior that can be performed largely automatically. To date, research on habit formation in the context of toothbrushing behavior has focused on aspects such as when or whether this behavior occurs . In contrast, how individuals perform this behavior and whether this performance represents a habitual pattern has received little attention. The results of the present study show that toothbrushing behavior appears to be generally routine, with little variation. These results are consistent with the intra-individual toothbrushing patterns observed in a cross-over study in which students brushed their teeth with a manual and a powered toothbrush . In that study, there was no time interval between the repeatedly observed toothbrushing sessions; however, when brushing with a different brushing device, study participants tended to show similar behavioral patterns . The development of a stable individual pattern of toothbrushing performance implies that this behavior encompasses both strengths and deficits. With regard to the deficits, such a stable behavior is much harder to change than a brushing behavior that is dysfunctional only by chance. The results indicate that increasing the frequency of toothbrushing is likely insufficient to improve oral hygiene and that preventive measures should consider the difficulty of changing established habits. Healthcare professionals and patients must recognize this challenge when addressing oral hygiene deficiencies. The present study has both strengths and limitations that should be discussed. One of the strengths is the systematic analysis of various aspects of toothbrushing behavior, such as total brushing time, distribution across tooth surfaces and sextants, and brushing technique, which provides a more thorough understanding of the behavior than assessing only one or a few factors. However, other important aspects of toothbrushing behavior were not addressed here, including brushing force, the sequence of brush positions, and the number of times brushing areas are changed. Previous reports have shown, however, that these aspects are similar among different brushing sessions . The use of thoroughly trained and calibrated video observers and the high degree of standardization in the performance of the study are strengths to be highlighted. The study procedures at T1 and T2 were kept as constant as possible to ensure that toothbrushing was performed under the same conditions by each participant.
To avoid visual feedback on remaining plaque at T1, which could have resulted in a change in brushing behavior, a sham staining procedure was performed at T1. However, it cannot be excluded that this sham procedure itself caused a change in behavior, and it may have been responsible for the longer brushing times at T2 compared with T1. Another limitation concerns the external validity of the results. First, the present study analyzed university students only, and the generalizability to other age groups is undetermined. However, toothbrushing routines are acquired through regular and frequent practice; therefore, older persons could show an even more pronounced habit. Indeed, it would be interesting to study children in this regard. Second, the stability of this behavior was demonstrated under laboratory conditions, and it is unclear whether individuals exhibit the same level of stability when brushing their teeth outside of this environment. Further studies analyzing toothbrushing in a domestic setting would be instructive.
The findings showed that repeatedly assessed toothbrushing performance exhibited a high degree of stability. The observed total brushing time and distribution of time to tooth surfaces and brushing movements were highly correlated. Deficits in brushing behavior such as neglecting the inner tooth surfaces and omitting entire sextants were similarly evident at both time points of observation.
S1 File Supplemental material. (PDF)