Survey on the working, training, and research conditions of resident physicians in internistic and rheumatological continuing education—BEWUSST
c4d85220-fa51-4e15-a498-654580e21f82
10879383
Internal Medicine[mh]
The questionnaire was created using the QuestionPro system (QuestionPro GmbH, Berlin, Germany). All questions could be answered online between May 4, 2022 and October 31, 2022. The survey was advertised via the DGRh newsletter and the social networks of the Young Rheumatology Working Group (Arbeitsgemeinschaft Junge Rheumatologie; AGJR). All rheumatologists authorised to provide rheumatological training received a separate letter raising awareness of the survey and motivating them to take part. The survey included a total of 102 questions on the following topics: basic data (n = 15), working conditions in everyday professional life (n = 5), continuing medical education and training (n = 17), compatibility of work and family (n = 10), compatibility of work and research (n = 16), perspectives as a rheumatologist (n = 14), practical activities (n = 22) and personal opinions/comments (n = 3). After completion of the evaluation, the survey results were extracted from the QuestionPro system and transferred to an Excel spreadsheet (Microsoft Excel 2016, Redmond, WA, USA). All data were available as absolute numbers or percentages. Ethics: The data analysis was carried out according to the regulations of the Ethics Committee of the University Hospital of Jena, Germany. Statistics: After summarising the data, descriptive statistics were performed. The statistical analyses were carried out with the software IBM SPSS Statistics version 27.0 (IBM Corp., Armonk, NY, USA) for Windows. Results are reported in the following areas: basic data; working conditions in everyday professional life; continuing medical education and training; compatibility of career and family; compatibility of work and research; perspectives as a rheumatologist; and practical activities. At the time of the survey, 91.2% of the participants had not yet completed their specialist training; 40.2% of the participants were supervised by a mentor (specialist in internal medicine and rheumatology) at their workplace. Arthrosonographies were performed under supervision or independently by 41.2% and 86.3% of the colleagues, respectively. The following data were collected for vascular ultrasound: under guidance 35.3% versus independently 43.1%. More participants, 60.8% and 73.5%, were able to perform joint punctures under guidance or independently, respectively. Regarding capillary microscopy, 29.4% of colleagues carried out the examination under supervision (54.9% independently). In total, 53.9% had the opportunity to acquire competence in rheumatological/immunological laboratory diagnostics at their own training centre, and 30.4% of the participants estimated that they already had decision-making skills in laboratory diagnostics at the time of the survey. Regarding radiological and nuclear medicine imaging procedures, the respondents had already received training in the following procedures: 48.0% X-ray diagnostics, 35.3% computed tomography, 38.2% magnetic resonance imaging and 29.4% nuclear medicine imaging procedures (Table ). A total of 102 trainees (women n = 68 [66.7%] and men n = 34 [33.3%]) took part in the survey. The most frequent age range was 30 to 34 years (40.2%). The majority of participants worked in the German federal states of Bavaria (20.6%) and North Rhine-Westphalia (19.6%). Two children were living in the household of 38.2% of the respondents; 91.2% of the participants had not yet completed any specialisation. A total of 46.1% and 33.3% of the participants worked on a normal ward and in a hospital outpatient department, respectively.
In 71.6% of the respondents, the hospital was in public ownership; 47.1% worked at a university clinic. The primary career goal of 38.1% of the participants was to establish a private practice, 27.3% aimed to become a senior physician at a hospital, 16.5% to pursue an academic career with habilitation/professorship and 3.4% to become a department head/chief physician (Fig. and Table ). The survey revealed that 36.3% and 11.8% of the participants were “rather satisfied” or “very satisfied”, respectively, with their professional situation. The factor most frequently causing dissatisfaction was a heavy workload (14.6%), without working time being precisely defined. The proportion of activities in the daily work routine was as follows: 42.7% work with and on patients, 31.8% patient-related work and 25.5% non-medical or non-patient activities (Table ). A total of 75.5% of participants worked full-time and 24.5% part-time. Of the part-time colleagues, 60% felt that they were disadvantaged in their continuing medical education and training. A total of 81.4% of the participants were employed at an institution with full authorisation for training as a specialist in internal medicine and rheumatology; 36.3% of the respondents had received an employment contract for the entire period of their specialisation from the outset; 22.5% stated that a structured training curriculum was available at their institution, and 42.2% reported that a transparent presentation of rotations was provided by the hospital management. Training was completed within the scheduled time by 68.6% of participants. The use of external training opportunities was considered helpful by 55.9% of participants (Table ). The following results were obtained regarding satisfaction with the compatibility of career and family life: 6.9% “fully agree”, 27.8% “rather agree”, 29.2% “partly agree” and 22.2% “somewhat disagree”. Regarding the shift of tasks in favour of work (23.6% “fully agree”, 40.3% “rather agree” and 23.6% “partly agree”) and the support from colleagues (12.5% “fully agree”, 36.1% “rather agree” and 29.2% “partly agree”), a similar picture emerged. According to the participants, flexible organisation of working hours (21.4%), home office (19.9%), less overtime (19.4%), better planning of working hours (14.4%), and a childcare place (4.5%) or all-day care (4.0%) would improve the balance between work and private life (Table ). In total, 69.6% of the participants had a doctorate, of whom 54.9% continued to work in science. The scientific work mainly concerned clinical topics (73%). Scientific work was primarily carried out after regular working hours (71.6%). On average, the participants were listed as coauthors of one scientific paper per conference. Over a quarter (28.4%) of the respondents were aiming for habilitation. Furthermore, 61.8% of the participants were involved in student teaching (Table ). For 77.5%, training in the outpatient sector was of interest. In the private practice sector, 13.7% of the respondents had completed a training period. After completing their specialist training, 52.9% wanted to combine clinical and outpatient work; 64% planned to work as employees. For 74.5%, working in a medical care centre as an employed physician was an option; 82.4% of the colleagues had not yet had any contact with a rheumatologist in private practice (Tables and ). The BEWUSST survey focuses on the working, training and research conditions of resident physicians in continuing medical education in rheumatology.
The results of the survey presented here provide important insights into the situation of colleagues in continuing medical education. A total of 102 participants from all over Germany took part in the survey. In the 2022 survey on training positions, 478 positions for internal medicine and rheumatology were evaluated, of which 82.8% (n = 396) were filled, so approximately 25% of residents in rheumatology in Germany took part in the survey. It should be noted that half of the respondents worked at a university hospital, so training positions in the non-university sector are possibly insufficiently represented in the statements. On the other hand, university hospitals have the highest number of clinical training positions (45%; 177 of 391 clinical training positions). Two thirds of the participants were female, and over 90% were under 40 years of age. Three quarters worked full-time, and one quarter part-time. Comparable results were found in the surveys of the German Society of Internal Medicine (full-time 87% and part-time 13%), the German Radiological Society (full-time 83% and part-time 17%), the Association of Residents in Training for Urology (full-time 90% and part-time 10%) and the Society of Gynaecology and Obstetrics (full-time 70% and part-time 30%). Due to the anonymity of the survey, a representative picture of the current situation of residents in rheumatology training can be assumed. It should be emphasised as positive that almost half of the participants (48.1%) expressed satisfaction with their current work situation. This should be interpreted in particular against the background of a higher level of satisfaction among the respondents compared with the results in internal medicine (38%), urology (44%) and gynaecology (40%). In addition to the high workload, the proportion of non-physician activities and the impact on personal life were cited as factors that negatively affect the work situation. The highest priority was given to the issue of reconciling family and private life with working life. Another important result of the survey is that residents in rheumatology training stated that they suffer from considerable work pressure and lack of time. As this can have a negative impact on their physical and mental health and lead to burnout and other health issues, it seems important to take measures to improve the work–life balance and to provide better support systems for residents. It should be emphasised that more than three quarters of the participants stated that there is no structured training curriculum with plannable learning content/rotations at their training location. Again, the results of the current survey are in line with the previously published survey results for general internal medicine (78%), radiology (63%), urology (70%) and gynaecology (82%). This appears to be an area of concern with a clear need and potential for optimisation. The model curriculum of the DGRh can serve as a basis for the implementation of standardised and structured training in the field of internal medicine and rheumatology in Germany. The implementation of a training curriculum is associated with an increase in satisfaction. The additional use of the courses of the rheumatology training academy can further improve rheumatological education in clinics and practices. Overall, 70% of the respondents were doctoral graduates, a significantly higher proportion than in internal medicine (52%), radiology (59%), urology (44%) and gynaecology (54%).
Moreover, 55% of the respondents were active as scientists (internal medicine 19%, radiology 51%, urology 39% and gynaecology 42%) and worked mainly in clinically oriented research areas. However, the high proportion of scientifically active training residents must be seen against the background of a relative overrepresentation of participants from university institutions in our survey. Scientific work was primarily (72%) done after working hours. To strengthen scientific work, research periods should be established, with time off from clinical obligations. The survey of the AGJR in 2019 indicated that only 19% of respondents reported regular mentoring. In the current survey, this was true for 40% of the participants, which means that the situation has improved compared with the AGJR survey. However, the conditions regarding mentoring for the performance of rheumatological diagnostics (e.g., joint punctures) should be further improved to enhance the quality of training. After completion of specialist training, half of the respondents aspired to a combined in- and outpatient activity, which they would prefer to carry out as salaried employees; 74% of the survey participants see their future in a position in a medical care centre. These data are comparable to the results of a survey of rheumatology training residents from 2020 in Saxony-Anhalt, Saxony and Thuringia. Taking over an independent rheumatology practice does not seem to be a desirable career alternative for many. According to the survey, the compatibility of family and career, namely the work–life balance, possibly as a part-time employee, is an important factor in professional activity. Working in a medical care centre is one option for implementing a combined in- and outpatient activity, e.g., with part-time employment. Furthermore, outpatient specialist care offers an additional supplement for rheumatology training, since it allows combined in- and outpatient practice. To summarise, appropriate professional offers should be created for future specialists so that rheumatology remains attractive for training residents. The choice of an internal medicine speciality is often made at the end of medical school, and the possible lifestyle in the corresponding speciality plays an important role in the choice of speciality in many cases. In this context, the lack of independent rheumatology chairs or rheumatology departments at many university hospitals in Germany is critical, as good teaching during students' medical education will clearly motivate young colleagues to choose this speciality for continuing medical education. In addition, such a survey should be repeated at regular intervals to be able to evaluate the effect of changes in training and, if necessary, to carry out a renewed adaptation. In summary, important findings can be drawn from the results of the BEWUSST survey that can lead to adjustment and continuous optimisation of rheumatological training in Germany, incorporating the perspective of young colleagues in training. Consideration of the abovementioned negative implications is needed in light of an increasing shortage of internal rheumatologists and the rising prevalence of inflammatory rheumatic diseases. In order to ensure continued adequate rheumatological care, the number of training positions must be increased urgently. Half of the participating future rheumatologists are satisfied with the working conditions in the specialty.
Mentoring should be further expanded, e.g., by using the model curriculum with the implementation of a structured training programme. With regard to the preference for combined clinical and outpatient work, the existing options (e.g., outpatient specialist care) should be expanded or new professional fields of activity established so that the specialty also remains attractive for the next generation. Raw data from the survey on the working, training and research conditions of resident physicians in internistic and rheumatological continuing education—BEWUSST
Joint effect of temperature and insect chitosan on the heat resistance of Bacillus cereus
c5887246-2dbb-4ffb-a8a2-3654dc74cd9a
9518883
Microbiology[mh]
Bacillus cereus is present in many foods due to its ubiquitous nature. This microorganism is one of the top ten pathogens responsible for many foodborne diseases in humans. According to the latest EFSA and ECDC report, there is strong evidence that B. cereus was involved in 38 outbreaks, and weak evidence of involvement in 117 outbreaks, out of a total of 155 outbreaks reported in 2019. Some recent outbreaks in non-EU countries have also been associated with this pathogen; 45 people were affected in an outbreak in a restaurant in Canberra (Australia) in 2018 and 200 students in an outbreak in a school in China in 2018. Bacillus cereus causes two types of food poisoning, one of an emetic nature and the other of a diarrheal nature. On the one hand, the diarrheal syndrome is a gastrointestinal disorder caused by the ingestion of B. cereus spores present in food; at a given dose, there is an appreciable probability that cells cross the stomach barrier and implant themselves in the small intestine. Once they germinate in the small intestine, they produce enterotoxins that cause disease. On the other hand, the emetic syndrome is associated with the production of cereulide toxin in food contaminated with spores that germinate and produce the toxin, resulting in foodborne poisoning. In general, this microorganism is associated with complex food products that may include rice as a component; however, other rice-based products and farinaceous foods such as pasta and noodles are also frequently contaminated and involved in cases of B. cereus poisoning. The ability of B. cereus to form spores and biofilms enables its persistence in various ecological niches and food products, resulting in its presence in processed foods such as cooked rice. Furthermore, it is the bacterium most commonly present in rice and rice-based products. Rice is a basic cereal in many diets and is widely consumed by the general population given its ample supply of nutrients and its relatively low cost. This cereal is one of the most important staple crops, feeding almost half of the world's population. Starch is the most abundant component of a rice grain, constituting about 80% of the dry weight of a brown rice grain and approximately 90% of a milled rice grain. Rice also provides an important variety of micronutrients, including vitamins such as niacin, thiamine, pyridoxine or vitamin E, and minerals such as potassium, phosphorus, magnesium and calcium. These conditions provide a very good substrate for B. cereus growth and subsequent toxin production. This cereal is habitually contaminated by B. cereus spores throughout all production stages, from cultivation to the later stages of processing and consumption. It is believed that the primary habitat of emetic strains could be related to the roots, tubers and mycorrhizae of some plants such as rice, which could explain the generally higher prevalence of these strains in carbohydrate-rich foods. In fact, starch has been shown to promote B. cereus growth and emetic toxin production. This would explain why most outbreaks of emetic disease are associated with starch-rich farinaceous foods. Some works have pointed out that the current cooking processes for rice and rice derivatives do not inactivate B. cereus spores and, consequently, they can germinate and grow in food if it is not stored properly. Different control measures have been proposed to control Bacillus cereus in foods.
As an additional strategy, heat treatment can be combined with other control measures (hurdle technology). In this respect, chitosan from different sources (crustacean or fungal) has received attention as an antimicrobial. It is a polysaccharide with well-documented antibacterial activity towards vegetative cells, which has already been applied effectively in edible chitosan films and in food packaging applications. According to Van Huis et al., rearing insects is a sustainable activity that is more environmentally friendly than fishing or traditional farming. Besides, as indicated by Mohan et al., the extraction of chitin and chitosan from insects is more advantageous in terms of extraction methods, chemical consumption, time and yield compared with existing sources. Existing chitin resources have some natural challenges, including insufficient supplies, seasonal availability, and environmental pollution. As an alternative, insects could be utilized as unconventional but feasible sources of chitin and chitosan. According to previous in vitro studies, insect chitosan could be used as an antimicrobial instead of chitosan from other sources. Based on those results, it could also be applied as an additional control measure during the heat processing of rice, thus favouring the destruction of B. cereus spores by affecting their heat resistance. Currently there are no data on the joint effect of insect chitosan and heat on the heat resistance of B. cereus spores, since chitosan from other sources is what has been used as a natural antimicrobial in preservation processes. The purpose of this study is to determine how B. cereus spore inactivation is affected by the presence of insect chitosan during heat treatment. This knowledge can pave the way to better control of B. cereus during and after the cooking of rice and its derivatives. The Methods covered the following: microorganisms and sporulation procedure; substrate preparation; capillary filling and heat treatment; and statistical analysis. The Bacillus cereus CECT 148 strain used in this study was obtained from the Spanish Type Culture Collection (CECT) (Valencia, Spain). The strain was reactivated in nutrient broth by shaking for 24 hours at 32°C; subsequently, 0.5 mL of the B. cereus culture was inoculated into 20 Roux flasks (Fisher Scientific SL, Madrid, Spain) with Fortified Nutritive Agar (Scharlab, Barcelona, Spain) and incubated at 30°C. When the sporulation level reached approximately 90%, the spores were collected. Spore harvesting was performed using a modified metal Drigalski loop (Deltalab, Barcelona, Spain), gently sweeping the agar surface and washing it with double-distilled water. The collected suspension was centrifuged at 2500 g for 15 minutes at 5°C; the supernatant was removed, the pellet was resuspended in 5 mL of double-distilled water and centrifuged under the same conditions as previously described; this process was repeated 4 times. Finally, the spores from the pellet were stored at 4°C in distilled water. The rice solutions, from cooked and lyophilized rice supplied by a local company, were prepared by dissolving 0.4 g in 19 mL of H2O. All solutions were heat sterilized. After sterilizing the rice solution, 1 mL of the spore solution was added and homogeneous distribution was ensured by vortexing. Two solutions of rice with insect chitosan from Tenebrio molitor (150 and 250 μg/mL chitosan) (ecoProten, Cordoba, Spain) were used for the heat resistance studies on the food matrix. Those concentrations were chosen because, in previous studies carried out by Valdez et al., 250 μg/mL showed a higher antimicrobial effect than the 150 μg/mL concentration.
The pH was adjusted to between 6.8 and 6.9 using NaOH. Finally, 1 mL of the spore suspension was added and homogeneous distribution was ensured by vortexing. The resulting 20 mL of solution containing spores and chitosan were poured into a 50 mL sterile beaker. In all cases the spore concentration in the resulting rice solution was 10^8 spores/mL. The capillary tubes, with one end closed, were supplied by Vitrex, reference 217913 (1.50 x 2.00 x 100 mm). For the heat resistance study, capillaries were filled using a drying chamber with a vacuum pump. Once the vacuum was achieved, it was broken and the rice solution rose through the capillaries, which were filled to 2/3 of their capacity. After that, the solution column was centred in the capillaries, they were removed from the chamber, and the open end was closed with a quick-drying silicone. Before the heat resistance study, the spores were heat activated in order to create the conditions for them to germinate and grow in the culture medium. For the activation of B. cereus spores, the capillaries were placed in hooked racks designed for this type of study. The racks with the capillaries were immersed in a water bath (HAAKE N3) at 80 ± 0.5°C for 10 minutes. Both the rice solution alone and the rice solution containing chitosan were heat treated at 90, 95, 100 and 105°C for different exposure times. A silicone oil bath (HAAKE DC5) was used for this treatment. For time zero (0) at each treatment temperature, a capillary rack was removed after spore activation and was not heat treated, thus serving as the control. The rest of the racks were withdrawn from the activation bath and immediately immersed in the oil bath at the selected temperature. A rack was removed at each time interval and immersed in ice water to stop the treatment. Before the solution was plated, the capillaries were cleaned with 96% ethanol and, using forceps, the ends were split to extract the solution. The content of eight capillaries was deposited into sterile Eppendorf tubes. With the solution recovered from the capillaries, two series of serial decimal dilutions (series A and B) were made, up to 10^-6, in duplicate. From each decimal dilution, 100 μL was plated in duplicate on nutrient agar (Scharlab, Barcelona, Spain) enriched with 1 g/L starch (Scharlab, Barcelona, Spain) and incubated for 18–20 hours at 30°C. After the incubation time, a manual count of B. cereus colonies was carried out. Spore aggregation was prevented by vigorous shaking with glass beads before taking each sample for plating. All statistical analyses, including the one-step nonlinear regression, were performed using Statgraphics Centurion XVI software (Addinsoft SARL, New York, NY, USA). Non-linear regression is a powerful technique for standardizing data analysis; it allows the D and z values to be obtained from the survival curves in a single step. In the present work, the heat resistance of Bacillus cereus was studied in a rice substrate without insect chitosan and with insect chitosan at two concentrations. The survival curves at each temperature tested in the study can be seen in . In general, at all the temperatures studied, the inactivation of B. cereus spores in the rice substrate without chitosan was lower than in the substrates containing chitosan. Regarding chitosan concentrations, we also observed that, for all temperatures, the heat resistance of B. cereus spores was quite similar at the two concentrations, so the chitosan concentration in the heating medium did not affect the survival of these spores.
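For orientation, the D and z parameters discussed below follow the standard log-linear (Bigelow) model of thermal spore inactivation; the compact statement of the model below is our summary of that standard convention, not an equation reproduced from the paper:

```latex
% Primary model: log-linear survival at a constant temperature T
\log_{10} N(t) = \log_{10} N_0 - \frac{t}{D_T}
% Secondary (Bigelow) model: temperature dependence of D_T
\log_{10} D_T = \log_{10} D_{T_{\mathrm{ref}}} - \frac{T - T_{\mathrm{ref}}}{z}
```

Here D_T is the time at temperature T needed for a tenfold (1-log) reduction in viable spores, and z is the temperature increase needed to reduce D_T itself tenfold; a one-step fit estimates log10 N_0, D_Tref and z jointly from all the survival curves.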
The parameters defining the spores' inactivation were derived by a non-linear one-step fitting of the survival data. Nonlinear models often capture the relationships in a data set better than linear models. Perrin described the disadvantages of the usual linear least squares analysis of first- and second-order kinetic data, and nonlinear least squares fitting was recommended as an alternative. In our study, the studentized residuals were in all cases two or less in absolute value (in no case three), which means that in no case did the residuals exceed two standard deviations. Table shows the estimates of the parameter D_T, which defines the heat resistance of B. cereus spores, for each of the substrates and temperatures studied; Table shows the z value for each of the studied substrates. The value of the parameter D_T estimated by the model is clearly higher in the substrate without chitosan than in the substrates containing chitosan, which is related to the lower spore inactivation previously shown by the survival curves. Regarding the value of the parameter D_T estimated by the model when chitosan was present, little difference was found between the two chitosan concentrations. It seems that the effect of chitosan on the inactivation of B. cereus spores does not depend on the concentration of chitosan between 150 and 250 μg/mL during heating. The value of the z parameter estimated by the model varied between 8.9 and 10.7°C; these are quite common values for this microorganism. Bacillus cereus is a ubiquitous microorganism that can cause serious food safety issues, especially in rice products and their derivatives. Proper characterization of its thermal resistance is essential for the design and development of suitable cooking processes. Likewise, the prospect of using combined processes, in this case with natural antimicrobials, can pave the way to improving the safety of these widely consumed products around the world. Currently there is information on the effect of temperature and on the effect of chitosan separately on B. cereus spores. Several works have reported the variation in the D_T and z values of the microorganism in different heating substrates. Pendurka and Kulkarni studied the heat resistance of the spores of five Bacillus species, including B. cereus, in distilled water and pasteurized skim milk. The authors found that in all cases the spores survived the cooking conditions applied to the rice. At 100°C, a D_T value of 19 min was shown by B. cereus in distilled water, while B. cereus spores were completely inactivated in skim milk at the same temperature (100°C); the latter result indicates low levels of heat resistance. In the present work, at 100°C a D_T value of 1.82 min was recorded when the spores were heated in a rice solution. However, the great variability that exists between B. cereus spores in relation to heat resistance is well known. Fernandez et al. studied the heat resistance of two Bacillus cereus strains isolated from cooked chilled foods containing vegetables and found D_T values between 0.22 and 2.5 min at 100°C. More recently, Salwa Abu El-Nour Ali Hammad found D_85 values of B. cereus spores ranging from 24.9 to 35.2 min, D_90 values ranging from 7.6 to 11.6 min, and D_95 values ranging from 2.4 to 4.7 min, depending on the type of substrate. The values obtained in the present work are slightly higher, probably due to strain and substrate differences. Regarding the z value, Fernandez et al.
reported values of 8.1 and 8.4°C, depending on the strain considered, obtained on a reference substrate. Salwa Abu El-Nour Ali Hammad reported z values of B. cereus spores suspended in different media ranging from 9.81 to 11.24°C. In the present work, the z value ranged from 8.9°C to 10.7°C depending on the substrate used. The z values obtained in the present work are in accordance with previously reported results; therefore, these results can be considered a suitable reference for developing appropriate cooking conditions for rice. Today, chitosan is extensively studied given the multiple applications that it can have in both the food and the pharmaceutical industries. One of these applications is its use as a natural antimicrobial in food preservation. Ke et al. indicated that the broad-spectrum antimicrobial activity of chitosan offers great commercial potential for this product. Some studies have been published in which the effectiveness of chitosan against B. cereus has been demonstrated. Fernandes et al. found a relationship between the molecular weight of chitosan and its antimicrobial activity for both vegetative cells and spores of B. cereus. Mellegård et al. studied the inhibition of B. cereus spore outgrowth and multiplication by chitosan; they found that chitosan exerts antimicrobial activity that appears to be concentration dependent and related to the average molecular weight and fraction of acetylation of the chitosan used as the antimicrobial. Currently, the industry is looking into combined treatments in which the different control measures are administered at lower intensities than when applied individually. In this way, pathogenic microorganisms are inactivated in a manner that better preserves both the nutritional and sensory quality of food. In some cases, this combination is interesting because it can provide greater inactivation by heat than when heat is administered alone. There are no studies in the literature reporting the combination of heat treatment and chitosan to achieve control and inactivation of B. cereus in rice-based substrates. However, the effect of combining heat treatment or other control measures with natural antimicrobials has been reported in the scientific literature. Ueckert et al. reported that exposure to heat and nisin caused synergistic reductions in Lactobacillus plantarum viability. Huertas et al. studied the combined effect of natural antimicrobials (nisin, citral and limonene) and thermal treatments on Alicyclobacillus acidoterrestris spores. The authors concluded that the antimicrobial agents tested did not affect the heat resistance of the spores; however, the antimicrobials were effective in controlling the growth of the microorganisms after the heat treatment. Kamdem et al. studied the effect of mild heat treatments on the antimicrobial activity of some essential oils. The authors indicated that the combination of temperature and those essential oils reduced the treatment time needed to inactivate 7 log cfu/mL of Salmonella Enteritidis. In the present work, a joint effect of heat and chitosan on the inactivation of B. cereus spores was found: D_T values were in general lower in samples containing chitosan than in the sample without chitosan. The decrease in the D_T values may be due to a reduction in the spores' resistance caused by the joint effect of the thermal treatment and the chitosan, as has been observed in vegetative cells of E. coli, L. monocytogenes and S. Typhimurium.
Probably, the additive effect during heat treatment depends on the type of microorganism or the type of antimicrobial. It is also possible that the reduction in D_T is due to chitosan blocking the outgrowth of Bacillus spores that have been damaged by wet heat. In any case, the presence of chitosan increases the inactivation, or prevents the development, of B. cereus spores, significantly improving food safety. Besides, in the present work we found that the effect on D_T values was not dependent on the chitosan concentration. It is possible that at this level the chitosan concentration does not play an important role, but rather that it is the molecular structure of the chitosan that facilitates the action of heat on the bacterial spores, thus reducing the number of spores capable of germinating and growing. This study investigated the nature of the inactivation of Bacillus cereus spores by combining insect chitosan with heat treatment. The results indicated that the presence of chitosan, regardless of its concentration, produced reductions in the D_T value of B. cereus spores in a rice substrate. These findings pave the way to better control of B. cereus during and after the cooking processes of rice and its derivatives, making the combination of chitosan with heat treatment feasible in order to improve the safety of these types of products. These results also indicate that insect chitosan could be used, like chitosan from other sources, in combination with heat treatment as an additional control measure. S1 Data (XLSX).
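To make the one-step nonlinear regression used in this study concrete, below is a minimal sketch of jointly fitting D at a reference temperature and z from pooled survival data. The authors used Statgraphics Centurion XVI, so this Python reimplementation, and the survival data inside it, are purely illustrative assumptions, not the study's actual code or data.

```python
# Minimal sketch: one-step nonlinear fit of a log-linear primary model
# with a Bigelow secondary model, pooling survival data across temperatures.
# All numeric observations below are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

T_REF = 100.0  # reference temperature (degrees C), an arbitrary choice

def log_survivors(X, log10_n0, log10_dref, z):
    """log10 N = log10 N0 - t / D_T, with log10 D_T = log10 D_ref - (T - T_ref)/z."""
    t, temp = X
    log10_dt = log10_dref - (temp - T_REF) / z
    return log10_n0 - t / (10.0 ** log10_dt)

# Hypothetical observations: heating time (min), temperature (C), log10 CFU/mL
t = np.array([0.0, 2.0, 4.0, 6.0, 0.0, 1.0, 2.0, 3.0, 0.0, 0.5, 1.0, 1.5])
temp = np.array([95.0] * 4 + [100.0] * 4 + [105.0] * 4)
log_n = np.array([8.0, 7.5, 6.9, 6.4, 8.0, 7.4, 6.9, 6.3, 8.0, 7.0, 5.9, 4.9])

popt, _ = curve_fit(log_survivors, (t, temp), log_n, p0=[8.0, 0.3, 9.0])
log10_n0, log10_dref, z = popt
print(f"D_{T_REF:.0f} = {10 ** log10_dref:.2f} min, z = {z:.1f} C")
```

Fitting all temperatures in a single step, rather than estimating each D_T separately and regressing log10 D_T on T afterwards, propagates the uncertainty consistently, which is the advantage the text attributes to the one-step approach.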
Discontinuation and Switchback After Non-Medical Switching from Originator Tumor Necrosis Factor Alpha (TNF) Inhibitors to Biosimilars: A Meta-Analysis of Real-World Studies from 2012 to 2018
0a439942-457f-4492-b4a6-35c3c34b8a7f
9309144
Internal Medicine[mh]
Since the advent of the first biologic (human insulin) in 1982, biologic therapies, including both small and large molecules, have transformed the treatment of numerous chronic conditions, improving clinical outcomes and patients' well-being. In particular, tumor necrosis factor alpha (TNF) inhibitors, a class of large complex molecules, have advanced the management of a number of diseases including rheumatologic conditions, inflammatory bowel diseases, and dermatologic conditions. As the patents of a number of originator biologics have expired or are about to expire, highly similar copies of those originator biologics (i.e., biosimilars) have been developed and some of them have been granted market authorization. Unlike generic versions of synthetic small-molecule drugs, biosimilars are not exact copies of the originators because of the intrinsic manufacturing variability of biologics, which inevitably results in minor but acceptable structural differences between originators and the biosimilar products [ – ]. Notwithstanding the slight variability that is inherent to all biologic medications, regulatory agencies across the world require biosimilars to demonstrate, through clinical trials, no clinically meaningful differences in purity, potency, safety, and efficacy from their originator biologics. Researchers have been evaluating whether biosimilars and originator biologics are comparable in terms of safety and effectiveness [ , , – ]. Uncertainty remains with regard to the non-medical switch (NMS) from an originator biologic to its biosimilar(s), given that such a switch is typically motivated by cost-related reasons, such as changes in formulary, and not by medical reasons, such as side effects, lack/loss of response, or poor persistence/adherence [ , – ]. Such a switch can also be mandated on a nationwide scale, as in the case of switching to an infliximab biosimilar in Denmark in 2015. Caution may be particularly warranted in the case of TNF inhibitors, as they are used in patients with chronic conditions for whom continuity of care is highly recommended in order to maintain optimal disease management after achieving symptom control [ – ]. Existing research has shown mixed findings on the clinical impact of NMS to biosimilars. Indeed, while some studies suggested that biosimilar NMS did not affect therapeutic efficacy or safety [ – ], other studies found that an NMS from any one drug to another is associated with an increase in treatment discontinuation (and potentially a switch back to the original therapy), as well as worsening clinical outcomes [ , , ]. As the number of TNF inhibitor biosimilars on the market continues to increase, it is important to systematically evaluate the impact of NMS on the clinical management of conditions in the real world. To further inform this evidence gap, we conducted a meta-analysis to assess post-NMS biosimilar treatment patterns (focused on discontinuation and switchback rates) in real-world studies. The Methods cover the following: literature search strategy; study outcomes; meta-analysis; sensitivity analyses; and compliance with ethics guidelines. This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors. Real-world studies that reported discontinuation and switchback rates after an NMS from originator TNF inhibitors (i.e., infliximab and etanercept) to their biosimilars were identified through a systematic literature review.
The initial search was conducted in February 2018, with an updated search performed in August 2018. Given the search dates, no real-world adalimumab biosimilar NMS studies were expected to be available because adalimumab biosimilars were only recently made commercially available. The literature was searched using the following databases: BIOSIS Previews®, Derwent Drug File, Embase®, International Pharmaceutical Abstracts, MEDLINE®, SciSearch®, and selected conference abstracts (e.g., European League Against Rheumatism [EULAR] Annual Congress, European Crohn's and Colitis Organization [ECCO] Annual Congress, and American College of Rheumatology [ACR] Annual Meeting). The search was limited to English language, humans, and publication dates from January 1, 2012 to August 8, 2018. The search strategy included search terms related to TNF inhibitors, biosimilars, and NMS. The complete list of search terms can be found elsewhere. Of note, Benepali®, the first biosimilar of etanercept, was approved in Europe in 2016, and Amgevita®, the first biosimilar of adalimumab, was approved in Europe in 2017. Studies were considered eligible for inclusion if they were real-world prospective or retrospective studies, included adult patients with chronic conditions (i.e., rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, plaque psoriasis, ulcerative colitis, and Crohn's disease) who experienced an NMS from an originator TNF inhibitor to its biosimilar(s), and reported discontinuation rates after the NMS and/or switchback rates. Both single-arm and multiple-arm studies were included, as well as those reporting patient-based or registry/database data. Clinical trial studies, publications that did not report discontinuation outcomes, or publications that evaluated a pediatric population were excluded. For each selected publication, the study characteristics (i.e., geographic region, therapeutic area, originator and biosimilar agents, sample size, and follow-up time) and treatment pattern data (i.e., discontinuation rate, switchback rate, time to discontinuation, treatment after discontinuation, and reason for discontinuation) were summarized. The following four outcomes were included in the meta-analysis: (1) annualized discontinuation rate among NMS patients, defined as the estimated proportion of patients (among all patients who underwent an NMS) who discontinued the biosimilar after an NMS; discontinuation for any reason (e.g., patients who discontinued and switched back to the originator biologic, or biosimilar discontinuation without any further treatment) was included, and follow-up time was used to calculate the annualized discontinuation rate; (2) annualized switchback rate among NMS patients, defined as the estimated proportion of patients (among all patients who underwent an NMS) who discontinued the biosimilar and then switched back to the originator TNF inhibitor that they used before the NMS, with follow-up time used to calculate the annualized switchback rate; (3) switchback rate among biosimilar NMS discontinuers, defined as the estimated proportion of patients who discontinued the biosimilar (discontinuers) and switched back to the originator TNF inhibitor that they used before the NMS, estimated among all patients who discontinued the biosimilar following the NMS; and (4) incremental discontinuation rate among NMS patients, defined as the difference in annualized discontinuation rate between patients who underwent NMS (switchers) and those who remained on the originator TNF inhibitor (non-switchers), estimated among the subset of studies that reported discontinuation rates for both groups.
Incremental discontinuation rate was used to conserve the within-study comparability of switchers vs non-switchers. Meta-analyses based on the DerSimonian and Laird method using a random-effects model were performed to calculate the pooled estimates for each outcome of interest. Accounting for differences in follow-up time and sample size across studies, the meta-analysis included a random intercept to account for between-study differences (i.e., design and population differences). The summary discontinuation and switchback rates, along with 95% confidence intervals [CI], were calculated for all the selected studies and by therapeutic area. Cochran's Q was calculated as a check of homogeneity to confirm that a random-effects model was appropriate. The Higgins I² index was calculated for each meta-analysis to quantitatively measure the degree of variation between the results reported in the selected studies (a worked sketch of this pooling step is given at the end of this article). Multiple sensitivity analyses were conducted to evaluate the robustness of the study findings. In particular, sensitivity analyses were conducted by running meta-analyses on subsets of studies with similar characteristics (i.e., sample size, follow-up time, intervention) to determine how study-specific characteristics affected the pooled estimates of discontinuation and switchback rates. Sensitivity analyses were not performed for the incremental discontinuation rate outcome given the small number of identified studies that reported data for both switchers and non-switchers. All the statistical analyses in this study were conducted using the R software version 3.2.1. Results are reported for: selected studies; annualized discontinuation rate among NMS patients; annualized switchback rate among NMS patients; switchback rate among biosimilar NMS discontinuers; and incremental annualized discontinuation rate among NMS patients. Nine studies that had discontinuation data available for both switchers and non-switchers were included in the meta-analysis of the incremental annualized discontinuation rate. These studies had heterogeneous designs and substantially varied sample sizes. In particular, on average, the sample size of non-switchers was larger (768 patients, ranging from 19 to 2870) than that of switchers (344 patients, ranging from 89 to 1621). Four of these studies used, for the non-switchers, historical controls from before biosimilars became available. Another four studies provided the discontinuation rates among patients who elected to remain on originators when approached about the possibility of switching. The remaining study did not have a true discontinuation rate among non-switchers but used a proxy estimate, with NMS patients as their own controls, evaluating the discontinuation rate during the 6 months prior to the NMS. Follow-up times were similar between switchers (mean 11 months, ranging from 6 to 18) and non-switchers (mean 12 months, ranging from 6 to 18). When all the studies were pooled together, the incremental annualized discontinuation rate was 18% (95% CI 4%, 31%) across all therapeutic areas (Fig. ), indicating a significantly higher discontinuation rate among switchers than non-switchers. Specifically, the incremental annualized discontinuation rate ranged from −12% to 54%. The study that had a higher discontinuation rate among non-switchers used a proxy estimate. In that study, all patients who had been stable and persistent on treatment for at least 6 months were offered an NMS; those who did not accept the NMS discontinued, while those patients who accepted switched to a biosimilar.
The discontinuation rate for non-switchers was estimated among all patients who were offered an NMS. In terms of therapeutic areas, because of the small number of studies that reported discontinuation rates for both switchers and non-switchers, the discontinuation rate was not estimated separately for rheumatology studies ( N = 7) and gastroenterology studies ( N = 2). A total of 66 publications based on real-world studies were identified, including 29 full-text publications, 35 abstracts, and 2 letters to the editor. Of them, 51 assessed NMSs from originator infliximab to its biosimilar (e.g., CT-P13 or SB2), 10 studies from originator etanercept to its biosimilar (e.g., SB4 or GP2015), and 1 from both originator infliximab and etanercept to their biosimilars (Table ). Studies included patients from the following countries: Ireland, France, UK, Germany, Spain, Italy, the Netherlands, Norway, Scotland, Denmark, Czech Republic, Finland, Poland, Sweden, Portugal, and South Korea. None of the selected studies were conducted in the USA or Canada, likely because of the earlier adoption of biosimilars in Europe and Asia. In terms of therapeutic areas, 31 of the 66 studies focused on gastroenterology, 32 on rheumatology, and 3 on both gastroenterology and rheumatology. No studies reported data specifically for dermatology patients. The mean number of patients who underwent an NMS in these studies was 136 (range across studies 9–1641 patients). The mean follow-up time after an NMS was 10 months (range 3–24 months). A total of 62 studies reported discontinuation rate and follow-up time. The discontinuation rate varied substantially, from 1.5% to 87.0%, across studies with different lengths of follow-up (range 3–24 months) (Table ). The average time to discontinuation was 6 months, ranging from 2 to 11 months across the 10 studies that reported this information. Reasons for biosimilar discontinuation were reported in 56 of the 62 studies. The most common reasons for biosimilar discontinuation were loss of efficacy and side effects/adverse events, reported in 37% and 28% of discontinuers, respectively. Other reasons included patient choice (7%), disease improvement (4%), loss to follow-up (3%), pregnancy (1%), death (less than 1%), and unspecified reasons (19%). After discontinuing a biosimilar, the majority of patients switched back to the originator TNF inhibitor. A smaller percentage of patients switched to a biologic different from the originator or to another biosimilar, underwent surgery, received other unspecified treatment options, or discontinued with no further treatment. When all the studies were pooled together and adjusted for follow-up time, the annualized discontinuation rate was 21% (95% CI 18%, 25%) among NMS patients across all therapeutic areas (Fig. ). The discontinuation rates by therapeutic area were consistent with the overall discontinuation rate: 26% (20%, 32%) for rheumatology and 17% (14%, 20%) for gastroenterology. For all the meta-analyses of discontinuation rates, the I² was greater than 80% and the p value associated with Cochran's Q was less than 0.001, suggesting significant heterogeneity among the included studies. The results of the sensitivity analyses were consistent with the aforementioned results (Supplementary Material Table 1), with discontinuation rates for all therapeutic areas ranging between 19% and 23%. Slightly higher discontinuation rates were observed when only studies with larger sample sizes were included. Similar discontinuation rates were observed in studies with a follow-up time of at least 6 months.
Consistent results were also observed when considering individual therapeutic areas. A total of 29 studies reported switchback rate and follow-up time among all patients who underwent NMS. The reported switchback rate ranged from 0% to 63% across studies with various lengths of follow-up (range 4–24 months). When all the studies were pooled together and adjusted for follow-up time, the annualized switchback rate was 14% (95% CI 10%, 17%) among all patients who underwent an NMS across all the therapeutic areas (Fig. ). When stratified by therapeutic area, the switchback rate was 17% (12%, 21%) for rheumatology and 8% (5%, 12%) for gastroenterology. For all the meta-analyses of the annualized switchback rates, significant heterogeneity was observed among the included studies; switching back to the originator biologic used before the NMS was the most common option after biosimilar discontinuation. The results of the sensitivity analyses were consistent (Supplementary Material Table 1), with the annualized switchback rates for all therapeutic areas ranging from 11% to 20%. Slightly higher switchback rates were observed when only studies with larger sample sizes were included. Slightly lower switchback rates were observed in studies including only patients treated with etanercept as the originator biologic. A total of 31 studies reported the switchback rate among biosimilar NMS discontinuers. The reported rate varied greatly across studies, from 0% to 100%. Notably, seven studies reported switchback rates of 100%, indicating that all patients who discontinued switched back to their originator TNF inhibitor [ – ]. When all the studies were pooled together, the switchback rate among discontinuers was 62% (95% CI 44%, 80%) (Fig. ). Consistent results were reported by therapeutic area, with a switchback rate of 71% (60%, 81%) for rheumatology and 47% (23%, 71%) for gastroenterology. For all the meta-analyses of switchback rates, the I² was greater than 90% and the p value associated with Cochran's Q was less than 0.001, suggesting significant heterogeneity among studies. The results of the sensitivity analyses were consistent with those of the meta-analyses (Supplementary Material Table 1), with switchback rates among discontinuers for all therapeutic areas ranging from 61% to 69%. The biosimilarity of biosimilars to their originator biologics has been confirmed in randomized controlled trials (RCTs) for biosimilars of adalimumab, infliximab [ – ], and etanercept, in which no significant decrease in efficacy or increase in adverse events has been reported. However, approval of biosimilars on the basis of biosimilarity does not guarantee interchangeability with the originator biologic. Indeed, the US Food and Drug Administration (FDA) requires additional evaluation for the “interchangeable” designation, including evidence of identical clinical results in all treated patients and maintenance of safety and efficacy with multiple switching between originator biologic and biosimilar. Of note, no biosimilar has currently achieved the interchangeable designation. As such, concerns have been raised with regard to NMS from a biologic to its biosimilar(s), particularly among stable patients with chronic conditions. Some physicians believe that small changes in these patients' overall treatment regimens, which are often established after multiple rounds of trial and error, may have unwanted negative effects, even more so when considering the simultaneous management of comorbidities. Additionally, large-molecule biologics such as TNF inhibitors are particularly difficult to replicate, and the potential risk of an immunogenic reaction after an NMS could be troublesome for some physicians.
While studies have thus far not shown increased immunogenicity after switching to biosimilars, there is concern that current clinical trials may not be designed and/or sensitive enough to detect these changes in anti-drug antibodies. Further, an NMS may result in treatment instability and introduce unnecessary patient stress and anxiety, negatively affecting a patient's well-being. To better understand the treatment patterns associated with a biosimilar NMS, we conducted a meta-analysis summarizing real-world evidence related to biosimilar discontinuation and switchback following NMS from an originator TNF inhibitor to its biosimilar(s). Data from 66 studies including over 8700 patients were pooled together to estimate the prevalence rates of post-NMS biosimilar discontinuation and switchback to the originator TNF inhibitors. Consistent with the results of two prior systematic literature reviews, we found a large variation in the discontinuation rates reported in real-world studies. Specifically, the unadjusted discontinuation rate ranged from 1.5% to 87.0%, and the annualized rate ranged from 3.3% to 81.8%. This large variation is likely due to the heterogeneity in study design, region, patient population, and sample size of the studies included in the meta-analysis. The included real-world studies used divergent data sources and methodologies to evaluate the discontinuation outcomes. Forty-seven of the 62 publications prospectively collected patient data through selected centers, while the remaining publications retrospectively evaluated patient outcomes through registries/databases or medical records. Of note, the highest discontinuation rates (annualized rates of 81.8% and 80.5%) were found in two Turkish national database studies. Notably, in both studies, discontinuation rates among switchers were much higher than those among non-switchers (87% vs 34% and 82% vs 38%). However, given that both studies defined discontinuation on the basis of evidence of switching to another biologic or an absence of prescription claims for more than 120 days, the observed discontinuation rates could be subject to the intrinsic limitations of claims data and may not fully reflect the prevalence of post-NMS discontinuation in the real world. By contrast, lower annualized discontinuation rates (3.3–7.2%) were observed in studies with a follow-up period of less than 6 months and a sample size of less than 100 patients [ , , , , ]. In addition, reliance on patients' self-reported discontinuation of the biosimilar or switchback to the original TNF inhibitor, and a lack of follow-up with patients on a medication's efficacy, could be two limitations in some studies. To address the heterogeneity across the identified studies, we conducted a meta-analysis to synthesize the evidence while accounting for across-study differences. Meta-analysis is currently the most common approach for quantitatively combining the results reported in different studies pertaining to the same outcome. This method allows the generated pooled estimate to put more weight on studies with larger sample sizes, thus reducing the variation in divergent observations caused by small sample sizes. When the data from all the studies were pooled together, the annualized discontinuation rate was found to be 21%. A prior systematic literature review of post-NMS clinical outcomes reported discontinuation rates ranging from 5% to 33% across 12 different RCTs, including the landmark NOR-SWITCH study.
It is worth noting that the discontinuation rates estimated in the present meta-analysis were within the range of the rates seen in RCTs, even though our estimates were based on studies in real-world settings only. The pooled estimates from the current meta-analysis are likely to be more reflective of the discontinuation rates observed in clinical practice than those observed in clinical trials, which include only a select group of patients and adopt a more controlled design. For instance, the landmark NOR-SWITCH study excluded patients with certain comorbidities or those who adjusted co-medication prior to randomization. Further, all patients were required to maintain the same dose and infusion interval during the entire study follow-up and had frequent visits every 4 to 12 weeks. In real-world practice, greater heterogeneity in patient population and practice patterns is expected. Indeed, a major strength of the present study is the inclusion of real-world data from 66 studies comprising over 8700 patients, providing a comprehensive overview of discontinuation and switchback rates among patients who undergo an NMS from a TNF inhibitor to a biosimilar in everyday clinical practice. In line with the heterogeneity observed in the current study, it is important to recognize that considerable variability also exists in discontinuation research on originator biologics. In a systematic literature review and meta-analysis of 98 studies on the use of originator TNF inhibitors in rheumatoid arthritis in the early years of their use, the reported discontinuation rates were 21%, 27%, 37%, 44%, and 52% for 6-month, 1-year, 2-year, 3-year, and 4-year periods, respectively. In the present study, 66 publications contributed to the meta-analysis and the follow-up period ranged from 3 months to 2 years, with the majority being less than 1 year. The annualized discontinuation rate of 21% among the biosimilar NMS patients was comparable to the findings from the meta-analysis of the originator discontinuers. Such a finding may suggest that, although patients switched from a reference drug to its biosimilar, one should expect a similar discontinuation rate, and many of these patients may switch back to the reference drug, as was found in the present study. The reasons for discontinuation among patients on originators were similar to those in the present meta-analysis, including loss of efficacy and adverse effects [ – ]. However, external factors are also likely implicated, both in originator biologic and in biosimilar switching patterns. Indeed, in a survey-based study of patients treated with biologics for various conditions, 20% reported receiving notice from their insurance company to switch to another originator biologic as a result of changes in insurance coverage. Taken together, these findings highlight the issue of biologic discontinuation and its multifactorial causes in the context of both biosimilars and originator biologics. In addition to discontinuation rates, large variations were also observed in the reported switchback rates, which, among discontinuers, ranged from 0% to 100% across studies. In particular, in seven studies [ – ], all the patients who discontinued the biosimilar after an NMS switched back to the originator TNF inhibitor, whereas in two studies none of these patients switched back to the originator TNF inhibitor. Notably, all the extreme values (0% switchback and 100% switchback) were reported in studies with a relatively small sample size (20–134 patients).
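As a rough arithmetic cross-check (ours, not a calculation reported by the authors), the pooled estimates cited above are mutually consistent: applying the 62% switchback share among discontinuers to the 21% annualized discontinuation rate recovers, approximately, the 14% pooled annualized switchback rate among all NMS patients:

```latex
0.21 \times 0.62 \approx 0.13 \approx 14\%
```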
Importantly, through this literature review and meta-analysis, we found that the most common therapeutic choice (62%) after biosimilar discontinuation following NMS was to switch back to the originator TNF inhibitor. The pooled annualized switchback rate was 14% among all NMS patients and 62% among those NMS patients who discontinued the biosimilar. Switching back to the originator TNF inhibitor appears to be a reasonable choice given that the most common reason for biosimilar discontinuation was loss of response or treatment failure (37%), followed by adverse events (28%). In line with these results, it has been suggested that switching from one originator to another or to its biosimilar(s) may increase the risk of developing anti-drug antibodies and subsequently failing to respond to treatment. While studies have shown that anti-drug antibody reactivity to an originator biologic yields similar cross-reactivity to its biosimilar, other evidence may suggest differences in clinical response on switchback after the development of anti-drug antibodies. Indeed, in an observational study including 23 patients who underwent an NMS but discontinued the biosimilar because of worsening disease symptoms, clinical improvement was observed in 71% of the patients after switching back to the originator biologic. In addition, of the nine studies that had discontinuation data for both switchers and non-switchers, all but one (which used a proxy estimate, with NMS patients serving as their own control) reported a higher discontinuation rate for the switcher group, with the incremental discontinuation rate ranging from 2% to 54%. However, given the current lack of robust immunogenicity data in the literature, the association between switching, anti-drug antibodies, and treatment failure remains unclear.

The findings of this meta-analysis have important implications for managing patients with chronic conditions. Switching from a TNF inhibitor to its biosimilar for non-medical reasons was found to be associated with a high prevalence of biosimilar discontinuation due to treatment failure or adverse events and a high switchback rate, although the yearly discontinuation rate was comparable to that of the originator TNF inhibitor. To address the phenomenon of NMS, a patient-centered approach, such as discussing the nocebo effect and providing patient education, could be important. In the context of NMS, the nocebo effect describes a patient expecting a biosimilar to be less effective than the originator TNF inhibitor, or to cause side effects, and then actually experiencing reduced efficacy or side effects. A known factor affecting patients' perceptions of a medication's efficacy is its cost, and a biosimilar usually costs less than the originator TNF inhibitor. Healthcare providers should educate patients about the efficacy of biosimilars in layman's terms, avoiding technical jargon and ambiguous statements.

Limitations

Some limitations should be considered when interpreting the study results. First, the publications identified were highly heterogeneous in terms of design, geographic area, patient population, and sample size (many of which were small). Many of the included publications were abstracts, which often do not include detailed information regarding the methodologies used or funding sources.
However, the meta-analysis approach (a random-effects model) was used to minimize biases associated with the heterogeneity observed across studies by distributing weight according to study characteristics such as sample size and follow-up time. Sensitivity analyses were also conducted using subsets of studies with similar characteristics and showed results consistent with the main analysis. In addition, discontinuation data were reported over substantially different follow-up intervals, limiting the comparability of data across studies. To address this limitation, the current analysis annualized all reported discontinuation rates by assuming a constant transition over time. Furthermore, since all included publications were conducted in Europe or Asia, the generalizability of the results to other countries, such as the USA, may be limited. Moreover, the limited information available in the identified studies constrained the types of analyses that could be performed and the generalizability of the study findings. For example, few studies assessed multiple switches (e.g., switching back and forth between biosimilars) or described potential population differences between those who underwent NMS and those who remained on originators. Insufficient data are available to evaluate whether there is any potential linkage between the discontinuation rate and therapeutic area. Lastly, as noted in the methods, because the search was performed in 2018, the only biosimilars to TNF inhibitors available were those of infliximab and etanercept. With more biosimilars on the market, an update to this research and additional analyses are warranted to further investigate these topics.

This study found biosimilar discontinuation to be prevalent in the real world among patients who underwent an NMS from an originator TNF inhibitor to its biosimilar(s). Furthermore, switching back to the originator TNF inhibitor was a common therapeutic choice following biosimilar discontinuation. More real-world studies are needed to better understand the outcomes associated with biosimilar NMS and inform key stakeholders such as patients, healthcare providers, payers, and policymakers.

Below is the link to the electronic supplementary material. Supplementary file1 (PDF 147 KB)
External Quality Assessment beyond the analytical phase: an Australian perspective
Pathology is a crucial clinical tool, estimated to contribute to 60-70% of all critical decisions involving patient treatment. Despite this potential value, the Carter review estimated that 25% of pathology requests are unnecessary or inappropriate. Furthermore, CareTrack Australia examined the appropriateness of care provided in Australia for 22 common conditions and demonstrated that only 57% of patients received what was regarded as appropriate care. The cost of diagnostic test services in Australia rose to $5.25 billion in 2013. For pathology services, this represented an increase of 81% in the decade to 2013. This rise has led to major concerns about the substantial costs and risks associated with unnecessary tests and incorrect result interpretation. For pathology services to be of value, the correct ordering of tests and interpretation of results is crucial. This is the responsibility of the treating clinician and, as such, can be considered the pre- and post-laboratory, or diagnostic, phase.

External Quality Assessment (EQA) is the verification, on a recurring basis, that laboratory results conform to expectations for the quality required for patient care. However, Australian laboratories tend to be focussed on very narrow concepts of EQA, even though the significance of pre- and post-laboratory errors is now widely recognised. This can be partly attributed to the fact that laboratories are largely sample, as opposed to patient, oriented. If the broader concept of Quality Management Systems (QMS) is primarily aimed at meeting customer requirements and enhancing customer satisfaction, then it is clear that the quality of the product of a laboratory is taken as a given by referrers, and that enhancing clinician, patient and payer satisfaction extends far beyond the traditional boundaries of the laboratory. One way to improve the quality of the service laboratories provide is to extend laboratory-based EQA programs to the requesting and reporting phases, which are outside the current scope of pre-analytical, analytical and post-analytical EQA programs. Whilst it is recognised that some countries have made steps in this direction, this is far from widespread, and currently lacking in Australia.

The pre-pre-analytical phase, which primarily comprises test ordering, and the post-post-analytical phase, which primarily comprises test result interpretation, can be regarded as the diagnostic phases (as opposed to the analytical phases); we sub-divide them into a pre-laboratory and a post-laboratory phase. This terminology has been chosen to remove the laboratory as the focus of the process and shift it back to the referring clinician. In addition to introducing the diagnostic phase terminology, this opinion paper aims to set out the reasons why pathology laboratories and diagnostic medicine need a way of monitoring these phases more effectively, with particular reference to the Australian situation. It is the belief of the authors that an EQA program could be developed to identify, learn from and reduce these errors and near misses in a timely fashion, and this will be explored in this article.

A study conducted by the American Academy of Family Physicians reported that participants submitted 590 event reports with 966 pre- and post-laboratory errors.
Pre-laboratory errors occurred in ordering tests (12.9%) and implementing tests (17.9%), while post-laboratory errors occurred in reporting results to clinicians (24.6%), clinicians responding to results (6.6%), notifying patients of results (6.8%), general administration (17.6%), communication (5.7%) and other categories (7.8%). Charting or filing errors alone accounted for 14.5% of errors. While patients were unharmed in 54% of events, 18% resulted in some harm, and harm status was unknown for 28%. Furthermore, these errors led to a range of other adverse consequences, including time and financial costs (22%), delays in care (24%), pain and suffering (11%) and adverse clinical consequences (2%). The impact of these pre- and post-laboratory errors therefore demonstrates a pressing need to identify their sources, so that interventions that reduce the error rate can be developed.

While there has been a vast amount of research to identify pre-laboratory error quality indicators, there are also significant pre-laboratory errors that we believe have not been included in these indicators. One of these areas of omission is the proportion of patients who are not adherent to a pathology request. It has been estimated that in Australia, approximately 20-30% of patients who are given a pathology request form in the community do not have this request completed. There are multiple reasons for this non-compliance, including language barriers, low socioeconomic status and poor health literacy, as well as forgetting important appointments, losing pathology forms and failing to attend or reschedule the appointment. This has a potentially far greater impact on patient treatment than the analytical phase. Primary care in Australia is the responsibility of General Practitioners (GPs). The impact of the aforementioned non-compliance with test requests has required GPs to adopt complex workflows to remind patients of the need to have the appropriate test before the next appointment. Anecdotally, non-adherence rates are of a similar range in hospital outpatient clinics. The cost to the community of these wasted appointments is significant. This is one reason why point-of-care testing, which enables laboratory tests to be performed at the patient's location as opposed to a laboratory, may have significant benefits for both patients and health professionals.

Hickner et al. reported that GPs described uncertainty in ordering laboratory tests in approximately 15% of diagnostic encounters. The task of selecting appropriate diagnostic testing is challenging for clinicians, in part because of the sheer volume of choices. For example, there are currently over 850 different pathology tests for which the government will reimburse patients in Australia. Therefore, methods to improve this workflow could lead to a significant improvement in the quality of pathology services.

Sikaris has identified the importance of the post-laboratory phase and how it is subject to error, such as the misapplication of appropriate and accurate test results through cognitive failure. Laboratory tests and their misinterpretation are still an important contributor to misdiagnosis because of the emphasis put on laboratory testing for diagnosis and monitoring decisions.
In the post-laboratory phase, the quality of the final report, including its reference intervals, clinical interpretations and notifications based on knowledge from laboratory specialists, should support clinical decision-making. It has been reported that incorrect interpretation of diagnostic tests accounts for up to 37% of malpractice claims in primary care and emergency departments. Audit and dissemination of best practice play an important role in managing the quality of results interpretation. While audits are not the preferred option for the Australian situation, primarily due to the great distances required to make on-site visits, a number of international studies have examined the quality of results interpretation in general practice. Skeie et al. found that 22% of Norwegian GPs misclassified changes in haemoglobin A1c (HbA1c) for patients with diabetes mellitus (DM) and that the vast majority of GPs assumed that analytical quality was better than it really was. The findings of this study are supported by those of Thue and Sandberg, who analysed clinician expectations of analytical performance in relation to current analytical performance specifications, finding that clinicians are generally uninformed about the capability of analytical performance. Three subsequent Norwegian studies performed EQA of GPs' interpretation of pathology results and showed general agreement in critical differences (CDs) for blood glucose and HbA1c, with variation in the perceived risk to patients of a severe bleed. There was also a large variation in CDs for uric acid and in International Normalized Ratio (INR) interpretation for warfarin monitoring. Kristoffersen et al. found that GPs across 13 countries overestimated the risk of ischemic stroke and bleeding in people treated with vitamin K antagonists (VKA) by 2-3 times. The results of these studies suggest that guidelines for these conditions may be either unknown or impractical. This is further supported by Hellemons et al., who found that guidelines around the use and interpretation of albuminuria in patients with DM were poorly followed in general practice.

The Institute of Medicine (IOM) report entitled "To Err is Human" identified the types of error that arise in the diagnostic process, namely: failure to employ indicated tests; use of outmoded tests or therapy; failure to act on the results of monitoring or testing; treatment error in the performance of an operation, procedure, or test; error in administering the treatment; error in the dose or method of using a drug; avoidable delay in treatment or in responding to an abnormal test; inappropriate (not indicated) care; failure to provide prophylactic treatment; inadequate monitoring or follow-up of treatment; other failures of communication; equipment failure; and other system failures. Many of these errors, such as failure to order a test, wrong test ordered and failure to recognise urgency, are amenable to the type of EQA programs that are used in laboratory medicine. However, developing EQA programs for the pre- and post-laboratory phases will require consultation and support from referring doctors. The format of the programs will also need careful construction to ensure that the data collected are de-identified and provide education as well as useful and meaningful data.

The EQA program we propose could take the form of a series of patient scenarios where a response would be required.
The Interpretative Comment programs of the United Kingdom National External Quality Assessment Service (UKNEQAS) and the Royal College of Pathologists of Australasia Quality Assurance Programs (RCPAQAP) are two such examples; however, the referring doctor would be the participant. The results would be analysed and reported back with a guideline-based response, concordance, and group performance (see Appendix 1 for an example of such a report). Sikaris has described these concepts in terms of a medical laboratory, but they are translatable to a referring doctor model. The cases will need to be carefully chosen, however, so that the suggested interpretations, in terms of what tests to order based on a given clinical scenario or what treatment to suggest, have a strong evidence base in both the current literature and current clinical guidelines.

The purpose of a pathology report is to communicate the results of the test in a clear and unambiguous manner. It is clearly a patient safety issue if a report is misread in a way that may lead to an incorrect understanding of the results. Hickner et al. found that 8.3% of GPs had uncertainty about interpreting results. Challenges included different names for the same test, tests not available except as part of a test panel, and different tests included in panels with the same names. While this has been addressed in some countries, it remains a prominent issue in the Australian setting. Additionally, if a report is difficult to read, valuable time can be lost in trying to correctly identify the key elements of the results. In the modern era, doctors commonly receive pathology reports from a range of different laboratories. Examples include tests requested by a specialist, results from a hospital, results obtained while travelling interstate or overseas, or results from a different laboratory attended by the patient for convenience or other reasons. Clearly, uniformity of reporting formats amongst laboratories can be beneficial in making the review of pathology reports easier and safer, irrespective of the testing laboratory. Clear and consistent reporting is vital to support safe pathology interpretation.

Guidelines aimed at improving the effectiveness of testing have been the subject of standardisation between medical groups for a significant period of time. While there has been a focus on communication using electronic systems, paper reports remain in common use, and rendered reports (e.g., portable document format (PDF) or Pathology Information Transfer protocol (PIT) formats) are still widely used in practice. In 2013, the Royal College of Pathologists of Australasia (RCPA) published an initial Standard. A group known as the Australian Pathology Units and Terminology Standardisation Project (APUTS) wrote the draft, which was edited and finalised after public feedback. The Standards and Guidelines were released in 2014 to assist in the requesting and reporting of pathology. It is now important that conformity to the aforementioned guidelines and standards be monitored. This can be done through a form of EQA for reports. An EQA organisation is part of the request-result cycle and hence is in a position to perform quality assurance on the laboratory result when the laboratory sends a result back to the EQA. The units, format, reference interval and comment are all part of the EQA result and hence can be treated as part of the EQA program.
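Part of such a report-focused EQA could plausibly be automated: each result returned to the EQA provider is checked against a list of required elements before expert review. The short sketch below is hypothetical; the field names and rules are illustrative placeholders rather than the actual APUTS requirements.

```python
# Hypothetical report-format check for an EQA program; the required
# fields and rules below are illustrative, not the APUTS Standard.
REQUIRED_FIELDS = {"test_name", "value", "units", "reference_interval", "comment"}

def check_report(report: dict) -> list:
    """Return a list of conformance issues found in one result report."""
    issues = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - report.keys())]
    if not str(report.get("units", "n/a")).strip():
        issues.append("units field present but empty")
    return issues

example = {
    "test_name": "HbA1c",
    "value": "48",
    "units": "mmol/mol",
    "reference_interval": "20-42",
    "comment": "Consistent with suboptimal glycaemic control.",
}
print(check_report(example) or "report conforms")
```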
The Pathology Information, Terminology and Units Standardisation (PITUS) Informatics EQA Project aims to build a system to enable electronic requesting and reporting for an existing RCPAQAP EQA program. The electronic messages involved in the process will be assessed for compliance and conformance with the relevant Standards from the National Pathology Accreditation Advisory Council (NPAAC), Australian Standards and the RCPA. A rendered PDF version of the report will also be generated and assessed against the format, rules and rendering conformance requirements of the APUTS Standard.

Advances in Health Information and Communications Technology (ICT) mean that ICT can be used to support EQA and assist clinicians when both ordering and interpreting pathology test results. This combination has the potential to significantly reduce errors in the diagnostic phase of pathology testing. In the pre-laboratory phase, Computerised Physician Order Entry (CPOE) systems allow clinicians to enter laboratory orders directly into a computer system. This can support EQA systems that aim to reduce the chance of errors associated with illegible handwriting, patient identification and specimen collection and labelling, key sources of error in the pre-analytical phase. CPOE systems can also be coupled with clinical decision support, assisting the clinician to choose the most appropriate tests for their patient. However, there is still scant evidence around the impact of such systems on patient outcomes. In one of the few studies to date, Georgiou et al. demonstrated that the implementation of a CPOE system led to a reduction in errors associated with mislabelled, mismatched and unlabelled specimens. CPOE also led to a reduction in both the number of tests being ordered per episode of patient care and laboratory turnaround time. These findings have a direct impact on patient safety and quality of care, as a subsequent study showed that for every five additional tests, emergency department length of stay increased by 10 minutes, and that each 30-minute increase in turnaround time was associated with a 17-minute increase in emergency department length of stay.

In the post-laboratory phase, ICT can be used to support EQA systems that aim to standardise result reporting, reduce the number of missed test results and improve the quality of pathology result interpretation. The use of ICT to generate standardised pathology result reports, such as through mobile applications, may decrease the risk of incorrect result interpretation due to the clinician being unfamiliar with the report layout. However, the impact of such systems remains to be fully explored. Electronic test acknowledgement systems, which require the clinician to acknowledge that they have viewed a pathology result, can also be used to reduce the number of missed test results. Finally, electronic decision support systems can also be used in the post-laboratory phase to assist clinicians with adhering to guideline- or protocol-based care. However, in Australia, evidence surrounding the impact of such systems on patient outcomes remains weak. Therefore, while further patient-centric studies are required to fully assess the impact of Health ICT on patient safety, ICT combined with EQA has the potential to reduce errors in the diagnostic phase of the pathology process.

For pathology services to be of value, the correct ordering and interpretation of results is crucial. Both of these factors lie within the diagnostic phase.
Errors occur throughout the testing process, most commonly involving test implementation and reporting results to clinicians. While significant physical harm caused by these errors is rare, adverse consequences for patients are common. It is a recommendation of the IOM that accreditation organisations have programs in place to ensure competencies in the diagnostic phase, and to identify and learn from diagnostic errors and near misses with the aim of reducing these errors in a timely fashion. EQA programs are proven ways of achieving these goals and have the experience and processes in place to provide the required platforms. To this end, we believe that widespread implementation of such programs, supported by ICT, is the next stage of identifying and reducing error in the diagnostic phase of the request-result cycle.
Exploring the Potential of Chitosan–Phytochemical Composites in Preventing the Contamination of Antibiotic-Resistant Bacteria on Food Surfaces: A Review
Recently, issues related to food safety, particularly food poisoning, have become increasingly prominent worldwide. Food poisoning is triggered by the ingestion of contaminated food or beverages containing harmful agents, including bacteria, toxins, metals, and parasites. Among these, bacteria are one of the most common causes of food poisoning. Notable examples include common foodborne pathogens such as Salmonella spp., Staphylococcus aureus, and Clostridium perfringens, botulinum-producing strains like Clostridium botulinum, enteric pathogens, and emerging bacterial strains.

Antibiotics play a crucial role in preventing and treating bacterial infections, contributing significantly to public health. However, the recent indiscriminate and uncontrolled use of antibiotics has led to the development of antibiotic-resistant bacteria (ARB), affecting foodborne infections. As a result, various pathogens have become resistant to multiple antibiotics, complicating treatment efforts. For example, Escherichia coli has become resistant to several antibiotics, including Bactrim (trimethoprim/sulfamethoxazole), tetracycline, ampicillin/sulbactam, and clindamycin. Klebsiella pneumoniae exhibits high resistance to penicillins, cephalosporins, carbapenems, and fluoroquinolones, while Salmonella is resistant to fluoroquinolones. Moreover, the so-called "ESKAPE" pathogens, which include Enterococcus faecium, S. aureus, K. pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp., are experiencing growing multidrug resistance and virulence. This growing resistance makes it imperative to find new ways to prevent the growth of pathogenic bacteria, including antibiotic-resistant strains, to ensure food safety and alleviate the burden on the health and economic sectors.

Traditional sanitation methods, such as cooking and the use of chemical disinfectants, are foundational in food preservation, helping to prevent spoilage and extend shelf life. Cooking uses high temperatures to destroy bacteria, viruses, and fungi, which effectively reduces the risk of contamination and enhances food safety. Chemical disinfectants, including chlorine, hydrogen peroxide, and other antimicrobial agents, are commonly applied in food-processing environments to sanitize surfaces, equipment, and fresh products. These methods are widely employed due to their proven effectiveness in preventing microbial contamination. However, they also present several limitations. Cooking can alter the texture, flavor, and nutritional value of food, which may reduce the overall quality of the product. In addition, cooking may not completely destroy bacterial toxins, such as enterotoxins, produced by bacteria growing in food, as enterotoxins remain stable at 100 °C for over an hour. Chemical disinfectants may leave residues that pose potential health risks to consumers or contribute to environmental pollution. Therefore, natural and safe preservatives that enhance food safety without compromising quality are needed.

The preservation of food is an ongoing battle against microbial agents that threaten its safety. The food industry is increasingly exploring alternatives to traditional preservation techniques, driven by consumer demand for safe, natural, and convenient food products. Among these emerging strategies, the use of natural preservatives stands out as the most extensively investigated approach. Studies have shown the efficacy of natural compounds from plants for antimicrobial purposes, including against ARB.
For instance, sterols and hexadecenoic acid present in Tridax procumbens L. have been shown to be effective against bacterial strains such as E. coli, S. aureus, Bacillus subtilis, and Proteus mirabilis. Other examples include methyl gallate and fraxetin, found in the stem bark of Jatropha podagrica, along with cordycepin in Cordyceps militaris, which have demonstrated efficacy against both E. coli and B. subtilis, and terpenoids from Perilla frutescens leaves, which are effective against a broad spectrum of microbes. Additionally, momilactones, diterpenoid lactones abundant in rice by-products such as straw and husks, have been found to significantly inhibit the growth of E. coli, P. ovalis, B. cereus, and B. pumilus, as well as certain harmful fungi. These findings highlight the potential of natural preservatives in enhancing food safety, particularly against antibiotic-resistant pathogens.

On another note, chitosan, a deacetylated derivative of chitin obtained from shrimp and crab by-products, is widely recognized for its inherent antimicrobial properties. Numerous studies have explored grafting natural compounds onto chitosan to enhance its effectiveness against ARB. For example, gallic acid-grafted chitosan inhibits S. aureus, B. subtilis, B. cereus, and E. faecalis; caffeic acid-grafted chitosan inhibits Propionibacterium acnes, Staphylococcus epidermidis, S. aureus, and P. aeruginosa; and quercetin-modified chitosan mitigates E. coli, S. aureus, and P. aeruginosa.

The application of chitosan–phytochemical composites (CPCs) in the context of food safety embodies a dual functionality. Primarily, these conjugates are harnessed as natural preservatives in food processing, showing inhibitory efficacy against both pathogenic and antibiotic-resistant bacterial strains. This activity significantly extends the shelf life of food products by enhancing their quality properties, thereby contributing to food safety. Secondarily, the conjugates play a pivotal role as integral components of food packaging materials, imparting an enhanced layer of protection against bacterial contamination throughout storage and transportation.

Due to its unique structure as a linear polymer, chitosan can form a stable structural system with phytochemicals through covalent and non-covalent bonding, such as Schiff base formation (-C=N-), hydrogen bonding, and ionic interactions. These interactions result in a stable composite system that enhances the bioavailability and bioactivity of phytochemicals while improving the antimicrobial efficacy of CPCs. In addition, the surface behavior and stability of CPCs are critical for their application as food coatings or films, particularly in maintaining food quality during storage. Specifically, with properties such as film formation, thermal stability, moisture resistance, adhesion and surface uniformity, and high surface compatibility with food matrices, CPCs are expected to be highly versatile materials that extend shelf life, reduce spoilage, and enhance the safety of fresh and minimally processed foods.

With the growing threat of ARB and the pressing need for innovative food preservation methods, this paper aims to explore the potential of CPCs as a solution to address both challenges. By reviewing the current research, we provide insights into the development of CPCs as surface coatings to protect food from ARB, offering a viable alternative to conventional preservation technologies.
This approach not only addresses critical public health concerns but also aligns with industry trends toward natural antimicrobial solutions that enhance food safety and reduce the risk of antimicrobial resistance.

In this study, we gathered documents retrieved from reputable databases, specifically Web of Science, ScienceDirect, Scopus, and the National Library of Medicine (via PubMed), spanning the timeframe from 2000 to 2024. The search strategy focused on the keywords "antibiotic-resistant bacteria", "chitosan–phytochemical composites", and "food safety", aiming to identify documents pertaining to the utilization of CPCs as a potential strategy against ARB associated with foodborne illnesses. The investigation delved into various techniques for synthesizing chitosan–phytochemical conjugates, including chemical modification, grafting, and enzyme-mediated techniques. Furthermore, it assessed their inhibitory effects on antibiotic-resistant bacterial strains and explored the applications of CPCs in food preservation. A total of 250 scientific publications were screened and filtered, resulting in 176 articles being included in the final review. Of these, 53 articles were experimental studies focused on CPCs targeting ARB. The remaining 123 articles contributed to the broader context of the paper, including the introduction and other sections not directly related to experimental studies.

Food contamination with ARB and antibiotic residues is a global concern. First, even in countries with advanced food safety systems and strict regulations, contamination can still occur. This includes the presence of harmful bacteria, including ARB, that can survive throughout the food production and distribution chain. Factors such as improper handling, cross-contamination during processing, and breaches in hygiene protocols can contribute to such incidents. Additionally, the global nature of food trade allows contaminated products to spread across borders, posing significant public health risks. Second, antibiotic residues themselves present a serious food safety issue. Beyond promoting resistance, they also pose allergenic risks. Consuming food containing antibiotic residues can expose people to subtherapeutic doses of antibiotics, facilitating the development of resistant bacteria in humans. Moreover, some individuals may be allergic to specific antibiotics, and even trace amounts of residues in food can trigger allergic reactions, ranging from mild symptoms to severe anaphylaxis.

Antibiotic resistance poses a widespread global public health crisis characterized by the resilience of bacteria, fungi, and other microorganisms against the effectiveness of antibiotics. This phenomenon amplifies the complexity of infection treatment, resulting in prolonged illnesses, heightened healthcare costs, and, potentially, fatalities. Antibiotic resistance primarily stems from the imprudent overuse and misuse of antibiotics, fostering the evolution of certain bacteria and the emergence of resistance mechanisms. These mechanisms manifest through alterations in the bacterial genome, the acquisition of resistance genes via horizontal gene transfer, or the activation of pre-existing resistance genes. The multifaceted nature of antibiotic resistance represents a significant and multidimensional threat to public health. Firstly, elevated morbidity and mortality rates arise from the heightened complexity associated with treating resistant infections.
Secondly, patients with antibiotic-resistant infections often require extended hospitalization periods, leading to increased healthcare costs. Furthermore, the management of antibiotic-resistant infections results in greater financial expenditure due to the use of alternative, frequently more expensive, antibiotics. The propagation of resistance exacerbates the scarcity of effective antibiotics for infection treatment, potentially leading to the emergence of untreatable conditions in the absence of accessible and efficacious antibiotics. Moreover, antibiotic resistance introduces complications to surgical procedures, cancer treatments, and other medical interventions reliant on robust infection control measures. Within the context of global health threats, the facile dissemination of resistant bacteria across borders presents a formidable international health concern. International travel and trade expedite the rapid global spread of antibiotic-resistant strains.

Antibiotic use in food production: The utilization of antibiotics in food production is a noteworthy contributor to the escalating concern surrounding antibiotic resistance, a critical global public health issue. Antibiotics find extensive application in agriculture and animal husbandry for various purposes, primarily to prevent and treat bacterial infections in farm animals. In settings characterized by high population density and insufficient sanitation, common in numerous industrial-scale farming operations, antibiotics play a pivotal role in mitigating the transmission of diseases among animals. Additionally, the incorporation of subtherapeutic doses of antibiotics into animal feed for growth promotion, particularly in poultry and livestock farming, has become pervasive.

Consequences of routine antibiotic applications in food production: The routine application of antibiotics in food production yields several consequences that contribute to antibiotic resistance. demonstrates that pathogenic bacteria found in food exhibit resistance to multiple commonly used antibiotics. The consistent use of antibiotics in animal agriculture exposes bacteria in these environments to protracted periods of low antibiotic concentrations, creating selective pressure that favors the survival and proliferation of ARB. Subsequently, these resistant bacteria may infiltrate the food chain, potentially reaching consumers and facilitating the dissemination of resistance. The second factor underscores the presence of resistant bacteria in food items, including meat and dairy products, which act as reservoirs of infection for humans. Consumption of contaminated food places individuals at risk of acquiring infections from these strains, thereby reducing the efficacy of antibiotic therapies.

Amidst the escalating challenge of antibiotic resistance, chitosan and its derivatives have emerged as a promising alternative in the ongoing battle against ARB.

4.1. Introduction of Chitosan

Chitosan is a biopolymer derived from chitin, a naturally occurring polymer found in the shells of crustaceans such as shrimp, crab, and lobster, as well as in the cell walls of fungi, insects, algae, and microorganisms. It is produced through the deacetylation of chitin, a process that removes acetyl groups from the chitin molecule, resulting in the formation of chitosan. Chitosan comprises glucosamine units joined by β-(1,4)-glycosidic bonds. Chitosan is characterized by its functional groups, such as amine, hydroxyl, and acetylated amine groups. These functional groups confer a variety of biological properties, including film-forming capabilities, hypolipidemic activity, biodegradability, antimicrobial activity, immunoadjuvant activity, and acceleration of wound healing. Notably, the amine functional group (-NH2) has garnered significant attention from the scientific community, as it enhances solubility in acidic environments, facilitates chemical modifications by reacting with various compounds, and contributes to antibacterial activity by interacting with bacterial cell membranes. The inherent attributes of chitosan directly relate to the scope of its applicability, as illustrated in . Due to its inherent nature as a biopolymer with substantial molecular weight, chitosan, when dissolved in a solvent, produces a high-viscosity gel-like solution. This solution subsequently facilitates the development of a thin film upon deposition or spraying onto material surfaces. The resultant chitosan film exhibits durability, resistance to tearing, and a capacity for biodegradation. Chitosan degradation primarily occurs through enzymatic processes, with lysozyme and chitinase playing pivotal roles. It is noteworthy that lysozyme is abundantly present in the human body, particularly within the pulmonary system, making chitosan a viable material for both drug delivery and medical applications. The positively charged amino groups of chitosan can establish bonds with negatively charged molecules, including lipids and bile acids; consequently, these complexes may be excreted in the feces. The interaction between chitosan and oil droplets influences the digestibility of the oil: the presence of chitosan reduces the accessibility of oil to digestive enzymes, resulting in a decrease in oil digestion. Additionally, chitosan features numerous -NH2 groups, making it soluble in acidic solutions and imparting a positive charge.

4.2. Antibacterial Mechanisms of Chitosan

Chitosan is well-known for its antibacterial properties, but its mechanisms of action differ from those of conventional antibiotics. Resistance against antibiotics occurs because these substances have specific molecular targets within bacterial cells. Bacteria can acquire mutations in these specific targets or develop mechanisms to bypass them, leading to resistance. On the other hand, chitosan has been reported to be a multi-target compound, interacting with different cellular processes simultaneously.
This may make it challenging for bacteria to develop resistance, as it would require simultaneous mutations in various pathways. To date, there has been no evidence that chitosan specifically can cause or contribute to antibiotic resistance in bacteria, and further research is needed to better understand this issue. In this section, we discuss the action mechanisms by which chitosan inhibits bacteria to highlight its great potential in addressing the increasingly serious issue of ARB.

Chitosan and its derivatives exhibit distinct mechanisms of action against Gram-positive and Gram-negative bacteria. This divergence in mechanisms can be attributed to variations in the composition of the bacterial cell wall. In Gram-positive bacteria, the cell wall consists of peptidoglycan, wall teichoic acid (WTA) covalently linked to peptidoglycan, and lipoteichoic acid (LTA) anchored to the microorganism's cell membrane. Both WTA and LTA feature a negatively charged anionic structure. These teichoic acids create a densely arranged layer of negative charges in the cell wall, which, in turn, inhibits the passage of ions across the membrane, as shown in . In Gram-negative bacteria, the cell envelope is composed of two membranes separated by a periplasmic space that includes a thin layer of peptidoglycan. As depicted in , the lipid makeup of the outer membrane in Gram-negative bacteria displays an imbalance: lipopolysaccharide (LPS) is predominantly located in the outer layer, while the inner layer is composed of a diverse array of phospholipids. The outer layer of Gram-negative bacteria is characterized by negative charges stemming from the phosphate and pyrophosphate groups of LPS present in the outer membrane. Chitosan's antimicrobial effects are commonly explained through four widely accepted models.

4.2.1. Membrane Disruption

The antimicrobial attributes of chitosan can be explained by the interaction between the positively charged ions on chitosan molecules and the negatively charged entities on the membranes of microorganisms: chitosan is known to disrupt the integrity of microbial cell membranes. The positively charged amino groups in chitosan interact with the negatively charged components of microbial cell membranes, such as LPS in Gram-negative bacteria or teichoic acids in Gram-positive bacteria. This interaction can lead to membrane permeabilization, compromising the structural integrity of the microorganism and resulting in cell death.

4.2.2. Interaction with Microbial DNA

Low-molecular-weight (LMW) chitosan and its hydrolyzed derivatives can enter cell walls and impact DNA/RNA as well as protein synthesis. The binding of oleoyl–chitosan nanoparticles (OCNPs) to DNA/RNA was investigated by assessing their impact on the electrophoretic mobility of nucleic acids. The results demonstrated that as the concentration of OCNPs increased, interactions with bacterial genomes intensified. When the OCNP concentration reached 1000 mg/L, the migration of E. coli and S. aureus DNA and RNA was completely inhibited. This inhibition is presumed to arise from the interaction between the negatively charged phosphate groups in nucleic acid chains and the positively charged amino groups in OCNPs, thereby affecting the pathogen. In another study, the effect of chitosan on protein biosynthesis was examined by inhibiting β-galactosidase expression.

4.2.3. Formation of a Polymer Film on the Surface of Microorganisms

Chitosan with a high molecular weight can form a dense polymer film on the cell surface, covering the porins on the outer membrane of Gram-negative bacteria. This process obstructs the exchange of nutrients and the uptake of oxygen, ultimately leading to the death of microbial cells. The presence of this film is evident through the visibly thicker cell walls, indicating the deposition of chitosan on the cell surface.

4.2.4. Chelation of Nutrients by Chitosan

Both Gram-positive and Gram-negative bacterial cell wall components have an affinity for the divalent metal cations essential for microbial cell vitality, because these cations maintain enzymatic functions and the stability of cytoplasmic membranes. WTAs in the peptidoglycan layer of Gram-positive bacteria, particularly attracting Mg2+ and Ca2+ cations, are crucial for maintaining enzymatic functions and membrane integrity. LPS on Gram-negative bacterial cell surfaces, contributing to the negative charge of the bacterial cell membrane, also exhibits a strong affinity for divalent cations. The absence of these cations renders bacteria more susceptible to chemicals or certain antibacterial agents. At a pH lower than 6.0, the protonated amino groups in the chitosan polymer chain compete with divalent cations for the phosphate groups present in LPS or teichoic acid structures. The addition of Mg2+ and Ca2+ to the culture medium increases the membrane's positive charge, thereby diminishing the antimicrobial efficacy of chitosan. However, when the pH of the medium is higher than the pKa of chitosan, the amino groups of chitosan become unprotonated and can therefore donate their lone pairs of electrons to the metal ions on the cell surface.
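This pH dependence follows the Henderson-Hasselbalch relation. As a small numerical illustration, the sketch below assumes a pKa of about 6.5 for chitosan's amino groups, a commonly cited approximate value that varies with the degree of deacetylation, and computes the protonated (cationic) fraction at a few pH values.

```python
# Fraction of chitosan amino groups protonated (-NH3+) at a given pH,
# from the Henderson-Hasselbalch relation. The pKa of ~6.5 is an
# assumed, commonly cited approximation for chitosan.
PKA = 6.5

def protonated_fraction(ph: float, pka: float = PKA) -> float:
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (5.0, 6.0, 6.5, 7.4):
    print(f"pH {ph}: {protonated_fraction(ph):.0%} of amino groups protonated")
```

The steep loss of cationic character above the pKa is consistent with the shift described above, from electrostatic competition for phosphate groups at low pH to electron-pair donation by unprotonated amino groups at higher pH.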
This review emphasizes the use of phytochemicals over synthetic antibiotics or chemicals in developing chitosan composites to address contamination by ARB on food surfaces. The preference for phytochemicals is based on several critical factors. Synthetic antibiotics often impose selective pressure on microbial populations, promoting the emergence and proliferation of ARB, thereby exacerbating the antibiotic resistance crisis. Moreover, synthetic chemicals are associated with potential health risks, including toxicity, allergenicity, and carcinogenic properties. In contrast, phytochemicals derived from plants are generally considered safer alternatives with fewer side effects. In addition to their antimicrobial effects, phytochemicals offer dual benefits: they not only exhibit antimicrobial properties but also possess antioxidant activity, which helps prevent food spoilage by inhibiting oxidative processes. These advantages are rarely observed with synthetic antimicrobials.
Phytochemicals are naturally occurring metabolites found in plants, commonly referred to as bioactive compounds. These compounds contribute significantly to the diverse array of flavors, colors, and aromas present in fruits, vegetables, grains, legumes, nuts, and other plant-based foods. In addition, phytochemicals encompass a broad spectrum of compounds that help plants to defend against environmental stresses and to attract pollinators. Moreover, phytochemicals play a crucial role in promoting human health and are gaining attention for their potential applications in the food industry, particularly for antimicrobial purposes. Phytochemicals are classified into several categories based on their chemical structures and functional properties. Among known phytochemicals, phenolic compounds (e.g., flavonoids, phenolic acids), alkaloids (e.g., caffeine, nicotine), terpenoids (e.g., carotenoids, terpenes), and organosulfur compounds are the most common compounds identified in plant sources. These natural compounds exhibit a myriad of bioactivities that contribute to human health-promoting effects. Phytochemicals, similar to chitosan, are multi-target compounds that can inhibit bacteria by simultaneously affecting different pathways, which would require bacteria to develop multiple mutations to resist them. There have also been no reports of bacterial resistance to phytochemicals. Furthermore, the combination of chitosan and phytochemicals may offer a synergistic inhibitory effect on bacteria by increasing the number of action targets, making it more effective in overcoming antibiotic resistance. This section highlights phytochemicals with potential antibacterial properties that can be combined with chitosan to achieve synergistic effects against ARB.
provides an overview of commonly studied phytochemicals and their antibacterial activities. Numerous phytochemicals, as shown in , exhibit antioxidant properties, aiding in the neutralization of detrimental free radicals within the body and diminishing oxidative stress. This antioxidant activity is pivotal in mitigating the risk of chronic ailments like cancer, cardiovascular diseases, and neurodegenerative disorders. Consumption of phytochemical-rich foods has been associated with numerous health benefits. For example, previous studies have shown that diets high in fruits, vegetables, whole grains, and legumes, which are abundant sources of phytochemicals, are linked to a reduced risk of chronic diseases such as heart disease, stroke, diabetes, and certain types of cancer.

In the food industry, phytochemicals serve various purposes, including enhancing the nutritional value, flavor, color, and shelf-life of food products. Additionally, their antimicrobial properties make them valuable natural preservatives for extending the storage stability of perishable foods and inhibiting the growth of foodborne pathogens and spoilage microorganisms. For instance, essential oils containing phytochemicals such as thymol, carvacrol, and eugenol have been shown to exhibit strong antimicrobial activity against a wide range of bacteria, fungi, and viruses. These natural compounds can be incorporated into food packaging materials and coatings or directly added to food formulations to inhibit microbial growth and prolong the freshness of food products without the need for synthetic preservatives. Moreover, plant-derived extracts rich in polyphenols, such as green tea extract and grape seed extract, have been used to prevent lipid oxidation and microbial spoilage in meat, poultry, and seafood products. Their antioxidant and antimicrobial properties not only maintain the quality and safety of food but also meet consumer demand for clean-label and minimally processed foods.

Several studies have elucidated the primary mechanisms by which phytochemicals inhibit ARB. These mechanisms include membrane permeability alteration, efflux pump inhibition, enzyme inhibition, and plasmid curing. Some ARB strains tend to form impermeable membranes or modify their cell wall structures to reduce the permeability of antibiotics. Research has shown that certain phytocompounds, such as flavonoids, terpenoids, and hydrophobic compounds (e.g., essential oils), can interact with membrane proteins or lipids, leading to structural changes that restore the membrane's permeability to antibiotics. Efflux pump inhibition is another critical mechanism by which phytochemicals combat ARB. Bazzaz et al. demonstrated decreased efflux of ethidium bromide in the presence of galbanic acid (a sesquiterpene coumarin from Ferula szowitsiana roots) in six drug-resistant strains of S. aureus. Inhibition of efflux pumps in ARB strains prevents the expulsion of antibiotics, allowing them to remain inside the bacterial cells longer. Moreover, some drug-resistant strains carry antibiotic-modifying enzymes that can degrade or metabolize antibiotics. Allicin (diallyl thiosulfinate), found in Allium sativum, has been reported to enhance the activity of ciprofloxacin, tobramycin, and cefoperazone against P. aeruginosa by inhibiting sulfhydryl-dependent enzymes such as RNA polymerase, thioredoxin reductase, and alcohol dehydrogenase.
Of particular interest is the fact that the primary reservoirs of genes encoding antibiotic resistance in ARB strains are plasmids. Plant-derived chemicals and extracts have shown significant efficacy in curing ARB plasmids. For example, 8-epidiosbulbin E acetate from Dioscorea bulbifera has been proven to cure the antibiotic-resistant R-plasmids of isolates of E. faecalis, E. coli, Shigella sonnei, and P. aeruginosa, with an average curing efficiency of 34%. In summary, the ability of phytochemicals to combat ARB and their mechanisms of action have been demonstrated at various experimental levels, from in vitro assays to clinical tests. The research, discovery, and development of new formulations based on safe, natural materials such as phytocompounds will continue to be a highly attractive research topic.

The utilization of synthetic materials based on chitosan has been increasingly researched and applied, particularly in the food industry and biomedical fields. Being a natural polysaccharide, chitosan possesses both hydroxyl and amino functional groups, facilitating its conjugation with bioactive compounds and thereby contributing to the diversity in structure and function of synthesized chitosan composites. In fact, other polysaccharides, such as alginate, can also act as carriers of phytochemicals in the development of novel bioactive composites. Alginate contains carboxyl (-COOH) and hydroxyl (-OH) groups, supporting ionic and hydrogen bonding interactions. However, the mechanical strength of the alginate–phytocompound complex is generally low, and cross-linking agents such as calcium ions are required to enhance it. Unlike alginate and other polysaccharides, chitosan contains reactive amino (-NH2) and hydroxyl (-OH) groups, while phytochemicals possess hydroxyl (-OH) and carbonyl (-C=O) groups. These functional groups enable both covalent and non-covalent interactions, contributing to the formation of stable and bioactive composites. Specifically, the amino (-NH2) group of chitosan reacts with the carbonyl (-C=O) group of phytochemicals through a condensation reaction, forming a -C=N- imine linkage, which significantly enhances the stability and antimicrobial activity of the composite. Additionally, hydroxyl (-OH) groups from both chitosan and phytochemicals facilitate hydrogen bonding, providing further structural stability. Under specific conditions, hydroxyl groups may also participate in esterification reactions, further strengthening the composite structure. These interactions collectively enhance the mechanical and chemical stability of the chitosan–phytochemical composite while improving its bioactivity, particularly its antimicrobial properties, as demonstrated in .
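As a schematic aid, the Schiff base route described above can be written as a condensation between a chitosan amino group and a phytochemical carbonyl group; here R stands for the remainder of the phytochemical molecule, and catalysts and reaction conditions are omitted:

$$\text{Chitosan-NH}_2 + \text{R-CHO} \longrightarrow \text{Chitosan-N=CH-R} + \text{H}_2\text{O}$$

The -C=N- imine bond shown here is the linkage referred to in the text, with water released as the by-product of the condensation.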
Several common methods for the synthesis of chitosan–antimicrobial conjugates have been reported, including (1) Schiff base formation, (2) free radical-induced conjugation, (3) enzyme-assisted coupling reactions, (4) ionic cross-linking, and (5) chemical modification, as depicted in . Most of these methods are selectively employed with phytochemicals, among which phenolic compounds have been the most widely studied. summarizes some common synthesis methods for CPCs along with their corresponding phytochemicals.

In this study, we propose a new strategy for synthesizing CPCs to optimize the compatibility, stability, and activity of the final product. Unlike other reported methods, this strategy focuses on the careful selection of phytochemical candidates with strong and stable properties, particularly in terms of the structural integrity and composition of the functional groups involved in the synthesis reactions. Spectroscopic techniques such as nuclear magnetic resonance (NMR), infrared (IR) spectroscopy, and mass spectrometry (MS) serve as tools for qualitative analysis and quality control of chitosan conjugates under various conditions such as temperature, pH, and UV exposure. Ultimately, an essential step involves the evaluation of the activity of the synthesized composites and their practical applications. A typical synthesis strategy for chitosan–phytochemical conjugates includes the following steps.

- Selection of phytochemical candidates: The careful selection of phytochemicals is imperative and contingent upon the specific intended application. Various plant extracts and bioactive compounds, renowned for their distinct properties such as antioxidant, antimicrobial, and anti-inflammatory effects, should be meticulously chosen through both in vitro and in vivo assays.
- Preparation of chitosan solution: Chitosan is typically dissolved in an acidic solution. The selection of solvents profoundly influences the solubility and characteristics of the resultant CPCs.
- Incorporation of phytochemicals: Phytochemicals are then introduced into the chitosan solution, ensuring homogeneity. This step may involve stirring, sonication, or other methods to facilitate thorough blending and ultimately optimize the amalgamation.
- Composite formation: The composite is formed by allowing the solvent to evaporate, leading to the coalescence of chitosan and phytochemicals. Techniques such as casting, freeze-drying, or electrospinning are employed based on the desired structural configuration of the composite.
- Characterization: The synthesized composite is characterized using various analytical techniques, including spectroscopy, microscopy, and mechanical testing. This step helps assess the structural integrity, chemical composition, and functional properties of the composite.
- Evaluation of stability and biological activities: The stability of the synthesized composites is evaluated under diverse environmental conditions, while the biological activity of the product is scrutinized both in laboratory settings and in actual applications.

In this section, several examples of using CPCs to inhibit ARB growth are reported. highlights the potential of CPCs in combating bacteria that are widely known for their antibiotic resistance. The data demonstrate that CPCs exhibit greater antimicrobial activity than unmodified chitosan (UC). This can be explained by the interactions or synergistic effects of chitosan and phytochemicals, which may enhance the inhibition of bacteria through an increase in the number of targets of action. These findings may provide an effective approach to addressing the challenges posed by ARB. Consequently, the subsequent section will delve into their applications in ensuring food safety. Differences in the modes of antibacterial activity have also been demonstrated: it was reported that caffeic acid–chitosan conjugates could quench free electrons from the electron transport chain, altering the electric potential of the bacterial membrane. Additionally, these conjugates can reduce or completely inhibit bacterial growth by interfering with the proton efflux pump as a result of their interaction with the dehydrogenase enzyme.
In this section, several examples of using CPCs to inhibit ARB growth are reported. highlights the potential of CPCs in combating bacteria that are widely known for their antibiotic resistance. The data demonstrate that CPCs exhibit greater antimicrobial activity than UC. This can be explained by the interactions or synergistic effects of chitosan and phytochemicals, which may enhance the inhibition of bacteria by increasing the number of targets of action. These findings may provide an effective approach to addressing the challenges posed by ARB. Consequently, the subsequent section will delve into their applications in ensuring food safety. Differences in the modes of antibacterial activity have also been demonstrated. It was reported that caffeic acid–chitosan conjugates could quench free electrons from the electron transport chain, altering the electric potential of the bacterial membrane. Additionally, these conjugates can reduce or completely inhibit bacterial growth by interfering with the proton efflux pump as a result of their interaction with the dehydrogenase enzyme. Hydroxycinnamic acid-grafted chitosan can disrupt cell membranes and cause cytoplasmic leakage. These studies have revealed that CPCs are promising candidates for controlling ARB. However, in practical applications, CPCs face challenges such as sensitivity to environmental factors (e.g., humidity and pH), which can impact their stability and mechanical durability. Furthermore, although these materials are considered safe, in vivo tests, followed by clinical trials and long-term studies, are required to ensure the safety of these candidates for widespread use. Moreover, the strong flavor of some phytochemicals can affect the sensory quality of food, requiring careful formulation. Continuous research on specific food types (e.g., fish, meat, and postharvest products) is also essential to confirm the potential of CPCs.
8.1. Preparation Techniques for Chitosan–Phytochemical Composite Coatings/Films
8.1.1. Chitosan–Phytochemical Composite-Based Coatings
Coating refers to the application of a layer or covering on the surface of a food using different techniques such as spraying, dipping, or spreading. CPC coating serves as a valuable technique for inhibiting microbial growth, extending shelf life, and preserving overall product quality. The coating procedure involves several sequential steps: (1) preparation of raw materials by blending appropriate proportions of chitosan and fillers; (2) creation of coating samples using various methods such as irradiation, heating, mixing, and steam flash pasteurization; (3) sanitization of food samples using sodium hypochlorite; (4) application of chitosan-based composite solutions to food using a sterile spreader; (5) drying under specific conditions; and (6) packaging and storage in suitable environments. Antimicrobial components have the potential to migrate from the films into the food, thereby extending its shelf life. These techniques are employed in the coating of fruits and vegetables to enhance their longevity.
8.1.2. Chitosan–Phytochemical Composite-Based Film
Film forming refers to the process of creating a film or coating by dissolving chitosan in a solution and then casting or spreading the solution to form a solid film as the solvent evaporates. The procedure is cost-effective and straightforward, with the polymer structure forming through intermolecular electrostatic and hydrogen bonding. The typical approach for producing the film through the solution casting method is shown in . The fabrication process comprises multiple steps. (1) Initially, to produce the casting solution, chitosan is dissolved in an acidic solution with a pH below 6.0. This solution subsequently undergoes blending, mixing, or crosslinking with other biopolymers, fillers, and functional materials in varying proportions. The resulting mixture is then stirred to achieve a homogeneous viscous solution, followed by processes such as filtration, sonication, or centrifugation to eliminate air bubbles and insoluble particles. (2) Once prepared, the solution is cast or poured onto a surface for drying in the casting phase. (3) Finally, after complete drying, the resulting film is peeled off. The produced edible film is then employed to wrap the surface of a food product and has positive effects on the preservation of postharvest products.
8.1.3. Chitosan–Phytochemical-Based Layer-by-Layer Edible Coatings
The layer-by-layer method for coating food is an advanced technique that entails the sequential deposition of alternating layers of materials onto food products, resulting in a multilayered coating, as illustrated in . This approach offers precise control over the composition, thickness, and functionality of the coating, making it particularly advantageous for various food applications. This technique for creating multi-layered films eliminates the need for intricate equipment. Within the layer-by-layer assembly paradigm, surface modification primarily depends on the mutual attraction and deposition of alternating polyelectrolytes.
Layer-by-layer deposition offers the opportunity to formulate active packaging films and coatings by incorporating active agents. It has been suggested that the layer-by-layer method, when used synergistically with other techniques, effectively preserves food quality and extends shelf life.
8.2. Application of Composite Coatings/Films in the Food Industry
8.2.1. Application in the Preservation of Fish
Chitosan has demonstrated its effectiveness in extending the shelf life of fish and fishery products by inhibiting the growth of spoilage bacteria and preserving product quality. In another study, a coating composed of chitosan and curcumin nanoparticles significantly reduced bacterial counts and prolonged the shelf life of fish fillets for up to 15 days. Ehsani et al. explored the effects of chitosan films combined with sage essential oil on the deterioration of fish burgers made from common carp flesh. The coatings effectively inhibited or slowed the growth of harmful and spoilage-causing bacteria, with sage essential oil preventing the production of off-flavors.
8.2.2. Application in the Preservation of Meat
When bacteria infiltrate the inner layers of meat, it is challenging to eliminate them through cooking. Deterioration caused by microorganisms is the primary factor affecting the quality and shelf life of meat. Packaging materials made from antimicrobial substances are now prioritized over artificial food preservatives. Hence, several studies have been undertaken to develop films based on CPCs and to employ them in the processing of meat products, including red meat, poultry, and pork. Green tea extract has been incorporated into a chitosan-based film as an active component to prolong the shelf life of pork sausages. The results indicated that pork sausages enveloped in a chitosan film containing green tea extract demonstrated fewer alterations in color, texture, 2-thiobarbituric acid value, sensory attributes, and microbial growth compared to the control group. Ultimately, the study suggested that the addition of green tea extract to the chitosan film could enhance its antioxidant and antibacterial properties, thereby preserving the quality and extending the shelf life of pork sausages. In another study, Gaba et al. investigated the efficacy of chitosan films combined with essential oils of oregano and thyme in mitigating meat deterioration and inhibiting harmful microorganisms. The produced films successfully suppressed the proliferation of psychrophilic bacteria.
8.2.3. Application in the Preservation of Postharvest Products
The utilization of natural preservatives in the preservation of postharvest products has become increasingly crucial to enhance food safety, extend shelf life, and minimize food waste. The application of edible coatings and films based on CPCs has gained prominence in postharvest preservation. Edible coatings contribute to preserving fruits and vegetables while offering a sustainable alternative to traditional packaging materials. The use of biopolymer-based edible films for wrapping or coating presents an affordable, straightforward, and effective solution to prevent moisture loss and reduce degradation and respiration rates. Additionally, preserving minimally processed fresh-cut products presents a significant challenge due to their rapid deterioration. This is primarily driven by increased respiration rates, elevated ethylene production, and accelerated consumption of sugars, lipids, and organic acids during the ripening process. These changes can result in texture degradation, moisture depletion, and undesirable alterations in flavor and color associated with senescence. To address these spoilage issues, applying edible films or coatings can be effective.
These films form a semi-permeable barrier to gases and water vapor, thereby maintaining food quality and extending shelf life while enhancing the appearance, flavor, color, and nutritional value of the products. Studies have shown that incorporating chitosan-based composites with bioactive compounds is advantageous in extending the shelf life and maintaining the quality of postharvest products. For instance, a chitosan composite film containing apple peel polyphenols increased the shelf life of strawberries. A chitosan–salicylic acid composite maintained the quality of fresh-cut cucumbers and improved their shelf life. Similarly, a chitosan–ascorbic acid composite extended the storage period of fresh-cut apples. In the case of mushrooms, a chitosan–thyme oil composite successfully prolonged their shelf life. For fresh-cut potatoes, chitosan coatings with cinnamon essential oil have been developed to enhance quality and microbiological safety. Similarly, Yang et al. employed a chitosan coating comprising blueberry leaf extract to enhance the preservation of fresh blueberries. These examples highlight the versatility and effectiveness of chitosan-based composites in fresh produce preservation.
8.2.4. Application in Intelligent Packing (IP) Technology
Recently, smart agriculture, via the application of advanced technologies, has been considered the future of global agricultural production. In addition to increasing the productivity and quality of agricultural products, the preservation of these products using IP technology has been receiving rising interest worldwide. For example, IP films have been developed based on the fortification of natural polymers (e.g., starch, cellulose, chitosan, pectin, etc.) with pH-sensing materials. Thus, IP films can feasibly monitor the quality and freshness of packed food in real time through visible color changes driven by the pH shifts of the spoilage process, especially in proteinaceous foods. This packing technology helps food supply chain managers control products during storage, transportation, and distribution. Additionally, consumers can more easily make better decisions regarding food freshness and quality. In the past, chemical substances such as cresol red, bromophenol blue, and bromocresol green were used as pH colorimetric indicators for producing IP films. However, safety concerns are a major limitation in applying these synthetic dyes. Therefore, anthocyanins derived from plants can be mentioned as an alternative source of pH indicators for IP films. These natural pigments are stable under acidic conditions and unstable at higher pH levels. In addition to anthocyanins, natural products such as essential oils, phenolics, fatty acids, terpenes, flavonoids, and steroid extracts from plants can be directly added to the film-forming matrix, which can improve pH sensing and water vapor resistance. Moreover, these plant-based materials can satisfy safety requirements, as well as contribute antioxidant, antibacterial, and antifungal activities, which may enhance the economic value of the film. Accordingly, these substances are promising candidates for the development of smart packaging technology. In terms of CPCs, while some investigations have explored their potential for active packaging, their direct integration into IP systems remains under development. In the context of rapid advancements in material science and artificial intelligence (AI), IP technology is expected to evolve beyond conceptual frameworks and soon find practical applications. Potential approaches include developing property models for various food types, establishing comprehensive datasets of possible pathogens, and using these to simulate CPC structures tailored to the specific surface characteristics of each food. Such approaches aim to optimize binding affinity and enhance the practical functionality of CPCs in actual IP applications.
CPC coatings are considered to contribute to the preservation and protection of food from production to consumption. In particular, CPC coatings inhibit bacterial growth on food surfaces, creating a protective barrier that helps to prevent or significantly reduce the risk of foodborne illnesses. By actively suppressing the proliferation of harmful bacteria, CPC coatings contribute to enhanced food safety and extend the shelf life of food. Additionally, these coatings help maintain the freshness and nutritional quality of the food throughout transportation and storage. CPC coatings can also reduce the risk of cross-contamination during handling, packaging, and transportation, protecting food from harmful bacteria. On another note, although good practices (e.g., washing, cooking, or processing before consumption) can reduce the risk of bacterial contamination and food poisoning, these practices require stringent and rigorous standards, covering the hygiene of the person handling the food, the cooking utensils, and the water used during cooking, among other factors. Furthermore, considering the widespread presence of microorganisms and the multiple pathways of food contamination, it is challenging to assert that food processed in such a manner is completely free of bacteria. While improper washing, cooking, or processing can adversely affect food safety, the pre-storage preservation of food before it reaches the consumer is a crucial step. Consequently, the application of CPC coatings shows significant potential in preventing the accumulation of bacteria and their toxins, as well as reducing the risk of contamination by pathogenic bacteria, particularly ARB. On the other hand, it is noteworthy that combining multiple control processes and safety measures simultaneously will yield optimal results in protecting consumer health. Therefore, CPC coatings can act as an additional measure to enhance food safety alongside good practices. For example, if bacteria are inhibited or minimized by the coating, cooking will be more effective in eliminating the remaining bacteria in food. Additionally, protective coatings are crucial for uncooked products such as raw vegetables, fresh fruits, and sushi, as these are not cooked before consumption.
Similarly, processed or convenience foods require protective coatings to maintain freshness and safety before consumption. Studies suggest that CPC coatings can be a natural alternative to synthetic chemical preservatives, helping reduce the negative impact on consumer health. To apply CPCs effectively in food preservation and ensure food safety, it is essential to select the appropriate CPC formulation for each type of food. Achieving this requires thorough data collection on bacterial contamination risks, including information on ARB found in specific food types, such as meat, vegetables, fish, and dairy, as detailed in . Subsequently, bacterial contaminants can be detected using traditional culture-based methods and rapid molecular techniques. Following detection, model simulations, such as multi-factorial models, ComBase models, bi-dimensional and tri-dimensional models, and Baranyi or Gompertz models, can be employed to simulate bacterial growth under various conditions, including initial contamination levels, storage conditions, and handling methods. Based on the simulation results, CPC research can then be conducted to identify the most effective composite against the predicted bacteria. For example, if a high risk of E. coli contamination is identified, CPCs containing quercetin or gallic acid, known to inhibit E. coli, might be selected. CPC optimization involves adjusting the concentration and application method to maximize effectiveness against the identified bacteria. Experimental testing under real-world conditions follows, where the effectiveness of the CPC is tested against the predicted bacterial strains, and the simulation model is refined based on the experimental results. Continuous evaluation and improvement are critical, involving data analysis to compare experimental results with simulations and refining the model and methods to optimize CPC effectiveness in food preservation. Ultimately, using simulation and prediction methods helps identify the bacterial strains most likely to contaminate food, enabling the selection of an appropriate CPC for preservation and thereby enhancing the effectiveness of food preservation and safety.
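To make the growth-modelling step above concrete, the following minimal Python sketch evaluates the modified Gompertz equation (in the reparameterized form of Zwietering and colleagues), one of the model families named above. All parameter values are illustrative placeholders, not measured data:

```python
import math

def gompertz_log_increase(t, a, mu_max, lag):
    """Modified Gompertz model: natural-log increase ln(N(t)/N0) at time t (h),
    with asymptote a (ln units), max specific growth rate mu_max (1/h), lag (h)."""
    return a * math.exp(-math.exp((mu_max * math.e / a) * (lag - t) + 1.0))

def predicted_log10_count(n0_log10, t, a, mu_max, lag):
    """Convert the natural-log increase into a log10 CFU/g count."""
    return n0_log10 + gompertz_log_increase(t, a, mu_max, lag) / math.log(10)

# Hypothetical scenario: E. coli on a fresh-cut product in chilled storage,
# starting at 2 log10 CFU/g with a 12 h lag phase.
for hours in (0, 12, 24, 48, 72):
    count = predicted_log10_count(2.0, hours, a=6.9, mu_max=0.25, lag=12.0)
    print(f"{hours:>3} h: {count:.2f} log10 CFU/g")
```

A curve like this, once fitted to challenge-test data for a given food, is what would then be compared against the same product coated with a candidate CPC.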
The emergence of ARB presents a significant challenge to both public health and food safety. Consequently, there has been notable interest in exploring alternative antimicrobial agents. This article provides a comprehensive review of chitosan, a natural biopolymer derived from chitin, which has demonstrated antimicrobial properties, rendering it an appealing candidate for addressing ARB. When combined with phytochemicals sourced from plants, the resulting composite presents a synergistic effect, enhancing antimicrobial activity. Current research has yielded promising findings, indicating that CPCs possess antimicrobial properties capable of inhibiting the growth of antibiotic-resistant bacterial strains commonly encountered among foodborne pathogens. Furthermore, studies have illustrated the effective application of these composites across diverse food matrices, encompassing meat, fish, fruits, and vegetables, to mitigate microbial contamination and extend shelf life. The integration of natural compounds like chitosan and phytochemicals derived from crop residues and agricultural by-products aligns with the growing need for sustainable and eco-friendly substitutes for traditional antibiotics in food production systems. This direction holds promise and is deemed appropriate.
Future research should prioritize overcoming existing limitations and optimizing the efficacy of these composites. To date, research has primarily focused on chitosan composites with phenolic compounds, while studies involving chitosan composites with other phytochemical groups, such as terpenoids (including carotenoids and triterpenes) and alkaloids, remain limited. Furthermore, a deeper understanding of the mechanisms underlying the antimicrobial action of these composites is necessary to enhance their effectiveness against ARB. Standardization of methodologies and thorough characterization of the composites are crucial to ensure the reproducibility and comparability of research findings. Looking ahead, CPC coatings present significant potential for diverse applications across various fields, particularly in the food industry, due to their strong antibacterial and antioxidant properties. These coatings are biocompatible, safe for human use, environmentally friendly, and biodegradable. Their flexibility allows for a wide range of applications, from antimicrobial food packaging to preservation coatings. In the field of nanotechnology, chitosan–phytochemical nanoparticles, with their increased surface area compared to larger particles, can interact more effectively with bacteria, cells, and other molecules, thereby enhancing their antibacterial efficacy, including against ARB. These nanoparticles can be employed to extend the shelf life of food, inhibit the growth of harmful microorganisms, and protect consumer health by delivering antioxidants and beneficial plant compounds. Addressing the regulatory aspects related to the safety and approval of these novel antimicrobial agents for food use is also essential. Collaborative efforts among researchers, food manufacturers, and regulatory agencies will be vital for advancing this field and facilitating the adoption of CPCs as sustainable alternatives to conventional antibiotics in food production. Additionally, exploring innovative delivery systems and applications, such as incorporating these composites into food packaging materials, could provide the dual benefits of food preservation and antimicrobial protection. In agricultural production, crop residues and by-products pose significant environmental, economic, and social challenges. The effective utilization of these resources has recently become a global priority, contributing to the sustainable development of agriculture. Notably, crop residues and by-products have been shown to contain numerous bioactive compounds, such as anthocyanins, phenolics, flavonoids, fatty acids, terpenes, and steroids, which can be valuable for producing IP films. Recent advancements in extraction and isolation methods have improved the ability to obtain these compounds from plant sources through simple, convenient, and cost-effective procedures. Therefore, the efficient use of crop residues and by-products could make IP materials widely available, potentially reducing the global pollution caused by microplastics from conventional packaging. Overall, whether in laboratory research or industrial-scale applications, the stability and performance of CPCs are crucial for their effectiveness in applications such as food preservation. Phytochemicals, being sensitive to light, UV radiation, and humidity, benefit from incorporation into the chitosan matrix, which provides a protective barrier against environmental stressors.
Within such composites, chitosan can shield phytochemicals from light and UV exposure while reducing their interaction with oxygen and moisture. Further stabilization can be achieved through encapsulation techniques, pH adjustments, and the use of cross-linking agents, which enhance the interaction between chitosan and phytochemicals. Forming chemical bonds, such as Schiff base (-C=N-) linkages or ester bonds, significantly improves the stability and retention of bioactive properties. Evaluating the composite involves assessing loading efficiency, release profiles, resistance to environmental factors, and functional bioactivities such as antimicrobial or antioxidant effects. These strategies collectively ensure that chitosan–phytochemical composites maintain their stability and functional performance, making them promising materials for food preservation. In practical production and application, addressing the surface characteristics of different foods is crucial to enhancing the applicability and effectiveness of CPCs in food preservation. Food surfaces exhibit varying hydrophilic or hydrophobic properties depending on their composition. For example, vegetable products often have waxy or cuticle layers that contribute to hydrophobicity, while fruit surfaces may vary in texture and chemical composition, affecting coating adherence. In contrast, meat surfaces are primarily hydrophilic, consisting of proteins and moisture, and necessitate tailored coating formulations for stable adhesion. CPC formulations must be designed to align with the specific surface properties of each type of food, a task that can be supported by advanced simulation technologies integrated with AI. Additionally, the distribution and attachment of pathogenic bacteria differ significantly between food surfaces and interiors. External contamination on surfaces can be managed more effectively with coatings, whereas internal microbial contamination requires preventive measures during harvesting, processing, and packaging. Furthermore, advancements in genetic technologies may enable the development of new plant and animal varieties with superior antimicrobial traits compared to traditional breeds, providing another layer of defense against microbial contamination.
High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare
72d35e68-dc0e-411e-960f-489ec9b4d3af
11734583
Psychiatry[mh]
Recently, researchers, media, and practitioners have taken a keen interest in developments in artificial intelligence (AI). Indeed, since the launch of ChatGPT and GPT-4 by OpenAI at the end of 2022, citizens and professionals from all sectors, including healthcare, have been debating the contributions, impacts, and risks of such technologies. This paper outlines the main ethical and legal considerations associated with the development and deployment of AI within healthcare systems. Medical doctors have used advanced technologies for many years. So why is AI different? First, it is far more disruptive. By allowing autonomous, opaque learning, and sometimes even decision-making, in a dynamic environment, AI leads to some unique technical, ethical, and legal consequences. For the first time since the birth of medicine, technology is not limited to assisting human gesture, organization, vision, hearing, or memory. AI promises to improve every area from biomedical research, training, and precision medicine to public health, thus allowing for better care, more adapted treatments, and improved efficiency within organizations. AI techniques including artificial neural networks, deep learning, and automatic language processing can now, for example, analyze a radiology image more quickly and precisely than a human, diagnose a pathology, predict the occurrence of a hyperglycemic crisis and inject an appropriate dose of insulin, and analyze muscle signals to operate an intelligent prosthesis. Yet, these improvements need to be balanced against the gap that now exists between the development (and marketing) of many AI systems and their concrete, real-life implementation by healthcare and medical service providers such as hospitals and medical doctors. This "AI chasm" is notably explained by the disconnect that sometimes exists between the information technology (IT) side of system development and systems' adaptation to the specific needs and reality of healthcare institutions and patients, as well as by the ethical and legal issues discussed in this paper. Investment in the infrastructure that leads to AI solutions capable of "being implemented in the system where they will be deployed (feasibility), [and of] showing the value added compared to conventional interventions or programs (viability)" should also be targeted. Second, health professionals generally seem to have rather poor knowledge of what AI is and what it allows. While there is no unanimous definition of AI, the one proposed by the Organization for Economic Cooperation and Development (OECD) has gained international traction and is often referred to in various policy initiatives. Based on this definition, this paper includes all kinds of computational systems processing input data to generate outputs such as predictions, content, recommendations, or decisions that can influence the healthcare environment in which they are implemented. In healthcare, AI has great potential and can be integrated into connected objects (e.g., smart blood pressure monitors), robotic systems (e.g., surgical robots), virtual assistants (e.g., patient management or appointment scheduling systems), chatbots (e.g., customer service), contact tracing during epidemic episodes, or medical decision support (e.g., radiological image recognition for diagnosis, choice of optimal treatment options).
The practice of medicine is based on medical doctors' knowledge and experience, and AI's dizzying calculation capacities mean that it can develop clinical associations and insights from data derived from this knowledge (i.e., evidence from textbooks) and experience (i.e., lab results from patients). Thus, to the extent that the "AI chasm" can be reduced, healthcare professionals will increasingly see intelligent tools or machines being integrated into their daily practice. This naturally provokes concerns such as the fear of being replaced and a lack of confidence in the machine. In addition, healthcare professionals are poorly informed about the ethical and legal issues raised by the use of AI. Worries about the blind spots, complex implementation, impacts, and risks of AI have generated much political, academic, and public debate. Some have called for new ethical frameworks to guide the responsible development and deployment of AI, which has led to numerous declarations, ethics charters, and codes of ethics, proposed by organizations of every type, including international organizations, public and academic institutions, hybrid groups, and private companies such as Google, IBM, Microsoft, and Telia. AI legislation has also been called for. All these instruments are sources of normativity. In other words, they guide human behavior, providing parameters for what "should" and "shouldn't" be done. However, the disciplines of ethics and law have distinct logics, conceptual frameworks, and objectives, and respond to different procedures of creation and implementation, making ethics and law two separate sources of normativity. First, law is composed of general, impersonal, external, and binding rules, accompanied by potential formal sanctions (by courts or police, for instance), while ethical norms do not form a coherent and organized set of norms, as is the case within a legal order, and adherence to ethical principles is voluntary. Second, legal rules derive from the state structure, in force at a given time, in a given legal space. The field of ethics, meanwhile, is derived from philosophy and, more recently, the social sciences, and relates to a reflexive process that does not freeze ethical principles in time and space but seeks to define them in a more dynamic way. Third, legal rules seek to provide a framework for the coexistence of people in society, to protect its members, and to guarantee political, economic, and social interests at the same time, whereas ethical norms and discussions are grounded more in moral values. In sum, legal rules could be defined as the minimal duty that every person must respect (whether one can do something), while ethics encourages reflection on choices and behaviors (whether one should do something). In healthcare, ethics first dealt with the manipulation of living organisms through "bioethics" before considering patient relationships through "clinical ethics" and management and governance through "organizational ethics". The latter two aspects are still difficult to grasp today, because they demand a global understanding of organizations that encompasses employees' issues beyond the relationship of care. Interestingly, despite the wealth of literature on AI, there is little that walks healthcare professionals through the main issues with an eye on the conceptual differences between ethics and law. This confusion is important to clarify, considering the different levels of opportunity and limitation that ethics and law bring forward in medical practice.
Therefore, in this paper, we highlight how ethics and law approach the issues of AI in health from different perspectives. While law is mostly a local matter, our reflection does not target any one national jurisdiction. Nevertheless, the examples we use to illustrate our analysis are focused on the western countries and regions most active in the AI field (on the governance and technical sides), i.e., the United States, Canada, Australia, the European Union, and the United Kingdom. In ethical matters, the discussion encompasses a variety of ethical work on AI, but the monopolization of the ethical debate by a few countries from the Global North should be underlined. This paper presents an overview of the main issues pertaining to AI development and implementation in healthcare, with a focus on the ethical and legal dimensions of these issues. To summarize these, we analyzed the literature that specifically discusses the ethical and legal dimensions of AI development and implementation in healthcare, as well as relevant normative documents that pertain to both ethical and legal issues (i.e., AI ethics guides or charters developed by governments, international organizations, and industries, as well as legal instruments). After this analysis, we created categories regrouping the most frequently cited and discussed ethical and legal issues. We then proposed a breakdown within these categories that emphasizes the different, yet often interconnecting, ways in which ethics and law are approached for each category of issues. Finally, we identified several key ideas for healthcare professionals and organizations to better integrate ethics and law into their practices. The paper is divided into six sections, corresponding to the most important issues associated with AI in healthcare: (1) Privacy; (2) Individual autonomy; (3) Bias; (4) Responsibility and liability; (5) Evaluation and oversight; and (6) Work, Professions, and the Job Market. In conclusion, we advance a few proposals aimed at resolving some of the highlighted issues for healthcare professionals. In machine learning or deep learning models, the computational algorithm solves problems by seeking connections, correlations, or patterns within the data on which it is "trained". Since the effectiveness of these models depends heavily on the quality and quantity of training data, one of the most common techniques in AI technology development is to collect, structure, and use as much varied data as possible. In the healthcare arena, this data can take many forms, such as measurements of a patient's clinical vital parameters, biological analysis results, or genetic characteristics, and it is created and collected from a wide variety of sources, from traditional healthcare system activities to consumers' self-tracking with digital technologies (the "quantified self"). Thus, this type of data is linked to an individual or a group that is directly or indirectly identifiable or targetable. However, health data is much broader than most people realize, and can also cover diet, exercise, and sleep, all collected by private companies outside the health system through connected devices such as smartphones and smart watches. Considering the intimacy and sensitivity of health data and the many actors potentially involved, AI highlights the question of individual privacy.
Privacy and the law
From a legal perspective, privacy refers to the principles, rules, and obligations embedded in law that protect informational privacy and personal information. These rules are also challenged by the characteristics of AI techniques in the field of healthcare. Specifically, it becomes harder to respect principles and rights already enshrined in law, and the application of certain rules becomes more perilous, either because it ends up blocking the creation or use of a system, or because it does not allow the protection of privacy. While the following discussion is not exhaustive, it represents the bulk of legal discussions about informational privacy. First, a law's scope of application has a major impact on the protection that it will grant. While the common meaning of "personal data" may be clear, its legal definition can vary between countries (and even within them). For example, it may refer narrowly to data managed and held in a particular file or by a particular entity (e.g., the U.S. HIPAA Privacy Rule, which covers certain entities within the traditional health system, or the Australian Privacy Act, which applies only to health service providers). It may also extend its protection to information that allows both direct and indirect identification (e.g., first and last name, social security number, address, phone number, race, or identification key, depending on the country), and to re-identification capacities (e.g., overlaying two sets of data to create a deep learning database for the AI system). An example is the new California Consumer Privacy Act, which includes "reasonable" possibilities for re-identification. Laws can define personal health data as data that is medical by nature (e.g., a medical test result), by purpose (e.g., used medically), or by cross-referencing (e.g., crossed with other data, as in AI analysis, to provide health information in combination), as appears to be the case with the French Data Protection Authority based on the European General Data Protection Regulation (GDPR) definition. Second, AI also challenges rules regarding the collection, use, and disclosure of personal data. For example, the requirement to determine in advance the purposes for which data will be used is a fundamental tenet of many privacy laws. Similarly, the legal obligation of proportionality, minimization, or necessity requires that data be processed only to the extent necessary for the purpose at hand. However, many deep learning models require large amounts of data without their purpose, or even necessity, being known in advance. These principles will probably need to be revisited or relaxed if legislators wish to allow the widespread deployment of AI. Third, meeting the conditions of access to qualitative and exhaustive health data held and produced by health systems is often a long, arduous, and discouraging journey for researchers. Pooling and managing this data to offer easy but controlled access requires additional legal imperatives on technical security, in particular against cyberattacks. Fourth, health and data protection laws do already consider AI through the way data is used and the consequences for the individual. For example, fully automated decision-making and profiling systems are increasingly subject to special rules through legislative amendments in specific situations.
For instance, there may be a specific right to be informed of the use of profiling techniques (as in Quebec's new Act modernizing provisions as regards the protection of personal information or the new California Privacy Rights Act); fully automated decisions are prohibited when they cause harm to the individual (as in the GDPR); and the right to have the decision reviewed by a human can be problematic, as the reasoning behind the decision is not always fully comprehensible.
The ethics of privacy
From an ethical point of view, issues of privacy are rooted in conflicting moral values or duties. The very concept of privacy has been defined in many ways in the ethics literature, with its origin intertwined with its legal protection, so it can hardly be summarized in a single definition. In the field of health, the search for what is right or wrong, appropriate or inappropriate, commendable or condemnable is an ancient reflection that constitutes precisely the foundation of biomedical, clinical, and research ethics. In a context where people reveal details of illness, pain, life, and death, respect for their privacy, in the form of confidentiality of their information and protection of their care spaces, both physical and virtual, from interference or intrusion (e.g., constraint, coercion, and uninvited observation), is crucial. Without this assurance of secrecy, patients would be less willing to share intimate information with their doctor, affecting their care or the usefulness of research. Safeguarding the confidentiality of health information as well as personal health choices is also crucial in preventing discrimination, deprivation of insurance or employment, emotional stress, the psychological consequences of revealing intimate information, and the erosion of trust, among others. Thus, preventing the damage caused by a violation of privacy is a major moral imperative in medical ethics. However, this principle of privacy is confronted with the duty to disclose information, whether for the direct benefit of the patient (e.g., sharing of information for better care, their reimbursement, or their own physical protection), for the benefit of others or society as a whole (e.g., disclosure of a communicable disease, protection of other victims, medical research, etc.), or for the commercial gains of AI-specialized companies, all of which can claim a valuable moral interest. This tension between individual privacy and disclosure for potentially useful uses is exacerbated by digital innovation, data analytics, and AI for several reasons. First, reliable AI development depends on access to health data, but this is restricted by the imperatives of confidentiality. Second, creating and using AI algorithms implies finding correlations across data sets that can allow the re-identification of individuals, even if the data was initially anonymized, which could cause breaches of confidentiality. Third, the more the data is anonymized, the greater the risk that its utility is reduced. In addition, the portability and diversity of information collection systems (e.g., health, sport, or wellness applications; connected devices; data shared on social networks) make it much harder to guarantee the protection, security, and confidentiality of personal data in comparison to data collected through the traditional health system (e.g., hospitals, clinics).
For example, data that might initially be only loosely related to someone's health (e.g., daily calorie intake) can become more sensitive when correlated with other variables (e.g., a person's weight), which is almost inevitable in the construction of an AI model. However, taking this kind of data into account can help reveal more factors of a disease and allows for more predictive and personalized medicine. These arguments all come as challenges to the principle of privacy. Others take a very different view, departing from the principles of bioethics and privacy protection. For instance, engineers might argue that the astonishing recent advances in computing power, data collection, and the speed and ease of data exchange are realities that make privacy an outdated concept unsuited to our time. In that sense, engineers may see privacy as a hindrance to the profitability of business models and innovation, thus limiting the benefits to health.
The second issue is closely related to some of the considerations outlined above. Autonomy is one of the four key principles identified by medical ethics. The Greek terms autos and nomos mean "self" and "law, rule," so "autonomy" refers to a person creating their own rule of conduct and having the capacity to act without constraint and make their own decisions. Many western jurisdictions incorporate the principle that free and informed consent must be obtained for any medical examination, treatment, or intervention, based on both the ethical principle of autonomy and the legal foundation of the inviolability and integrity of the person. This principle of autonomy, as well as the moral value it embodies and the regulation that frames it, is confronted with several characteristics specific to AI.
The ethics of autonomy
First, the "black box" phenomenon can impair the autonomy of the person whose data is processed for AI purposes. Indeed, some machine learning algorithms (e.g., the "random forest" classification algorithm) and, among them, deep learning algorithms (e.g., neural networks) have a high variability of inputs and a complex data-driven operation (a non-linear system, where interactions do not follow a simple additive or proportional relationship), making it difficult for experts, let alone the general population, to understand how and why an algorithm arrived at a result (which we refer to as "intelligibility"). Whether it concerns the process of model generation or the result obtained, the challenge is to provide a satisfactory explanation tailored to the user or person affected by the result, thus increasing the "interpretability" of the AI system.
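As a purely illustrative aside on what such "interpretability" tooling can and cannot deliver, the short Python sketch below trains an opaque model on synthetic data and then ranks the inputs by permutation importance, one common post-hoc explanation technique. The data, feature labels, and model choice are hypothetical, and such rankings describe statistical influence, not the kind of clinical reasoning a patient might ask for:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # four synthetic "clinical" variables
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # outcome actually driven by 0 and 1

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")     # higher = more influence on outputs
```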
In the medical context, increasing importance is placed on patients' co-participation in their care and their ability to refuse care or request additional medical advice. In some circumstances, the use of AI can erode the patient's autonomy (even if the democratization of AI can also enhance people's autonomy in other ways, including by increasing access to, and interpretation of, medical information). It may be difficult, if not impossible, for a patient to challenge a decision if the health professional cannot clearly explain how or why they proposed a certain treatment or procedure. Thus, the use of opaque, unintelligible AI systems might resurrect a certain medical paternalism, accentuating this loss of autonomy. Refusing the use of an AI system may also be ethically fraught because of the characteristics of informed consent. "Valid informed consent requires clear and accurate recognition of the situation, absence of coercion (physical or psychological), and competence to make decisions (or representation, in the case of minors and incompetent adults)". Each of these three elements, however, differs depending on the individual's level of AI literacy and other subjective characteristics (i.e., psychological, cognitive, or contextual), the interpretability of the algorithm used, and the amount and accuracy of information given to the patient. Currie and Hawks consider that "the public and patients are not always sufficiently informed to make autonomous decisions". Using nuclear medicine and molecular imaging as examples, they argue that people are probably underinformed and underqualified to determine what they want from AI, what they can expect from it, and thus whether they will allow AI to decide on their behalf. Moreover, freedom to consent is called into question when access to a health service or the use of a connected tool is conditional on sharing personal data. However, maintaining trust in the use of AI in healthcare may push towards disclosing the use of AI for purposes other than treatment. In this regard, Amann et al. believe that "appropriate ethical and explicability standards are therefore important to safeguard the autonomy-preserving function of informed consent". Secondly, some controversial business practices reduce people's moral agency, i.e., their ability to make moral choices, to exercise a form of evaluative control over those choices, and to be held accountable for them, which impacts people's autonomy. Tools ostensibly sold for healthcare or fitness (e.g., smart watches) become monitoring and information-gathering tools for the firms that collect these data. These personalization technologies allow a "better understanding of consumer behavior by linking it very precisely to a given segment based on observed and inferred characteristics" (our translation). For example, "dark pattern" practices trigger the brain system that corresponds to rapid, emotional, instinctive, and routine-driven choice, producing an emotional stimulus that tips the consumer towards a purchase. Thus, personalized manipulations join personalized prices in the marketer's toolbox. On the one hand, the user's range of choices is narrowed according to their past consumption or the customer segment that the algorithm assigns them to (e.g., filter bubbles, misinformation). On the other hand, the commercial entity manipulates consumer behavior to create an incentive to purchase or consume a particular product (e.g., dark nudges, emotional pitches, or "dark sludge"). The probability of a consumer being manipulated depends on their tech literacy and their ability to spot the manipulation. These impediments to autonomy speak to the primordial moral and ethical choices of what constitutes a dignified, free, or satisfying human life, and several authors have exhorted us to reflect deeply on them. Third, healthcare professionals' autonomy may also be impacted, whether because they use, are assisted by, or could be replaced by AI systems, which may have an impact on the delivery of care. The key players involved in the healthcare relationship need to maintain agency over their actions, and the dilution of responsibility deserves to be thought through. Conversely, "imposing AI on a community by a profession or a part of it is perhaps not ideal in terms of social or ethical standards".
Autonomy and the law
On the legal front, obtaining individuals' specific, free, and informed consent is considered one of the ultimate expressions of autonomy. Informed consent is usually required before personal information is obtained or used, either as a principle prior to any exchange of information, as in Quebec (Canada), for example, or as a legal basis on which to rely, as in the European Union or the United States. This relates both to the creation of an AI model and to the context of its use in healthcare activities. Emerging issues include whether informed consent to care includes consent to the use of AI systems, machines, or techniques within such care. Each jurisdiction makes a different choice, and each one is open to question. In Quebec, for example, the right to be informed must specify the professional who performs the therapeutic intervention, but not necessarily whether they used AI to make the diagnosis. Inspired by the ethical reflection defining the contours of valid consent, the law usually requires that the person giving consent be sufficiently informed to decide in an objective, accurate, and understandable manner. In healthcare contexts, this usually encompasses information about the diagnosis, the nature and purpose of the procedure or treatment, the risks involved, and the possible therapeutic options. In addition, when personal information is used to make a decision based exclusively on automated processing, there is now a tendency to require that data subjects be informed of the reasons, principal factors, and parameters that led to the decision. These requirements raise questions when complex machine learning algorithms are used: the main factors and parameters may be difficult to report in an understandable way. Informed consent may therefore be impacted, calling compliance with these obligations into question. Second, valid consent usually implies that consent is obtained without pressure, threat, coercion, or promise. However, patients rarely read or check the requirements for obtaining electronic consent, especially when it comes to personal information. The legal discussion ultimately concerns the possibility of respecting these requirements as well as other possible legal bases (e.g., another mode of consent), perhaps based on the notion that the subject's autonomy resides more in general trust and transparency around AI use than in a button they unthinkingly click about 20 times a day. In these questionable cases, an underlying ethical reflection supports the search for solution strategies and the practical implementation of new legal requirements. Finally, respect for autonomy also lies in the capacity to exercise the rights granted in principle to individuals. This question deserves to be asked, in view of the characteristics of the data exchanges and computer access that condition the construction of an AI system. The operation of certain AI systems may hinder people from exercising their right to be forgotten, their right to know what data is being used and what for, their right to limit the use of their data, their right to opt out, or their right to human review, at least in certain legal jurisdictions. How can one ensure the deletion of an item of data for which initial consent had been given, when one does not know whether and to what extent that item has influenced a decision taken by the system? How can the right to human review of an automated decision be guaranteed when the reasoning behind that decision is unintelligible?
What is the scope of the right to dereferencing or deletion if AI can aggregate information from the results of multiple search engines?
Algorithms' reasoning is precisely induced and driven by the data they are trained on. As a result, it can reflect biases present in that data, which will in turn impact the algorithms' results and potentially exacerbate inequalities and discrimination against marginalized communities and underrepresented groups.
The ethical view of bias
Some authors have categorized the main types of bias induced by AI. The first is replicating or exacerbating societal and historical biases already present in the learning data (demographic inequality), which can lead to self-fulfilling predictions and disproportionately affect particular groups. One study reports, for example, that "the use of medical cost as a proxy for patients' overall health needs led to inappropriate racial bias in the allocation of healthcare resources, as black patients were erroneously considered to be lower risk than white patients because their incurred costs were lower for a given health risk state". Yet, such lower costs also illustrate the inequalities black populations face in accessing medical services. As healthcare delivery varies by ethnicity, gender, housing status, and food security, among other things, feeding an algorithm with such data can make one of these social determinants of health a salient factor in the outcome. "Creating a tool from data that fundamentally lacks diversity could ultimately result in an AI solution that deepens healthcare inequities in clinical practice". The second type of bias relates to incomplete or unrepresentative data, especially data that over- or under-represents a subgroup such as a minority, a vulnerable group, a subtype of disease, etc. When the theoretical reference population is not representative of the target population for which the model provides a result, there is a risk of bias, error, and overfitting, which can exacerbate health inequalities. For example, "an algorithm designed to predict outcomes from genetic findings may be biased if there are no genetic studies in certain populations". The risks of developing certain diseases often depend on other factors such as sex or age, and failure to account for these characteristics in the baseline training data biases the prediction of disease risks in other types of populations. The third type of bias can be induced by the designers of the system themselves through the decisions they make when setting certain variables, choosing the data to be used, or defining the objective of the algorithm. The ethical issues that arise concern, for example, whether parameters that were not initially present in the data should be predicted and added to make the data as accurate as possible and eliminate bias. For instance, should the HIV status of a patient who has refused to provide this information be added to the training data? And before even reaching the bias-correction stage, it is crucial to ask whether a potentially biased system should be introduced at all when it is already known that it can reproduce societal biases.
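The proxy-label mechanism behind the first type of bias can be made tangible with a toy simulation (hypothetical numbers throughout): two groups have identical distributions of true health need, but one group incurs lower costs for the same need, so a programme that admits patients by predicted cost reproduces the access gap rather than the need:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)                  # 0 and 1: two patient groups
need = rng.gamma(2.0, 1.0, n)                  # true health need, same for both
cost = need * np.where(group == 1, 0.6, 1.0)   # group 1 incurs 40% lower cost

# "High-risk care" programme admits the top 10% by cost (the proxy label).
admitted = cost >= np.quantile(cost, 0.90)
for g in (0, 1):
    print(f"group {g}: {admitted[group == g].mean():.1%} admitted")
# Equal need, unequal admission: the bias lives in the label, not the optimizer.
```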
Moreover, the tech world seems to focus on eliminating individual-level human bias and training developers. As Joyce et al. argue, “sociological research demonstrates, though, that bias is not free-floating within individuals but is embedded in obdurate social institutions”, so that “there are severe limitations to an approach that primarily locates the problem within individuals”.
Bias and the law
When considering the issue of bias from a legal perspective, the primary areas affected are the right to equality and protection from discrimination. Biases can affect decisions taken with respect to individuals, who may be discriminated against on the basis of non-representative data or because some of their characteristics are accentuated by the operation of an AI model. Equal rights legislation is based on the idea that individuals cannot be treated differently because of any personal trait or characteristic such as race or ethnic origin, civil status (e.g., marital status, gender expression, age), sexual orientation, health or social condition, religious and political belief, etc. It generally prohibits differential treatment in similar situations such as service access, employment, or housing unless justified by particular circumstances or legal duties. The law often focuses on the effects on the victim rather than the fault or bad intent of the perpetrator. Although definitions vary by jurisdiction, an AI system used to determine people's entitlement to reimbursement based on their higher risk in terms of health costs (e.g., indexed to age, race, sexual orientation, etc.) could constitute discrimination under most legal systems in which equality is protected. Yet, the context and the nature of the AI system could make proof of discrimination extremely difficult: determining the criteria behind decisions is difficult enough for the designers of some complex machine learning systems, especially if they are autonomous and evolve over time. One can imagine how much more difficult it would be for the individual victim of discrimination, who must obtain access to the information used and to the parameters of the model, which at present frequently remain opaque.
AI algorithms can sometimes make mistakes in their predictions, forecasts, or decisions. Indeed, the very principle of such models' construction and operation is fallible, for reasons rooted in complexity theory. The computer program that underlies an AI model comprises a certain number of operations that allow it to solve a given problem. The complexity of the problem can be evaluated according to the number of operations necessary to reach an exact answer. For highly complex problems, no 21st-century machine can surpass the threshold for the number of operations required. The objective of AI programs that tackle such problems, therefore, is "to compute a reasonably correct solution to the problem, in a computation time that remains acceptable". AI researchers call this type of calculation a "heuristic." The system cannot ensure absolute certainty in its results, but it can (or at least hopes to) propose better predictions than a human in the same situation, especially the least experienced clinicians, and is therefore of major interest. Apart from this intrinsic complexity, many different types of error impact the responsibility of the actors involved throughout the lifecycle of an AI system.
Liability and the law
From a legal point of view, AI errors are generally linked to the harm suffered by the victim and its reparation.
In criminal matters, however, the legal perspective also encompasses the attitude that one wishes to punish, or the protection of society and other individuals from a possible recurrence. Regarding the role of health professionals, we can look at current medical liability regimes to consider how mechanisms for civil liability and compensation for damages can be applied to the use of AI systems in health, and whether they consider the particularities of their operation and context. For example, in many fault-based liability regimes, the victim must prove that (1) the practitioner was at fault, (2) there was a prejudice (i.e., damage or infringement of a person's rights or interests), and (3) there was a direct and immediate causal link between fault and prejudice. Medical doctors are usually under an obligation of means (concerning, for example, the products and equipment used) and much more rarely under an obligation of results. So, to determine the fault, the judge asks whether a "reasonably diligent" medical doctor conforming to the acquired data of science and placed in the same circumstances would have acted the same way. Yet, since the use of AI in medicine is so novel, a common understanding of what a "reasonably diligent" practice looks like may still need to be established. How far would one consider the level of literacy of the medical doctor in relation to the AI decision support system? A surgical robot carrying out routine sutures under the control of an AI system remains under a medical doctor's supervision: to what extent does the safety obligation imply liability for damage occurring during the operation, which the doctor might have been able to prevent with better knowledge of the system? We argue that judges will minimally require a sufficient understanding of the AI tools that medical doctors and other healthcare professionals use, based on explanations provided by the system supplier. At present, however, this interpretation is mostly at judges' own discretion, and to the best of our knowledge, there are no major case-law decisions that could guide us. Moreover, the opacity of AI systems and the many actors involved in their development and implementation make it much harder to prove a causal link between the fault and the damage—and the burden of proof invariably falls on the victim's shoulders. The patient must know that such a system was used as well as all the steps in the decision-making process if they are to prove that the medical doctor should, for example, have disregarded the recommendation, detected an initial bias, checked the inputs, etc. A first type of error arises from initial coding errors made by the programmer of the model. Unavoidable human error means there is a chance of the model providing incorrect answers in use. So, what probability of error can be accepted in these systems before we proceed to implement them in our society? The need to maintain the quality of training data throughout the model's lifecycle may also give rise to other types of liability-related errors. For example, image recognition based on artificial neural networks is one of the most advanced fields in AI. Modifying inputs, "in the form of tiny changes that are usually imperceptible to humans, can disrupt the best neural networks". Finlayson and co-authors explain that pixels may be maliciously added to medical scans in order to fool a DNN (deep neural network) into wrongly detecting cancer.
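To make this point concrete, the toy sketch below shows how a perturbation far smaller than the data's own noise can flip the output of a simple trained classifier. It is a minimal illustration on synthetic data with a plain logistic regression, not the deep-network setting of the studies cited above; all data and parameters are invented for the example.

```python
# Minimal adversarial-perturbation demo on synthetic data (illustrative only;
# real attacks target deep networks, but the linear case shows the principle).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 200 "healthy" and 200 "lesion" samples, 500 features each, separated by a
# small per-feature mean shift (all values are arbitrary for this sketch).
n, p, shift = 200, 500, 0.15
X = np.vstack([rng.normal(0.0, 1.0, (n, p)), rng.normal(shift, 1.0, (n, p))])
y = np.array([0] * n + [1] * n)

clf = LogisticRegression(max_iter=2000).fit(X, y)
x = X[0]  # a sample the model should classify as "healthy"

# For a linear model, nudging each feature by eps in the direction of its
# weight changes the logit by eps * sum(|w_i|); pick the smallest eps that
# crosses the decision boundary.
w, b = clf.coef_[0], clf.intercept_[0]
margin = w @ x + b                          # negative while classified as 0
eps = 1.01 * abs(margin) / np.abs(w).sum()  # just enough to flip the label
x_adv = x + eps * np.sign(w)

print("before:", clf.predict(x.reshape(1, -1))[0])      # expected: 0
print("after: ", clf.predict(x_adv.reshape(1, -1))[0])  # expected: 1
print(f"per-feature change: {eps:.3f} (feature noise std is 1.0)")
```

The same mechanism, scaled to the millions of pixels and parameters of a deep network, is what allows such perturbations to remain imperceptible while reversing a diagnosis.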
The quality and representativeness of data (see section on Bias) and the opacity of the system (see section on Autonomy) can also lead to errors with detrimental consequences. The misuse of a system is also problematic. Users' level of knowledge about AI might vary greatly, whether they are a health worker helping to triage patients in the emergency department, a medical doctor handling an AI-powered surgical robot, or a patient setting up a connected device to measure their physiological vitals at home. Moreover, users might decide to ignore the result that the system provides, either because they misread it or because they consider it too far removed from their own assessment. Intentional malice aside, how should the responsibilities of the actors involved be considered? Over the short term, "human in the loop" approaches are recommended so that medical doctors take responsibility for their decisions while using AI systems, including the way information is used and weighed. But to what extent should medical doctors be held responsible if they are unaware of an initial error in the input data, if they do not know the computational process leading to the result, or if it is beyond their power to modify it? Should doctors be liable for harm even though the model itself contains an error hazard due to the sheer complexity of the problem? Should the final decisions in medical matters systematically depend on human judgment alone? It remains difficult to argue that systems that provide personalized health advice, diagnostic or clinical decision support rely solely on human interpretation. However, should the victims of the various prejudices potentially caused by AI systems (patient refusing care, unfair access to AI, discrimination, prejudice linked to privacy or physical harm…) be able to claim compensation? Indeed, some consider that it is inappropriate for clinicians who use an autonomous AI to make a diagnosis they are not comfortable making themselves to accept full medical liability for harm caused by that AI. For complex systems, some of which work with reinforcement learning, it is still hard to predict what experiences the system will encounter or how it will develop. Like Pesapane and co-authors, one can thus question whether it is the device or its designer who should be considered at fault. Should the designer be considered negligent "for not having foreseen what we have called unpredictable? Or for allowing the possibility of development of the AI device that would lead it to this decision?" Some believe that if an autonomous AI is used according to the instructions, ethical principles require its creators to take responsibility for the damage caused. However, similar to what we mentioned with respect to the risk of losing a certain degree of human agency in some circumstances (see section on Autonomy), the automation bias, which refers to the tendency of clinicians (and people more broadly) to rely excessively on assistive technologies like AI, calls into question the extent to which human responsibility should be considered. To minimize the risks of using AI in healthcare, we need to evaluate AI systems before they are marketed, implemented, and used, and monitor them through ongoing oversight, especially for those systems that represent a higher risk for patients.
The ethics of evaluation and oversight
The legal view of evaluation and oversight
From a legal point of view, the issues also concern the regulation of marketing.
First, as previously underlined, the definition of AI is neither unanimous nor stable, and this complicates the legal qualification of AI tools. Indeed, tools qualified as medical devices are usually subject to strict rules concerning their manufacturing process, safety, efficacy and quality controls, evaluations, and more. In principle, they have a medical objective, and these constraints are therefore linked to the risks they pose to users' health and safety. So far, however, the legal definition of medical devices rarely expressly includes all kinds of AI systems, even though some may share many characteristics of certain qualified devices or incur comparable risks. For example, in the United States, some types of medical software or clinical decision support systems are considered and regulated as medical devices, but the FDA's traditional paradigm of medical device regulation was not designed for adaptive AI and machine learning technologies. The inadequacy of this traditional vision and the lack of clarity on the regulatory pathway can have major consequences for the patient. For this reason, the FDA has been adapting over recent years by specifically reviewing and authorizing many AI and machine learning devices and plans to update its proposed regulatory framework presented in the AI/ML-based SaMD discussion paper, which is supported by the commitment of the FDA's medical product centers and their collaborative efforts. Second, the quality control and assessment of medical devices are not fully adapted to the growing and constantly evolving nature of AI systems, the safety and effectiveness of which may have to be controlled over time. Classical legal regimes seem to be failing to incorporate all of the realities of AI systems and are in need of revision; "the law and its interpretation and implementation have to constantly adapt to the evolving state-of-the-art in technology". While some authors are still questioning possible approaches to the regulation of innovation, some countries have already made their choice. On the one hand, over-regulation could stifle innovation and impair the benefits that AI would bring. Conversely, "over-autoregulation," or leaving the market to regulate itself, would lead in the other direction, with companies deciding for themselves which norms to develop and follow, solving problems as they arise. Several countries have chosen to rely on risk-based approaches for specific regulatory-device schemes to address these challenges. For example, the European Parliament has recently voted for its new Regulation on Artificial Intelligence (better known as the "AI Act"), which defines four levels of risk, where the minimal risk requires a simple declaration of compliance and the maximum risk incurs a ban on use. The Canadian Artificial Intelligence and Data Act (AIDA) proposal also plans, if adopted, to regulate AI systems based on the intensity of their impact. Beyond the medical ethics principle of non-maleficence, the protection and promotion of human well-being, safety, and public interest implies that "AI technologies should not harm people". This idea, presented as the second of the six principles established by the expert group mandated by the World Health Organization (WHO), implies that the control, measurement, and monitoring of the performance and quality of systems and their continuous improvement must be paramount in the deployment of AI technology. All actors involved should probably be accountable for these aspects.
On this theme, there are several elements that merit consideration. First, pre-deployment evaluation of AI systems involves determining the criteria for their evaluation. Today, most systems are evaluated within the framework of existing authorizations, certifications, or licenses, such as those issued by national health authorities for medical devices. These authorities examine the product or technology according to criteria that mostly relate to effectiveness, quality, and safety. Scientific validity is paramount, but should it be the sole criterion for the use and deployment of AI systems? In particular, the likelihood and magnitude of adverse effects should be assessed. In addition, there should be an "ethical" assessment that considers both the individual and collective benefits and risks of the technology, as well as its compliance with certain previously validated ethical principles. For example, the UK's Medicines & Healthcare products Regulatory Agency (MHRA), the Food and Drug Administration (FDA), and Health Canada have developed "good practice" that aims to promote "safe, effective and high-quality medical devices using artificial intelligence and machine learning." This document currently seems to incorporate a more global consideration by also integrating ethical concerns over the deployment of AI systems. Second, AI technologies must be monitored and evaluated throughout their use, especially "reinforcement" learning models that take advantage of the data that is continuously generated and provided to carry on training and learning. This is precisely what the WHO advocates, in the name of a final ethical principle that its committee of experts has termed "responsiveness." Designers, users, and developers should be able to "continuously, systematically, and transparently assess" each AI technology to determine "whether it responds adequately, appropriately and according to communicated, legitimate expectations and requirements" in the context in which AI is used. It is necessary to consider how these standards can be assured, taking into account the procedures and techniques available to do so. The "human in the loop" approach is often seen as part of the responsible development of AI technologies. Applied to system evaluation, it could take the form of establishing several points of human supervision upstream and downstream of the design and use of the algorithm. Establishing such a guarantee, which can also be described as a "human warranty" or "human control", would make it possible to ensure that only ethically responsible and medically effective machine learning algorithms were implemented. However, the question remains open as to how this approach can be applied to technologies that require no prior approval or regulatory authorization process, in particular because they do not qualify as medical "devices" or "instruments." Such technologies, which often monitor fitness, women's hormonal cycles, sleep, or overall well-being, can still have harmful consequences. The companies developing and selling such products often make public commitments through so-called ethical declarations and charters or self-developed ethical quality labels. End users, who are rarely qualified to evaluate whether developers' actions are in line with these statements, risk falling victim to the phenomenon of "ethics washing" denounced by AI researchers, ethicists, and philosophers.
The repurposing of the ethical debate to serve large-scale investment strategies merits intense reflection followed by action by public authorities. In the health sector, AI's impacts on jobs and work concern medical practice, the delivery of care, and the functions overseen by non-medical staff.
The ethics of transforming work
Transformation of work and the law
The transformation of qualifications questions the relationship between the medical professions and technology, as well as the legislative and regulatory obligations for training. Requiring the medical doctor to be able to explain or interpret the outputs of an AI model remains a legal issue as well as a significant challenge. The upheavals within certain professions may mean that their regulation must be adapted—as the regulatory framework for radiologists in France has already been modified, redefining the acts and activities that can be performed by medical electroradiology manipulators. According to the National Federation of Radiologists, the move towards diagnostic interventional radiology mentioned above has already been integrated by the profession. The High Council for the Future of Health Insurance speaks of the major task of "concentrating and developing the role of medical doctors in expertise and synthesis activities," which will certainly require regulatory change. From a legal point of view, this issue could also call into question the right to be treated or cared for by AI rather than a healthcare professional. The trend towards the quantified self or personal analytics, where data analysis and measurement tools become more powerful every year, has given individuals greater knowledge on managing their health and sometimes implies a different understanding of themselves as patients within healthcare structures. Individuals' awareness and use of AI services is also growing, despite fears. That being so, some demands for surgery might be best met by AI, particularly if it is safer, quicker, more efficient and more likely to succeed. And if cultural differences or social acceptability lag behind such demands, one might justifiably ask whether they should catch up. Could the right to choose one's doctor be extended to include the right to access an "AI doctor"? AI systems are destined to become part of medical practice and care delivery, if they have not done so already. For example, an AI system mobilizing image recognition can detect a tumor on a mammogram. In orthopedic surgery, robots with on-board AI are capable of assisting and securing the surgical gesture and ensuring better postoperative results by integrating the anatomy specific to each patient. However, if these kinds of tasks become more widespread, might AI endanger jobs or even replace health professionals, as is often feared in technological transitions? Healthcare systems, professionals, and administrators will all be impacted by the implementation of AI systems. The first impact is the transformation of tasks. The integration of AI is transforming professional tasks, creating new forms of work, and forcing a readjustment of jobs (e.g., changing roles and tasks, modifying professional identities, evolving professional accountability). For the WHO, readjusting to workplace disruption appears to be a necessary consequence of the ethical principle of "sustainability" identified by the committee of experts on the deployment of AI.
In particular, governments and companies should consider "potential job losses due to the use of automated systems for routine healthcare functions and administrative tasks". Image recognition, for example, makes radiology one of the most advanced specialties in AI system integration. AI is now able to "automate part of conventional radiology", reducing the diagnostic tasks usually assigned to the radiologist. The authors of the French strategy report believe that this profession could then "evolve towards increased specialization in interventional radiology for diagnostic purposes (punctures, biopsies, etc.) for complex cases or therapeutic purposes guided by medical imaging". The reading of electrocardiograms in cardiology, like dentists' routine and laborious tasks, is already undergoing upheaval. The field of general medicine is also being impacted by applications available to the public, such as "medical assistant" chatbots that can analyze users' symptoms and direct them to a specialist or pharmacist. In the case of minor ailments, such technologies de facto diminish the role of the general practitioner. However, even if the medical doctor profession is safe for now, the role of an ethical approach is precisely to set guidelines, which could correspond to the level of social acceptability among the population and professionals' desire to hang on to certain roles or tasks. For example, the "human in the loop" approach, as well as the principles of non-maleficence and beneficence, imply thinking about when the medical doctor should intervene and how much latitude they have in the face of automation. The profoundly human character of care is a major element in the debate concerning the restructuring of missions and professional pathways. The opportunity to "re-humanize" healthcare is opened up by handing over certain tasks to AI systems and should be seized. For example, the Paro therapeutic robot, which responds to the sound of its name, spoken praise, and touch, is used in geriatric services in Japan and Europe and has received positive reviews from patients. For nurses and care assistants, the integration of these robots would take some of the physical and psychological strain out of their activity. However, while implementing such a tool might help to address human resources shortages, it may only be desirable for certain populations and contexts. Moreover, it will, of course, come up against other existential, social, and cultural issues, e.g., the evolution of social ties and the acceptance of this kind of technology in different cultures. The transformation of skills is another consequence of the introduction of AI technologies into medical practice. As with the influx of computers into the workplace in the 1990–2000s, healthcare workers must learn to work with, or alongside, AI systems. In addition to knowing how to use the technologies, health professionals should be aware of the technical, legal, economic, and ethical repercussions and issues "posed by the use of tools based on artificial intelligence". Here, a risk arises that is similar to those related to the computerization and digitization of medical records: the time spent on training and correct use should not be to the detriment of clinical time, which is rightly considered to be paramount. However, whereas previous technological revolutions concerned lower-skilled workers, AI may herald the opposite.
AI can pose the risk of future deskilling among healthcare professionals, especially by inducing dependence or cognitive complacency. Automating cognitive work that previously required highly skilled workers might have consequences such as altered clinical reasoning processes (e.g., reduced diagnostic accuracy). However, the use and application of AI itself require periodic refinements by experts, including medical doctors. Radiologists' professional networks allayed this fear by reducing the scope in which AI could enter while recognizing the potential benefits of automating more routine tasks and upskilling their roles overall. In situations where the use of AI is preferred, there are several ways to mitigate the risks of deskilling. For example, Jarrahi and co-authors suggest that some "informating capacities" of AI systems (i.e., capacities beyond automation "that can be used to generate a more comprehensive perspective on organizational reality") could be used to generate "a more comprehensive perspective on organization, and equip workers with new sets of intellectual skills". The impact of AI should also be considered at the more global level of managing organizations and non-medical staff. Areas affected include patient triage in the emergency room and the management and distribution of human resources across different services. This is where organizational ethics comes in, with human resources management and social dialogue figuring as major concerns. Indeed, in the health sector, the layers of the social fabric are particularly thick, diverse, and interwoven: changes in a healthcare institution affect many, if not all, of its workers, with major repercussions in the lives of users and patients too. The care of individuals who interact with medical assistants or diagnostic applications is also shifting. Thus, such "evolutions, introduced in a too radical and drastic way, damage the social fabric of a society". Moreover, these transformations also blur the boundary between work and private life and alter the link between the company and its employees, both old and new. In this respect, the deployment of AI technologies certainly implies the emergence of new professions, which must be properly understood. For example, new technical professions such as health data analysts, experts in knowledge translation, quality engineers in ehealth, and telemedicine coordinators, as well as professionals in social and human sciences such as ethicists of algorithms and robots, are to be imagined. The construction of the organization's ethical culture will depend in particular on its ability to identify areas of ethical risk, deploy its ethical values, and engage all its members in its mission. The issues raised by AI in healthcare take on different nuances depending on whether one speaks of them in terms of legal compliance, the ethical choices behind practices and decisions, or reflective processes integrated into professional practices. We propose three avenues of reflection to address such issues.
Education and training
Support and guidance
Tool adaptation
Limitations
We have tried in this paper to present an encompassing view of the ethical and legal issues surrounding the development and implementation of AI in healthcare. However, we recognize that our research has limitations. First, the six issues presented are not exhaustive since they include those most cited in the targeted literature.
Second, they are presented in a broad and rather geographically non-specific manner to be able to give an overview in a single paper. Third, our presentation of these issues is based on basic differences between ethics and law and does not integrate all the intersections and intertwined relations between the two disciplines, since it aims to clarify the distinctions. Fourth, we have chosen not to approach ethical discussions through a single normative approach, which would give importance to a specific classical tradition in ethics (e.g., Aristotle's virtue ethics or Kantian deontology) or to more contemporary currents such as the ethics of care, but to account for a certain diversity in the presentation of the issues, which can present themselves differently depending on the chosen angle. Many AI tools are intended to be used by healthcare professionals (e.g., risk prediction of future deterioration in patients, clinical decision support systems, diagnostic assistance tools based on radiological images). Therefore, these professionals must know about these tools, how they work, and their implications to ensure the quality, safety, and effectiveness of AI. In order to deploy AI while taking all this information into account, there is a need to increase the technical, legal, and ethical AI literacy of healthcare professionals. We propose two main ways to achieve this. First, basic AI training should be integrated into academic programs, since students are the future users of AI in healthcare. A study in Canada revealed that more than half of healthcare students either do not know what AI is or regard it as irrelevant to their field. In addition, few institutions cover the goals of AI in their educational programs. This is a missed opportunity to address misconceptions and fears related to AI and to raise awareness about ethical and legal issues associated with these systems. As Wiens et al. explain, successful training involves bringing together experts and stakeholders from various disciplines, including knowledge experts, policymakers, and users. Second, continuing education on AI for health professionals should be integrated into health organizations and institutions. Apart from illuminating the use of digital tools and data and the internal workings of systems, this training would engage health professionals' moral responsibility. Confronted with a situation involving moral values, ethical principles, or the application of legal rules, they would question themselves before mechanically applying their technical knowledge. They could then reflect on the ethical consequences of their actions, such as the use of a particular AI tool, depending on the context and the patient involved. Depending on the situation, professionals could refer to the ethical principles and standards defined within the organization, their deontological code, or their organization's ethics committee. These reflexes are not new among medical professionals, since medical ethics has been widely implemented in processes and practices. Moreover, the extensive regulation of the health sector already forces professionals to question the conformity of their practices to the law or to ethics. However, these mechanisms deserve to be adapted to the use of AI.
Such training is widely encouraged by institutions such as the American Medical Association, which supports research on how augmented intelligence should be addressed in medical education, or the Royal College of Physicians and Surgeons of Canada, which recommends incorporating this teaching into the curricula of residents. We believe that the responsibility for integrating training is shared between professional bodies, healthcare institutions, and academic institutions; indeed, the issues we describe cannot be resolved unless accountability is shared in this way. The second, complementary theme concerns the accompaniment of health professionals in these new practices. This support would first involve the creation of an internal or external interdisciplinary committee to approve the implementation of new AI technology—a special authority for AI at the organizational or institutional level. Such a committee, including ethicists, AI engineers and developers, healthcare professionals, patients, and health organization administrators, would make it possible to assess whether a given technology met predefined evaluation criteria, based on the ethical issues it raises, before it could be used. It should also include a lawyer to resolve certain legal issues and stay alert to the evolution of the law, which is bound to change to integrate the particularities of this technology. The committee would also ensure that the technology has been developed around the skills, expectations, interactions, or technical or organizational constraints of the user. This would force AI developers to work with potential future users (including both healthcare professionals and patients), from the design stage onwards. The criteria adopted by the committee would then be integrated throughout the creation of the technology, giving it the best chance of being approved and implemented in the safest, most efficient, most collaborative and, therefore, highest-quality manner possible. Unlike institutions that review systems for regulatory and legislative compliance and evolve in parallel, this ethical approval process would be the responsibility of the institution's administrators, who would also be responsible for building bridges between developers and users. Another solution concerns the AI tool itself, whose interface must be designed to serve the user, taking account of the issues that arise for them and allowing them to play an active role in the system (for example, in terms of control, decision-making, choice of actions, etc.). Thus, the bridge between designers and users would make it possible to create an interface that is intuitive, ergonomic, transparent, accessible, and easy to use. As we have seen, one of the objectives of training health professionals is to encourage reflective thinking, which is broader than mere concern for legal liability. Functionalities to trigger the desired "ethical reflex" should be integrated into the heart of the interface—for example, alerting the professional about the diversity or source of the data they are entering, or even about the result that the machine has returned. One could even envisage that these alerts be personalized: indeed, some systems know how to personalize alerts based on the information they have about the situation. Instead of alerting users about the contraindication of a drug prescription or how to complete an examination, the interface could provide alerts on certain ethical considerations.
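As a minimal illustration of what such an interface-level reflex could look like, the sketch below flags inputs that are statistically atypical relative to the model's training data before any result is shown. The field names, reference statistics, and threshold are all invented for the example and are not drawn from any cited system:

```python
# Sketch of an input-atypicality "ethical alert" (hypothetical fields/thresholds).
from dataclasses import dataclass

@dataclass
class FeatureStats:
    mean: float
    std: float

# Summary statistics of the model's training data (assumed, for illustration).
TRAINING_STATS = {
    "age_years": FeatureStats(mean=54.0, std=16.0),
    "systolic_bp_mmhg": FeatureStats(mean=128.0, std=18.0),
    "creatinine_umol_l": FeatureStats(mean=85.0, std=25.0),
}

Z_ALERT = 3.0  # flag inputs more than 3 SDs from the training mean

def atypicality_alerts(patient_input: dict) -> list[str]:
    """Return human-readable alerts for inputs the model rarely saw in training."""
    alerts = []
    for field, value in patient_input.items():
        stats = TRAINING_STATS.get(field)
        if stats is None:
            alerts.append(f"'{field}' was not part of the model's training data.")
            continue
        z = abs(value - stats.mean) / stats.std
        if z > Z_ALERT:
            alerts.append(
                f"'{field}' = {value} is atypical for this model "
                f"(|z| = {z:.1f}); interpret the output with caution."
            )
    return alerts

# Example: creatinine is far outside the training distribution and lactate is
# unknown to the model, so both trigger alerts; age 97 is unusual but remains
# below the threshold and passes silently.
for msg in atypicality_alerts({"age_years": 97, "creatinine_umol_l": 410.0,
                               "lactate_mmol_l": 3.1}):
    print("ALERT:", msg)
```

Which fields to monitor, how to word the alerts, and where to set the threshold are precisely the kinds of questions the interdisciplinary committee described above could settle together with future users.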
For example, medical doctors entering symptoms into a diagnostic support system could be alerted when specific data points (as input) were atypical and could prove particularly sensitive in the operation of this algorithm. Keeping the approach focused on the user experience, these functionalities should be light enough to preserve the human-machine interaction and the ergonomics of the interface (meaning that tasks can be performed within a reasonable time). Finally, feedback loops should be established, coupled with the obligation for the professional to report any problems that occur when using AI. This functionality would prevent the professional from implicitly trusting the tool and force them to remain alert and critical regarding its recommendations, predictions, forecasts, or other results. The six issues we highlighted in this article illustrate the intensity and extent to which healthcare professionals are already being affected by the development of AI, and will be even more so in the future. In order for AI to benefit them, as well as patients, healthcare organizations, and society as a whole, we must first know how to identify these issues in practice. It is vital that healthcare professionals can tell whether ethical or legal problems arise while implementing and using AI tools, so they can react to them in the most appropriate way. Such knowledge can guide their usage of AI, allowing them to better adjust to this new technology and to keep a helpful critical lens, notably through a benefit/risk perspective that is already important in the healthcare field. To achieve this, we suggest reviewing the initial and ongoing training of professionals, supporting professionals in their use of AI tools through ethical and regulatory evaluation, and cultivating new reflexes to respond to a "potential risk" in legal or ethical terms.
Intracapsular and extracapsular fracture types and inpatient mortality in failed hemiarthroplasty
8d09cea9-6a0e-4f79-a7ba-0302239e4c4c
11796156
Surgical Procedures, Operative[mh]
The incidence of hip fractures has risen alongside increased life expectancy, leading to a corresponding surge in hemiarthroplasty procedures. These surgeries aim to alleviate pain swiftly and enable early mobilisation, especially in elderly patients who risk a 40% reduction in muscle strength during 4–6 weeks of bed rest. Recent research indicates that hemiarthroplasty may be a viable treatment option for intracapsular fractures and extracapsular fractures. Studies suggest that the procedure is associated with effective outcomes and relatively low rates of intraoperative and postoperative complications. The use of hemiarthroplasty for both extracapsular and intracapsular fractures has garnered attention due to its role in preventing muscle loss. Revisions of hemiarthroplasty are relatively uncommon, with primary causes including acetabular wear, periprosthetic fractures, infection, dislocation and aseptic loosening. The literature encompasses several studies on failed hemiarthroplasty, addressing topics such as the choice of cemented versus uncemented techniques, the differences between unipolar and bipolar designs, the frequency of reoperations, and the conversion of failed hemiarthroplasty to total hip arthroplasty. However, the role of pre-operative fracture type—specifically intracapsular versus extracapsular—in the context of failed hemiarthroplasty remains inadequately explored. This study aimed (a) to analyse uncemented bipolar hemiarthroplasties revised at our institution over an 8-year period, (b) to determine differences in revision outcomes between intracapsular and extracapsular fractures and (c) to evaluate inpatient mortality rates and associated risk factors. This retrospective cohort study was conducted at a single institution. Between 2017 and 2024, 1,690 patients underwent bipolar hemiarthroplasty for hip fractures; 68 patients who required revision hemiarthroplasty were included in the analysis. All patients were initially treated with uncemented bipolar hemiarthroplasty for either intracapsular or extracapsular hip fractures. Data were extracted from institutional records, including age, body mass index (BMI), gender, side of fracture, comorbidities, primary fracture type, type of hemiarthroplasty, time from initial surgery to revision, reason for revision, American Society of Anaesthesiologists (ASA) score, length of hospital stay and intensive care unit (ICU) admission, surgical delay, number of procedures and follow-up duration. Comorbidity indices, such as the Charlson Comorbidity Index (CCI), Age-Adjusted Charlson Comorbidity Index (aCCI), Elixhauser Comorbidity Index (ECI), Almelo Hip Fracture Score (AHFS) and Parker Mobility Score (PMS), were calculated. Comorbidities were categorised as cardiovascular, diabetes mellitus, respiratory or neurological diseases. Failure types were classified based on the system proposed by Morsi et al., which includes five categories (Type I to V). A stable, well-fixed femoral component with acetabular and protrusion issues was categorised as Type I, with two distinct subtypes. Type IA failure pertains to cases involving monoblock femoral components. Notably, the present study excluded all Thompson or Austin Moore monoblock prostheses. Type IB failure involves bipolar hemiarthroplasty cases with associated acetabular issues. Type II failure relates to femoral complications without acetabular involvement. Subtypes of Type II include the following:
Type IIA: Aseptic loosening with adequate bone stock.
Type IIB: Aseptic loosening accompanied by bone stock loss.
Type IIC: Periprosthetic fractures.
Revisions necessitated by combined acetabular and femoral complications were classified as Type III. Type IV failures included instability and dislocations, with intraprosthetic dissociations, whether primary or secondary to reduction, also encompassed. Lastly, revisions for infection were designated as Type V. Patients were categorised based on fracture type as intracapsular or extracapsular. The dataset included information on patients who succumbed in the hospital postoperatively. Ethical approval for the study protocol was obtained from the University of Health Sciences Kayseri City Training and Research Hospital Clinical Research Ethics Committee (Approval No. 10.09.2024/182). The study adhered to the principles outlined in the Declaration of Helsinki.
Statistical analysis
All data were recorded and analysed using Statistical Package for the Social Sciences (SPSS) for Windows (version 22). In the data analysis, the initial step involved verifying the assumptions required to determine whether parametric or nonparametric tests were appropriate. Statistical tests were chosen based on the normality of the data distribution, which was evaluated using the Kolmogorov–Smirnov test, kurtosis and skewness values, and histogram graphs. For independent two-group comparisons, the chi-square test or Fisher's exact test was used for categorical variables, while t tests were employed for continuous variables. Logistic regression analysis was conducted to assess the effects of the independent variables on the binary dependent variable. A p value of < 0.05 was considered statistically significant.
Between 2017 and 2024, 68 of the 1,690 hemiarthroplasties performed at the institution (16 males and 52 females) underwent revision surgery, reflecting a 4% revision rate. Among these, 31 patients (9 males and 22 females) had extracapsular fractures and 37 patients (7 males and 30 females) had intracapsular fractures. All patients received uncemented hemiarthroplasty. The mean patient age was 79.35 ± 9.29 years. Comorbidities were present in 88% of the patients, with cardiovascular diseases in 29% (n = 20), neurological disorders in 36% (n = 25), diabetes mellitus in 38% (n = 26) and respiratory diseases in 20% (n = 14). Most revisions (60%, n = 41) occurred within 6 months of the initial hemiarthroplasty. Patients were classified as ASA-2 (n = 10), ASA-3 (n = 45) or ASA-4 (n = 13). The posterolateral approach was used in 85% (n = 58) of cases. Revision indications included Type I (n = 3), Type II (n = 16), Type III (n = 1), Type IV (n = 34) and Type V (n = 14). All patients classified as Type I (5%) underwent total hip replacement. For Type II (24%) patients, who were further subclassified as Type IIC (i.e., periprosthetic fractures), monoblock revision femoral stem and plate application were performed. Type III (1%) cases involved total hip replacement. In Type IV (50%) cases, all components were replaced in 22 cases; in two cases, only the femoral head was replaced, and in two other cases, only the acetabular cup and liner were replaced. The remaining two cases involved the replacement of both the femoral head and the acetabular cup. Five patients underwent total hip replacement, while one patient underwent the Girdlestone procedure. Among Type V (20%) patients, a two-stage exchange procedure was conducted in seven cases, while the Girdlestone procedure was performed in seven cases.
The mean time to surgery was 1.66 ± 0.97 days, and the mean hospital stay was 16.90 ± 16.89 days. The mean BMI was 27.84 kg/m², while the mean CCI and aCCI were 1.49 ± 1.09 and 5.21 ± 1.33, respectively. A total of 64% of the patients (n = 44) required admission to the ICU, with an average ICU stay of 8.47 days. The mean follow-up duration was 11.19 months. The presence of diabetes mellitus and a high AHFS were the only factors that differed between intracapsular and extracapsular fractures. Diabetes mellitus was more frequently diagnosed in cases of hemiarthroplasty revision surgery due to extracapsular fractures compared with intracapsular fractures (p = 0.01). Among patients undergoing revision surgery, the mean AHFS was 9.19 ± 2.93 for extracapsular fractures and 7.05 ± 3.95 for intracapsular fractures (p = 0.01). The rationale for revision surgery was similar in both groups, and the results are summarised in Table . The inpatient mortality rate in this study was 19% (n = 13). Of these, six patients belonged to the extracapsular group, while seven were in the intracapsular group. Male patients had a significantly higher inpatient mortality rate than females (43.75% vs. 11.54%, respectively; p = 0.00). The mean age of deceased patients was similar to that of those discharged alive (81.15 ± 5.87 years vs. 78.93 ± 9.92 years; p > 0.05). However, the mean hospital stay was longer for deceased patients (22.46 days vs. 15.58 days; p > 0.05). No significant differences were observed in the number of comorbidities or the presence of cardiovascular, neurological, respiratory or diabetes-related conditions between patients who died in the hospital and those who were discharged (p > 0.05). Similarly, no significant differences were noted in the CCI, aCCI or ECI scores between the two groups (p > 0.05). However, significant differences were found in the ASA score, PMS and AHFS. The AHFS was higher in deceased patients (10.46 ± 3.23) than in those discharged (7.45 ± 3.54; p = 0.01). Conversely, the PMS was lower among deceased patients (3.46 ± 2.30) than those discharged (5.36 ± 2.50; p = 0.01). Regarding ASA scores, seven deceased patients were classified as ASA-3, six as ASA-4 and none as ASA-2. All patients who died had been admitted to ICUs, with a significantly longer mean ICU stay (15.15 days) than discharged patients (5.75 days; p = 0.02). The number of revision surgeries, surgical approaches and time to surgery did not significantly differ between the two groups (p > 0.05). These results are detailed in Table . Logistic regression analysis indicated that the model constructed with the independent variables had 63.6% accuracy in predicting mortality status. The Hosmer–Lemeshow test confirmed the model's goodness of fit (p > 0.05), and the Omnibus test produced significant results (χ²(3) = 10.7; p < 0.05). The Nagelkerke R² indicated that the independent variables explained 33% of the variance in mortality status, while the −2 log-likelihood value was 38.15. The regression model incorporated variables that demonstrated a relationship with the outcome variable, while collinear variables were excluded to refine the prediction of mortality status. Gender (referent: female) emerged as a significant predictor of mortality (B = 2.24, SE = 0.94, Wald = 5.72, p < 0.05). Male patients were found to have a 9.37-fold higher risk of mortality than female patients (Exp(B) = 9.37; 95% CI, 1.50–58.62).
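As an internal consistency check, the reported odds ratio and confidence interval for gender can be recomputed directly from the coefficient and standard error (small discrepancies reflect rounding of B and SE):

\[
\mathrm{OR} = e^{B} = e^{2.24} \approx 9.39, \qquad 95\%\ \mathrm{CI} = e^{\,B \pm 1.96\,\mathrm{SE}} = e^{\,2.24 \pm 1.96 \times 0.94} \approx (1.49,\ 59.3),
\]

in line with the reported Exp(B) = 9.37 and 95% CI of 1.50–58.62.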
However, the AHFS did not significantly influence mortality (B = 0.03, SE = 0.14, Wald = 0.05, p > 0.05), as increases in the score did not alter the risk of mortality (Exp(B) = 1.03; 95% CI, 0.78–1.36). Similarly, the duration of intensive care stay did not significantly predict mortality (B = 0.02, SE = 0.03, Wald = 0.27, p > 0.05), with no notable change in mortality probability linked to longer intensive care stays (Exp(B) = 1.02; 95% CI, 0.95–1.08). These results are summarised in Table .
Revision surgery following hemiarthroplasty is less frequently required than after internal fixation of hip fractures, with rates reported to range from 1.3 to 9% in the literature. Recent studies suggest that revision rates for uncemented hip replacements may increase over time, from 1.1% in the first month to 5.1% after 9 years. The indications for hemiarthroplasty revision are typically classified into five main categories: acetabular erosion, dislocation or instability, periprosthetic fractures, infections and aseptic loosening. The present study identified a revision rate of 4% for uncemented bipolar hemiarthroplasty, with the most common indication being instability or dislocation across both intracapsular and extracapsular fracture groups. Risk factors for hemiarthroplasty revision include male gender, age below 80 years, ASA classification 1–2, posterolateral approach and cementless fixation. In contrast, this study observed a higher prevalence of female patients and ASA classifications 3–4 among those requiring revision. Although evidence indicates that extracapsular and intracapsular fracture patterns in hip fractures vary by gender and age, our study did not identify any differences in the failure rates of hemiarthroplasties performed for these fracture patterns. The mean age at revision was 79 years. A posterolateral approach and cementless fixation were employed for all patients, aligning with the preference for bipolar and uncemented stems in Turkey. Acetabular erosion, a significant cause of revision in unipolar hemiarthroplasty, is less common in bipolar procedures. Its incidence decreases with age, from 6.13% in patients aged 70–75 years to 1.96% in those aged 80 years or older. Revisions were performed in 1% of patients with both femoral and acetabular issues and 3% of patients with isolated acetabular problems. All patients requiring revision solely for acetabular problems were in the intracapsular group, whereas those with both acetabular and femoral issues were in the extracapsular group. The incidence of dislocation in bipolar hemiarthroplasties has been reported to range from 1–16%. Dementia has been identified as a significant risk factor for dislocation in patients undergoing this procedure. Moreover, comorbidities other than dementia, as well as the CCI, Almelo hip score and Parker score, have not been shown to constitute significant risk factors for dislocation. While most studies indicate that dislocation rates are higher in patients with larger cup sizes, some findings suggest otherwise. The posterior surgical approach has also been identified as a risk factor for dislocation. Despite these factors, recent large-scale studies have reported dislocation rates of 1.1% and 1.6%. In this study, the incidence of dislocation requiring revision in bipolar hemiarthroplasty was 2%, with approximately half of the cases attributed to implant-related dissociation, which was not associated with surgical or patient factors.
The overall dissociation rate was 0.8% across all hemiarthroplasties but accounted for 22% of revision cases. Emerging technologies, such as the Robotic Arm-Assisted System, offer promising advancements in hip fracture management. These innovations may enhance surgical precision and reduce non-implant-related errors in procedures like hemiarthroplasty and total hip replacement. The ECI and CCI, commonly used in orthopaedics, demonstrate similar sensitivity in predicting inpatient mortality. The ASA score has proven useful in predicting readmissions, infections and cardiovascular complications associated with hip fractures. Recently, the AHFS has gained prominence as it integrates the PMS and ASA scores. Analysis of the AHFS revealed that patients who succumbed to hip fractures had a mean score of 9.5, compared with 11.5 among those who underwent arthroplasty. This study found that the ASA score and AHFS were significantly higher, and the PMS significantly lower, in deceased patients, whereas the CCI and aCCI were not significantly associated with inpatient mortality. Sociodemographic factors may play a significant role in influencing mortality. Patients undergoing surgery for hip fractures caused by falls often develop a fear of falling, which can lead to increased fragility and reduced mobility. As a result, these individuals tend to have lower PMS and AHFS scores. The mean inpatient mortality rate following hip fractures in patients with dementia was 2.01%, compared with 1.87% in those undergoing hemiarthroplasty. For periprosthetic fractures after cemented hemiarthroplasty, an inpatient mortality rate of 6.3% was reported, increasing to 12.5% within the first month. Length of hospitalisation was the only factor significantly influencing first-month mortality, with delayed surgery identified as a modifiable risk factor for reducing mortality. Patients requiring revision surgery due to dislocation or infection often undergo multiple procedures following a failed hemiarthroplasty. Mortality rates are particularly elevated among individuals who have undergone the Girdlestone procedure. At our institution, the inpatient mortality rate was 19%. Prolonged ICU stays and male gender were identified as significant contributors to this outcome, with male patients showing a 9.37-fold higher risk of inpatient mortality. Among the 14 patients who underwent revision surgery due to infection, 50% required the Girdlestone procedure, and two (28%) of these patients died in the hospital.
Limitations
This study has several limitations. First, it was conducted at a single centre, which may restrict the generalisability of the findings. Second, the surgical procedures were performed by multiple surgeons employing diverse approaches, fixation methods and prosthesis systems, which could have influenced the outcomes. Further, multi-centre studies and randomised controlled trials with larger sample sizes are essential to validate these results.
The study findings reveal that pre-operative fracture type is not a significant factor for uncemented bipolar prosthesis revisions. Furthermore, diabetes mellitus was more frequent among patients undergoing revision after hemiarthroplasty for extracapsular fractures. Moreover, several factors that affect inpatient mortality were identified, including the patient's gender, the need for ICU admission, the ASA score, the PMS, the AHFS and the length of time spent in the ICU.
Fundamental Neurochemistry Review: Lipids across microglial states
f9fe5883-90d7-4af1-b250-5ec9a3afceda
11655966
Biochemistry[mh]
MICROGLIA AND THEIR METABOLISM
Microglia are the resident innate immune cells in the central nervous system (CNS). They are incredibly important for CNS health throughout the lifespan and are implicated in almost all neurological diseases. These cells support and regulate all other cell types in the CNS, including astrocytes, oligodendrocytes, and neurons, as well as serve to modulate the physical components of the CNS, such as blood vessels and the blood–brain barrier (Paolicelli et al., ; Sierra et al., ; Šimončičová et al., ; Tay et al., ; Vecchiarelli et al., ; Vecchiarelli & Tremblay, , ). In order to accomplish these regulatory functions, which can notably involve the release of inflammatory mediators and trophic factors and/or phagocytosis, microglia must tightly regulate their metabolic profile (Bernier et al., ). An emergent field of study, immunometabolism, aims to determine how metabolism shapes the function of immune cells. It is currently a focus of the microglial field to illuminate these cells' immunometabolism, particularly since their metabolic reprogramming has been linked to neurodegenerative diseases (reviewed in Miao et al. ), but it is presumably also important in many health and disease contexts. Microglia, similar to other macrophages and immune cells, can utilize numerous fuel sources, and they possess the cellular machinery to potentially metabolize glucose, amino acids, and fatty acids, depending on the context (Bernier et al., ; Hammond et al., ; Yang et al., ) (Figure ). Specifically, microglia express multiple glucose transporter isoforms, including GLUT3 and GLUT5 (which is specific to fructose) (Bernier et al., ; Payne et al., ), as well as transcripts for the proteins involved in glycolysis and oxidative phosphorylation (Bennett et al., ; Bernier et al., ; Ghosh et al., ; Monsorno et al., ), indicating that microglia utilize glucose metabolism for energy production. Glycolysis leads to the generation of pyruvate (and ATP), which is then converted to acetyl coenzyme A (acetyl-CoA) to begin the citric acid/Krebs cycle, through which FADH2 and NADH are produced and utilized to produce ATP by oxidative phosphorylation. Microglia also possess the transcripts for the glutamine transporter SNAT1 (Jin et al., ); glutamine can be converted to glutamate to participate in glutaminolysis, leading to ATP and NADH generation through partial participation in the citric acid/Krebs cycle. Microglia further possess monocarboxylate transporters, which can transport lactate, pyruvate, and ketone bodies (Halestrap, ; Monsorno et al., ; Nijland et al., ). Lactate can be transformed to pyruvate primarily via lactate dehydrogenase B (Monsorno et al., )—however, this can lead to the generation of reactive oxygen species (ROS) (Zhang et al., ); conversely, pyruvate can be metabolized to lactate to end glycolysis and shuttle lactate away from oxidative phosphorylation, primarily by lactate dehydrogenase A (Monsorno et al., ). In rodents, lactate dehydrogenase B is expressed primarily in the CNS by microglia during early postnatal development, when there is high microglial heterogeneity and microglial-dependent synaptic remodeling (Bennett et al., ; Mattei et al., ; Monsorno et al., ). Intriguingly, in oligodendrocytes, lactate shuttled from other cells may contribute to fatty acid synthesis and myelin production (Rinholm et al., ; Sánchez-Abarca et al., ), although it remains to be seen if this also occurs in microglia.
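For reference, the lactate–pyruvate interconversion discussed above is a single NAD-coupled redox reaction, run in opposite net directions by the two isoforms, with LDHB favoring pyruvate formation and LDHA favoring lactate formation:

\[
\text{lactate} + \mathrm{NAD}^{+} \;\underset{\text{LDHA}}{\overset{\text{LDHB}}{\rightleftharpoons}}\; \text{pyruvate} + \mathrm{NADH} + \mathrm{H}^{+}
\]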
Furthermore, microglia express the cluster of differentiation (CD)36, which may contribute to fatty acid uptake (Coraci et al., ). This was shown in N9 microglia-like cells and human fetal microglial cells, as well as in post-mortem brain samples of patients with Alzheimer's disease (AD) (Coraci et al., ). Also, Cd36 transcript and CD36 protein levels were increased in myelin-phagocytic microglia in mice in the context of demyelinating lesions, indicating that the regulation of the expression of this fatty acid translocase is important for myelin debris removal (Grajchen et al., ). However, whether CD36 leads to protective or detrimental microglial states is still a matter of debate and might depend on the method used to model demyelination and the time points evaluated (Grajchen et al., ; Hou et al., ). Since microglia possess CD36, translocated fatty acids could then be routed to mitochondrial or peroxisomal fatty acid oxidation (Morito et al., ). Microglia, at least under steady-state conditions, may not possess the enzyme that transports fatty acids into mitochondria (CPT1a), suggesting that fatty acid oxidation may not be a significant source of energy production under normative conditions (Bernier et al., ; Jernberg et al., ). However, Cpt1a expression was found in mouse primary microglial cells and suppressed after different stimuli in vitro (i.e., interleukin (IL)1β/interferon (IFN)γ or IL4), which indicates that microglia have the capacity to express Cpt1a under different conditions and thus oxidize fatty acids in mitochondria (Geric et al., ). Also, in vitro, the CPT1 inhibitor etomoxir was shown to modulate gene expression of pro-inflammatory cytokines in mouse primary microglial cells, providing metabolic evidence that CPT1a could modulate microglial states in vitro (Qin et al., ). Some work has demonstrated fatty acid oxidation also in microglia-like BV-2 cells (Bruce et al., ), but there is other evidence that these cells do not resemble microglia in other aspects of lipid metabolism (e.g., α/β-hydrolase domain containing 6 activity) (Cao et al., ), complicating the interpretation of these results. Regarding peroxisomal fatty acid oxidation, mouse primary microglial cells provided evidence that microglia express acyl-CoA oxidase (ACOX), the rate-limiting enzyme controlling the flux through the peroxisomal fatty acid oxidation pathway, and multifunctional protein-2 (MFP2), the enzyme responsible for the hydration and second oxidation steps in the process (Geric et al., ; Morito et al., ). Mice lacking Mfp2, specifically in microglia/macrophages, showed increased numbers of microglia and alterations in their morphology (shorter and thicker processes), along with an increase in pro-inflammatory cytokines (Beckers et al., ). Microglia from Mfp2 knock-out animals displayed a pro-inflammatory profile compared to wild-type animals under homeostatic conditions. Upon a pro-inflammatory stimulus, microglia from Mfp2 knock-out animals were able to respond like microglia from wild-type animals, suggesting that peroxisomal fatty acid oxidation is required for microglial homeostasis but does not alter their inflammatory response (Beckers et al., ). Thus, the role of microglial fatty acid oxidation remains undetermined. Taken together, this work contributes to the understanding that microglia can utilize different energy sources.
Currently, the field is investigating how the utilization of these energy sources is altered throughout development and into aging, as well as across disease states (Benarroch, ) (Figure ). Furthermore, there is some evidence that the utilization of different energy substrates may lead to different microglial functions, and this is a current major research focus in the field (Bernier et al., ). As the brain is a lipid-rich organ, it follows that microglia are intimately influenced by lipids. These cells have the capacity to both synthesize and respond to lipids. It is becoming increasingly understood that microglial lipid composition and metabolic capacity are important for neural homeostasis and the regulation of other CNS cells (Folick et al., ). For example, microglia may perform fatty acid oxidation, contributing to brain energy levels, although, as mentioned above, it is unclear if this occurs under steady-state conditions (Bernier et al., ; Folick et al., ; Yang et al., ). This may change under disease conditions; for example, in situ expression of enzymes associated with fatty acid oxidation was found in microglia in a mouse model of stroke (Loppi et al., ). Additionally, fatty acids may contribute to alterations in oxidative metabolism (Chausse et al., ; Loving & Bruce, ). Furthermore, alterations and dysregulation of microglial lipids, both in their configuration and accumulation, are implicated across neurological diseases (Bruce et al., ; Nugent et al., ). In this review, we summarize the current understanding of lipids in microglia, including how this contributes to driving different microglial states and functions with consequences on brain health. DIFFERENT MICROGLIAL STATES POINT TO LIPID PROCESSING VARIATIONS IN DEVELOPMENT AND DISEASE Given microglia's diverse functions at steady state and throughout neurological conditions, it is important to understand their heterogeneity. In the past decade, single-cell RNA-sequencing (scRNA-seq) and other emerging technologies have led to the discovery of numerous microglial transcriptional signatures and states (Paolicelli et al., ; Vecchiarelli & Tremblay, ). In many cases, the identified microglial transcripts are related to lipid processing. For example, some transcripts altered in developmental signatures, including proliferative-region-associated microglia (PAM), are ApoE , Lpl , and Fabp5 . PAM and microglia with similar transcriptional signatures are found in white matter regions, such as the corpus callosum and white matter areas of the cerebellum (Li et al., ; Vecchiarelli & Tremblay, ; Wlodarczyk et al., ). Some of these transcripts are also altered in microglia in different disease conditions. For example, disease-associated microglia (DAM) and other microglial transcriptional signatures associated with neurodegeneration and AD (in mice and humans) also show an increase in some of these lipid-associated transcripts, including APOE and LPL , as do white matter-associated microglia, which are found in the white matter of aged mice (Keren-Shaul et al., ; Krasemann et al., ; Safaiyan et al., ; Sala Frigerio et al., ; Srinivasan et al., ; Vecchiarelli & Tremblay, ). However, these microglial transcriptional signatures were identified via scRNA-seq, which does not provide functional readouts (although it can inform future functional assays), nor does it provide information about lipids themselves.
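As an illustration of how such lipid-associated signatures are commonly scored in scRNA-seq data, the sketch below uses scanpy's score_genes function; the gene list, input file, and 'state' annotation are hypothetical placeholders assembled from the transcripts discussed above, not a validated panel.

```python
import scanpy as sc

# Hypothetical lipid-processing signature drawn from the PAM/DAM transcripts
# discussed in the text (placeholder list, not a validated panel)
lipid_genes = ["Apoe", "Lpl", "Fabp5", "Ch25h", "Trem2"]

adata = sc.read_h5ad("microglia_scRNAseq.h5ad")  # placeholder path

# Standard normalization before scoring
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Score = mean expression of the signature minus a random control gene set
sc.tl.score_genes(adata, gene_list=lipid_genes, score_name="lipid_score")

# Compare the score across annotated states; assumes a 'state' column with
# labels such as 'homeostatic', 'PAM', or 'DAM'
sc.pl.violin(adata, keys="lipid_score", groupby="state")
```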
Additionally, using in situ approaches such as spatial transcriptomics and matrix-assisted laser desorption/ionization (MALDI) will be important to uncover region-specific changes in transcripts and lipid species, respectively. Dark microglia Dark microglia are a particular microglial state described using high-spatial-resolution electron microscopy (reviewed in St-Pierre et al. ) (Figure ). They were named for their dark appearance, which results from their electron-dense cytoplasm and nucleoplasm, a feature that could be related to oxidative stress (Bisht et al., ). Indeed, dark microglia display alterations in organelles related to cellular stress, including Golgi apparatus and endoplasmic reticulum dilation as well as altered mitochondria, characterized by deterioration of the outer membrane, degradation of the cristae, or a "holey" shape (i.e., mitochondria forming a donut shape) (St-Pierre et al., ; St-Pierre et al., ). Interestingly, this microglial state has been found in pathological conditions, such as mice exposed to maternal immune activation and rodent models of AD pathology, Huntington's disease pathology, and amyotrophic lateral sclerosis, but it is nearly absent in the brain of healthy adult mice (El Hajj et al., ; Garofalo et al., ; Guma et al., ; Hui et al., ; Savage et al., ). Dark microglia were also reported in aging human post-mortem brain samples, indicating that they occur in humans as well (St-Pierre et al., ). Some of the features displayed by dark microglia suggest that they have a unique metabolic profile: along with the mitochondrial alterations, they show an accumulation of glycogen granules (St-Pierre et al., ). These glycogen granules were found in microglial cells located near amyloid beta (Aβ) plaques and dystrophic neurites in the hippocampus of aged APP-PS1 mice, a model of AD pathology. Although both typical and dark microglia near the plaques and dystrophic neurites showed ultrastructural evidence of glycogen granules in the cytoplasm, the granules were much more frequent in dark than in typical microglia (St-Pierre et al., ). Similarly, increased glycogen accumulation characterizes macrophages/microglia in aged mice and humans (Minhas et al., ). Human monocyte-derived macrophages from individuals over 65 years old showed enhanced signaling of prostaglandin E2, a lipid messenger, through the EP2 receptor (Minhas et al., ). This led to the activation of glycogen synthase and subsequent glycogen accumulation, reducing the glucose flux (Minhas et al., ). This is particularly important since macrophages from aged individuals lose their ability to use substrates other than glucose to obtain energy (Minhas et al., ). In the mouse hippocampus, EP2 protein expression was restricted to IBA1+ cells (microglia/macrophages), and pharmacological inhibition of EP2 reduced CD68 levels and glycogen synthesis, enhanced glycolysis and the Krebs cycle, and restored the ultrastructure of mitochondria in microglia (Minhas et al., ). These findings support the idea that metabolic shifts in microglia could be under the control of lipid messengers. The metabolic pathways underlying dark microglia's fuel usage, and how these impact their function, remain under investigation. Dark microglia's processes also show particular features that differentiate them from those of other microglial states: they are thin and form numerous acute angles (Bisht et al., ).
They directly contact blood vessels, ensheathing the basement membrane, thus indicating their contribution to the glia limitans and the neurovascular unit (Bisht et al., ). They also extensively encircle axon terminals and dendritic spines, suggesting that they could be involved in synaptic pruning and plasticity (Bisht et al., ). What signals lead to this microglial state, and what its functional outcome is, remain matters of ongoing research. Foamy and lipid droplet-accumulating microglia The accumulation of lipid granules/droplets in immune and glial cells (including putative microglia) has been described for over 100 years (Foley, ). Macrophages with irregular lipid accumulations have been of interest for the past 50 years, and in myeloid cells it is well established that lipid droplets, which are cellular organelles that contain glycerolipids and cholesterols, are formed in response to inflammation and cellular stress (Bozza & Viola, ; den Brok et al., ), although the presence of lipid droplets does not necessarily indicate the presence of inflammation or cellular stress. For example, lipopolysaccharide (LPS), a pro-inflammatory stimulus, induced lipid droplet formation in N9 and BV-2 microglia-like cell lines and in vivo, in the hippocampus of young mice (Khatchadourian et al., ; Marschallinger et al., ). Lipid droplets are also important players in regulating cellular stress and help mouse primary microglial cells reduce their pro-inflammatory profile after an LPS challenge (Li et al., ; Zadoorian et al., ). Lipid droplets can also be sites of production and storage of inflammatory mediators, including eicosanoids in leukocytes and pro-inflammatory cytokines, such as IFN-stimulated viperin in dendritic cells (den Brok et al., ; Saitoh et al., ). There have been numerous studies showing foamy microglia and lipid droplets in microglia ex vivo, in animal models and humans, including in the contexts of optic nerve injury, demyelination, excitotoxicity, hypoxia, encephalitis, and neurodegeneration (Arbaizar-Rovirosa et al., ; Chali et al., ; Chiang et al., ; Dentinger et al., ; Derk et al., ; Dowding et al., ; Fabriek et al., ; Gedam et al., ; Kaur et al., ; Kodachi et al., ; Li et al., ; Ling, ; Liu & Shen, ; Luchetti et al., ; Machlovi et al., ; Sobaniec-Lotowska, ; Sturrock, ; Tappe et al., ; Tremblay et al., ; van den Bosch et al., ; Victor et al., ; Wu et al., ; Xu et al., ). However, it is only in recent years that the field has investigated the potential functions of microglia with lipid accumulation. Recently, a microglial transcriptional signature was identified in lipid droplet-high CD11b + CD45 low cells from the hippocampus of adult mice, termed lipid droplet-accumulating microglia (LDAM) (Marschallinger et al., ) (Figure ). Microglia with lipid droplets were found, via electron microscopy and fluorescence microscopy (with the lipid marker BODIPY), to be abundant in aged (20-month-old) male mice (and humans), but not in young (3-month-old) male mice (Marschallinger et al., ). The transcriptional signature associated with these cells included genes related to lysosomal activity, nitric oxide and ROS generation, vesicular transport, and lipid processing, together with downregulation of microglial homeostatic genes (Marschallinger et al., ; Vecchiarelli & Tremblay, ). In situ and in vivo, these cells have an impaired phagocytic capacity (Marschallinger et al., ).
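For readers interested in how lipid droplet abundance is typically quantified from BODIPY-stained micrographs, a minimal image-analysis sketch follows; the file name is a placeholder, and the smoothing, threshold, and size cut-off would need tuning to the actual imaging conditions.

```python
import numpy as np
from skimage import filters, io, measure, morphology

# Hypothetical single-channel BODIPY image of microglia
img = io.imread("bodipy_microglia.tif").astype(float)

# Light smoothing, then a global Otsu threshold to segment droplets
smoothed = filters.gaussian(img, sigma=1)
mask = smoothed > filters.threshold_otsu(smoothed)
mask = morphology.remove_small_objects(mask, min_size=5)  # drop noise specks

# Label connected components and summarize droplet number and size
labels = measure.label(mask)
areas = np.array([r.area for r in measure.regionprops(labels)])
if areas.size:
    print(f"{labels.max()} droplets; median area {np.median(areas):.1f} px")
else:
    print("No droplets detected")
```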
Furthermore, these cells show increased ROS and inflammatory cytokine production, particularly in response to LPS stimulation (Marschallinger et al., ). Others have found similar LDAM in humanized mouse models of AD genetic risk-associated pathology (Claes et al., ). An enrichment of LDAM was reported in human post-mortem brains from patients with AD, especially in individuals with the APOE4/4 genotype (Haney et al., ). These microglia expressed ACSL1 , an enzyme involved in the first steps of lipid droplet biogenesis (Haney et al., ). Also, incubation of human-induced pluripotent stem cell-derived microglia with fibrillar Aβ induced the synthesis of triglycerides and the accumulation of lipid droplets, linking a hallmark of AD pathology to lipid droplet biology (Haney et al., ). Single nucleus (sn)RNA-seq also revealed transcriptional signatures in microglia isolated from post-mortem samples of human patients with multiple sclerosis (Absinta et al., ; Vecchiarelli & Tremblay, ). One such cluster of these 'microglia inflamed in multiple sclerosis' was termed foamy and was characterized by transcripts associated with lipid storage, lipoproteins, lysosomal activity, and inflammatory response, as well as a foamy morphology in situ (Absinta et al., ) (Figure ). In a recent study combining in situ scRNA-seq (MERFISH) and electron microscopy in adjacent brain sections, a foamy microglial state was observed following a demyelination injury in adult male mice; these cells upregulated a number of transcripts, including that for transmembrane glycoprotein NMB (GPNMB), a lipid-regulating molecule, and those associated with the lysosome as well as cholesterol and lipid metabolism (Androvic et al., ). Intriguingly, progranulin ( Grn ) knock-out increased the numbers of LDAM, including in white matter regions (Marschallinger et al., ). Progranulin also regulates GPNMB (Houser et al., ), which is found in PAM (localized to white matter regions during development) (Li et al., ). It was also reported that in a zebrafish injury model, the loss of Grn led to prolonged microglial reactivity (including an increase in lipid droplet accumulation), impaired neurogenesis, and reduced microglial flexibility, preventing a return to homeostatic microglia following injury (Zambusi et al., ). Recent work has highlighted that a loss of progranulin led to the accumulation of lipid droplets in microglia in the corpus callosum of male, but not female, mice in a demyelination injury model, highlighting potential sex differences in this phenomenon (Zhang et al., ). In aged female mice (18–22-month-old), microglia similarly accumulated lipids and had increased levels of perilipin, a molecule associated with the production of lipid droplets; furthermore, perilipin was associated with the production of the pro-inflammatory cytokine tumor necrosis factor (TNF) (Shimabukuro et al., ). In a mouse model of toxin-induced demyelination, perilipin-2 protected lipid droplets from lipolysis-mediated degradation; accordingly, loss of perilipin-2 led to an increased turnover of lipid droplets in foamy microglia as well as improved remyelination in this injury model (Loix et al., ). Alterations in lipid droplets in microglia point to potential dysregulation of microglial lipid metabolism.
In primary microglial cells from male rat pups, lipid droplet accumulation was abrogated by 48 h of glucose deprivation, which perhaps indicates that microglia switch to utilizing lipid energy stores, reducing lipid accumulation under conditions of reduced glucose (Churchward et al., ). Antagonism of the leukotriene receptor in a mouse model of AD pathology reduced the expression of lipid droplet accumulation-related microglial transcripts, indicating a potential role for arachidonic acid derivatives in regulating lipid droplet formation (Michael et al., ). In male and female adult mice, the dark (active) phase of the light cycle was associated with an increased microglial expression of uncoupling protein 2, a mitochondrial transporter protein that regulates ROS production, following synaptic remodeling (increased synapse phagocytosis) in the light phase; knock-out of this protein in CX3CR1+ microglial cells impaired this light phase-associated reduction of synapses and further led to increases in ROS and lipid droplets (Yasumoto et al., ). Interestingly, microglial knock-out of uncoupling protein 2 also led to altered hippocampal circuit electrophysiology and increased anxiety-like behavior (in males); this may indicate that there is a normative accumulation of lipid droplets for the clearance of phagocytosed elements, but that in conditions of disease or aging this process is impaired (Yasumoto et al., ). Triggering receptor expressed on myeloid cells 2 (TREM2), a receptor for lipids expressed by microglia, is important for their phagocytic capacity, immune responses, and lipid metabolism (Gouna et al., ). Knock-out of Trem2 prevented the formation of foamy microglia with lipid droplets in an acute toxin-induced demyelination model, which was associated with impaired remyelination (Gouna et al., ). Furthermore, in induced pluripotent stem cell (iPSC)-derived microglia from patients with Nasu-Hakola disease, which is caused by mutations in TREM2 , there were reduced lipid droplet numbers (Filipello et al., ). This is additional evidence that lipid droplets may be useful to buffer excess lipids, particularly in the acute injury phase, creating an environment for regeneration; however, in some contexts or with chronic accumulation, this buffering capacity may become overwhelmed, perhaps leading to impaired beneficial functions of microglia. The use of complementary in vivo technologies will allow for further examination of the function of these cells. For example, in rats exposed to a hypoxia/stroke model (medial cerebral artery occlusion), fatty acyl proton signals detected by magnetic resonance spectroscopy were positively correlated with lipid droplets in OX42+ cells (microglia/macrophages) (Gasparovic et al., ). LIPID SIGNALING IN MICROGLIA Different microglial transcriptional signatures and states are associated with altered lipid processing, as described previously (Vecchiarelli & Tremblay, ) and above. In the following section, we discuss some commonly altered lipid pathways, using those that are altered in these states as a guide. Lipoprotein lipase Lipoprotein lipase (LPL) is an extracellular enzyme that hydrolyses triglycerides into non-esterified fatty acids and monoacylglycerol (Mead et al., ) (Figure ). It is highly expressed in adipose tissue, cardiac and skeletal muscles, and the lactating mammary gland (Mead et al., ).
Its catalytic function enables subsequent lipid storage in the white adipose tissue or storage/oxidation in muscle cells (Mead et al., ). In addition to its metabolic role in the periphery, non-catalytic functions have also been described and are still under study, particularly in the CNS (Wang & Eckel, ). The analysis of Lpl expression throughout the CNS gave the first insight that Lpl is expressed in defined regions and is not only associated with the vasculature (Bessesen et al., ). In the adult rat brain, Lpl mRNA expression evaluated by in situ hybridization, and coincident LPL protein expression assessed by immunohistochemistry, were reported in neuronal cell bodies of the hippocampus and the Purkinje cell layer of the cerebellum (Bessesen et al., ). A subsequent study that evaluated Lpl mRNA expression by in situ hybridization confirmed that Lpl was indeed expressed in several brain areas, with the highest expression found in the pyramidal cell layers of the hippocampus in adult mice (Paradis et al., ). Interestingly, Lpl mRNA levels were increased in the penumbral area of an ischemic lesion in mice and colocalized with IBA1+ cells (Paradis et al., ). This suggests that Lpl expression can be induced in pathological contexts, particularly in phagocytic cells. Indeed, a particular microglial state, DAM, which upregulates LPL/ Lpl expression, was found near Aβ plaques both in the 5xFAD mouse model of AD pathology and in post-mortem brain tissue from patients with AD (Keren-Shaul et al., ). Interestingly, dark microglia have also been found in the vicinity of plaques containing fibrillar materials and encircling dystrophic neurites and synaptic elements in an AD pathology mouse model (APPSwe-PS1Δe9) and in post-mortem brain samples from patients with AD, suggesting that these cells could be involved in active phagocytosis (El Hajj et al., ; St-Pierre et al., ). Coincidentally, treatment of BV-2 cells or rat primary microglial cultures with Aβ induced the expression of LPL. In this context, Lpl silencing with short hairpin (sh)RNA in BV-2 cells reduced Aβ phagocytosis, linking the expression of LPL to microglial phagocytic performance (Ma et al., ). During early postnatal development in rats, the activity of LPL in the whole brain increases, reaching a peak between postnatal day (P) 5 and P10 (Nuñez et al., ). The hippocampus shows the most remarkable increase (Nuñez et al., ). The hippocampus also remains the structure with the highest enzymatic activity in adulthood, coincident with LPL expression (Bessesen et al., ; Nuñez et al., ). The temporal pattern of LPL activity coincides with the onset of the myelination process during development (Zeiss, ). Considering the need for lipids in this process, LPL emerges as a promising key player in myelination (Nuñez et al., ; Zeiss, ). Of note, PAM upregulated genes involved in lipid processing, including Lpl , Fabp5 , and Apoe , at P7 in mice (Li et al., ). Supporting the hypothesis of PAM's involvement in myelination is their main location in the developing white matter (Li et al., ). This microglial state was shown to be involved in active phagocytosis of oligodendrocytes and astrocytes during development (Li et al., ). Although studies regarding LPL function during development are scarce, recent studies associate LPL with the remyelination process, especially in relation to microglial functions.
In mice with experimental autoimmune encephalomyelitis, a model of human demyelinating disease, the activity of LPL was increased after 30 days, coincident with the initiation of symptom improvement (Bruce et al., ). Using an in vitro approach, the authors showed that treatment of BV-2 cells with an Lpl shRNA lentiviral vector to reduce LPL expression induced a pro-inflammatory state, and their secretome impeded remyelination in brain slices treated with lysolecithin, a demyelinating agent (Bruce et al., ). These findings suggest that LPL expression is necessary for a microglial repair phenotype, and indeed LPL-expressing BV-2 cells showed increased internalization of lipids in vitro (Bruce et al., ). In fact, reduced expression of LPL led to the accumulation of lipid droplets, along with diminished lipid uptake and fatty acid oxidation, suggesting that LPL is a key regulator of lipid storage and usage (Loving et al., ). In post-mortem brain samples of patients with multiple sclerosis, LPL immunostaining was increased close to active demyelinating lesions and colocalized with IBA1+ cells (Kamermans et al., ). Treatment of macrophages in vitro with antagonists of LPL activity reduced lipid uptake but did not affect myelin phagocytosis, suggesting that other mechanisms could be involved in the metabolism of myelin lipids by macrophages and microglia (Kamermans et al., ). Overall, microglial expression of LPL seems to be tightly regulated and is among the set of genes that characterize different microglial states' transcriptomic signatures (e.g., PAM and DAM). This suggests that LPL is involved in key processes during development and pathology, with myelination and its relationship to microglial phagocytic activity being the most promising link to focus on. Apolipoprotein E APOE is mostly expressed by astrocytes in the CNS under homeostatic conditions (Xu et al., ). This apolipoprotein is secreted and lipidated with cholesterol and phospholipids, forming discoidal high-density lipoprotein (HDL) particles and reducing the lipid content within the cells (reviewed in Wolfe et al. ) (Figure ). These lipids can then be redistributed to other cells, making APOE a main regulator of cholesterol distribution within the CNS. In the case of microglia, even under conditions such as a kainic acid insult in mice, APOE expression was restricted to only 6 ± 3% of CD11b+ cells (Xu et al., ). Recently, using scRNA-seq, ApoE was found to be upregulated in the early transition from homeostatic microglia to DAM in the 5xFAD mouse model of AD pathology (Keren-Shaul et al., ). Overall, ApoE is upregulated in, and characterizes, a subset of microglia under pathological conditions. APOE displays three different allelic variants in humans, which lead to different isoforms named APOE2 , - E3 , and - E4 , with APOE4 being the major genetic risk factor for AD. A recent study addressed the relationship between this isoform and 18-kDa translocator protein (TSPO) expression in the brain. The study, which included 79 healthy controls, 23 participants with mild cognitive impairment, and 16 participants with AD dementia, showed using positron emission tomography that [ 11 C]PBR28-TSPO uptake was higher in APOE4 carriers in cortical areas such as the transentorhinal and entorhinal cortices, as well as the hippocampus, and this was further associated with cognitive decline and hippocampal atrophy (Ferrari-Souza et al., ).
These results could be interpreted as overall "neuroinflammation," but altered neuronal activity can also increase TSPO and thus lead to a mistaken interpretation (Notter et al., ). However, in support of microglial responses being associated with APOE isoforms, it was observed that microglia stained for IBA1 were less ramified in patients with dementia, particularly in the hippocampus and the superior and middle temporal gyri. APOE4 carriers with dementia further showed an increase of rod microglia in the superior and middle temporal gyri (Kloske et al., ). Rod microglia display an elongated morphology with two long primary processes but scarce secondary ones, and have been consistently reported in preclinical models and post-mortem samples of patients with different pathologies, including AD and epilepsy (reviewed in Giordano et al. ). Also, patients with AD who carry APOE4 showed an increase in perilipin-2 colocalizing with IBA1+ cells and NeuN+ cells, suggesting an increase in lipid droplets in microglia and neurons, respectively (Wang et al., ). In rodents, APOE4 was shown to regulate microglial metabolism in aged mice (Lee et al., ). A subset of microglial cells obtained from aged mice expressing human APOE4 showed an increase in Apoe4 expression along with upregulation of genes related to glycerolipid metabolism and glycolysis. Of note, this subset was also characterized by genes associated with the DAM/microglial neurodegenerative phenotype (MGnD) signatures, including Lpl , Ch25h , Fabp5 , and Apoe (Lee et al., ). When these cells' metabolism was addressed with a Seahorse platform, E4 microglia showed enhanced glycolysis and reduced mitochondrial respiration compared to E3 microglia; when exposed to an inflammatory stimulus such as LPS, E4 microglia further increased glycolytic ATP production, whereas E3 microglia were able to increase mitochondrial respiration (Lee et al., ). In a cuprizone model of demyelination, microglia in Apoe4 mice showed an accumulation of lipid droplets and decreased phagocytic activity, suggested by a decrease in CD68 staining compared to Apoe2 mice, which resulted in impaired remyelination (Wang et al., ). These results indicate that APOE isoforms can shape microglial metabolism and phagocytic function. APOE can interact with the low-density lipoprotein receptor (LDLR) and LDLR-related receptor 1 (LRP1), as well as with TREM2, which is expressed in myeloid cells including microglia (reviewed in Wolfe et al. ). Indeed, TREM2 and APOE signaling are tightly linked to the microglial state. Apoptotic neurons are able to activate this cascade, transforming microglia from a homeostatic state to MGnD (Krasemann et al., ). TREM2 also proved to be critical for microglial metabolism in the 5xFAD model of AD pathology. The absence of Trem2 led to an increase in autophagy along with a derailment of the mTOR pathway in microglia (Ulland et al., ). In line with these findings, the absence of Trem2 in APPSwe-PS1Δe9 mice led to increased cognitive impairment, reduced recruitment of microglia to the plaques, and larger plaques (Fitz et al., ). Mounting evidence suggests that the APOE-TREM2 pathway is crucial in determining the microglial state during AD pathology. In contrast with these findings, during postnatal development, even though PAM share similarities with the DAM transcriptional signature, they do not seem to depend on APOE-TREM2 signaling, suggesting that other mechanisms could be involved at this stage (Li et al., ).
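Returning to the Seahorse measurements mentioned above, the sketch below shows how glycolytic versus mitochondrial ATP production rates are typically derived from extracellular flux (OCR/proton efflux) traces; the conversion factors follow commonly used ATP rate assay defaults, and all input values are illustrative assumptions, not data from the cited study.

```python
# Simplified ATP production rate estimates from extracellular flux data.
# Conversion factors are commonly used assay defaults (assumed values).
P_O_RATIO = 2.75   # ATP produced per O atom consumed
CCF = 0.61         # CO2 contribution factor to the proton efflux rate

def atp_rates(ocr_basal, ocr_oligo, ocr_rot_aa, per_basal):
    """Return (mitoATP, glycoATP) production rates in pmol ATP/min."""
    ocr_atp = ocr_basal - ocr_oligo         # ATP-linked respiration
    mito_ocr = ocr_basal - ocr_rot_aa       # total mitochondrial OCR
    mito_atp = ocr_atp * 2 * P_O_RATIO      # 2 O atoms per O2 molecule
    glyco_atp = per_basal - CCF * mito_ocr  # ~1 ATP per lactate/H+ exported
    return mito_atp, glyco_atp

# Illustrative numbers only (pmol O2/min and pmol H+/min)
mito, glyco = atp_rates(ocr_basal=80, ocr_oligo=30, ocr_rot_aa=15, per_basal=120)
print(f"mitoATP ~{mito:.0f} pmol/min, glycoATP ~{glyco:.0f} pmol/min")
```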
Microglial TREM2 also plays a protective role in stroke (Wei et al., ). TREM2, as well as TGFβ1, was upregulated in mouse primary microglial cells under oxygen–glucose-deprived conditions (Wei et al., ). Upon Trem2 silencing, microglia accumulated lipid droplets and decreased their TGFβ1 expression (Wei et al., ). This was also demonstrated in vivo, where knocking down Trem2 resulted in an increased cerebral infarct size and worsened behavioral outcomes (Wei et al., ). Polyunsaturated fatty acids (focus on n-3 PUFA ) There are two primary families of polyunsaturated fatty acids (PUFA), n-6/omega-6 and n-3/omega-3, the prototypical members of which are, respectively, linoleic acid, the precursor of arachidonic acid, and α-linolenic acid, the precursor of eicosapentaenoic (EPA) and docosahexaenoic (DHA) acids (Layé et al., ) (Figure ). Linoleic acid and α-linolenic acid are essential fatty acids obtained solely from diet/consumption (Layé et al., ). Recent work indicates that microglia are enriched in EPA (Cisbani et al., ). While n-3 PUFA were previously considered to be inert components of cellular membranes, research from the past two decades has focused on ascertaining a role for these molecules in microglial function. N-3 PUFA supplementation or deprivation alters lipid profiles in microglia. For example, n-3 PUFA-enriched diets given to mouse dams increased EPA, phosphatidylinositol, and phosphatidylserine levels in microglia isolated from pups (Rey et al., ). Deficiency of n-3 PUFA throughout gestation and lactation in mice changed the lipid composition in the pups' brain, decreasing n-3 PUFA and increasing n-6 PUFA (Madore et al., ). In the hippocampus, an n-3 PUFA-deficient diet leads to a reduction in microglial process motility as well as slower process retraction (Madore et al., ). Also, microglia from n-3 PUFA-deficient animals showed an altered bioactive lipid profile, with greater levels of arachidonic acid-derived mediators and lower amounts of DHA- and EPA-derived mediators (Madore et al., ). Boosting n-3 PUFA levels, whether through dietary or genetic means, has been shown to reduce microglial numbers, both in the steady state and in response to disease or injury states (Baazm et al., ; Dinel et al., ; Hakimian et al., ; Hopperton et al., ; Jiang et al., ; Kalogerou et al., ; Mondal et al., ; Tenorio-Lopes et al., ). Conversely, reducing DHA levels increased microglial numbers (Talamonti et al., ). This also occurred in mouse pups when their dams were deprived of n-3 PUFA; furthermore, microglia from pups of dams with n-3 PUFA deficiency had increased dendritic spine phagocytosis, showing increased cellular inclusions (including spines) and increased post-synaptic density-95 levels (Madore et al., ). However, the effects of n-3 PUFA on microglial number may depend on age and brain region, as DHA deficiency in dams reduced microglial number, as well as contact with myelin, in white matter regions of pups at P10, and increased microglial mitochondria number and cell proportion (Decoeur et al., ). This corresponds to postnatal time points when there is high microglial heterogeneity and the emergence of microglial states, such as PAM or dark microglia (Vecchiarelli & Tremblay, ). This may indicate that n-3 PUFA changes are particularly important for altering non-homeostatic microglial states during development.
Furthermore, there are sex differences in the effects of DHA deficiency on microglia: in females, DHA deficiency has been shown to reduce microglial number (Rodríguez-Iglesias et al., ). Together, this highlights the need for more in vivo analyses of the effects of fatty acid supplementation and depletion on microglia, paying careful attention to region, duration, time point, and sex. There are a number of reports showing that n-3 PUFA exposure can alter microglial inflammatory signaling. Exposure to n-3 PUFA prevented LPS-induced increases in matrix metalloproteinase (MMP)9, an enzyme involved in the degradation of the extracellular matrix, in rat primary microglial cells (Liuzzi et al., ). Exposure to the n-3 PUFA EPA and DHA reduced nitric oxide production in primary microglial cells in response to LPS, as well as to IFNγ and myelin (Chen et al., ). N-3 PUFA administration also reduced levels of CD16+ IBA1+ microglia but increased CD206+ IBA1+ microglia in the cortex following traumatic brain injury in adult male rats, potentially through reducing HMGB1/NF-κB pathway signaling (Chen et al., ). DHA promoted small lipid droplet formation and normalized LPS-induced large lipid droplet formation in microglia from organotypic hippocampal slices from P6-P8 mice (Chang et al., ). In rat primary microglial cells, DHA administration blunted Japanese encephalitis virus infection-induced increases in pro-inflammatory cytokines (IL-1β, IL-6, and TNF) (Chang et al., ). Additionally, n-3 PUFA impact microglial phagocytic activity. Incubation of mouse primary microglial cells with arachidonic acid increased the phagocytosis of synaptosomes in vitro, while incubation with DHA or EPA decreased their phagocytic activity (Madore et al., ). This increase in phagocytosis was partially mediated by 12/15-lipoxygenase (LOX)/12-HETE signaling (Madore et al., ), supporting the idea that bioactive lipid mediators play a key role in modulating microglial functions. In contrast to the phagocytosis of synaptosomes, DHA and EPA led to an increased phagocytosis of myelin by mouse primary microglial cells (Chen et al., ), suggesting that their effect on phagocytic activity depends on the stimulus and context. Together, these works reveal an emerging role for n-3 PUFA in regulating microglial form and function. As n-3 PUFA can be supplemented in the diet, the study of their function is increasingly important, both for understanding the effects of this common dietary supplementation and for evaluating n-3 PUFA as potential therapeutic agents (Layé et al., ). Cholesterol Cholesterol is abundant in the brain, which contains around 20% of whole-body cholesterol content, and most of it (up to 80%) is localized in myelin sheaths (Yutuc et al., ; Zhang & Liu, ). Cholesterol is a key component of cellular membranes and serves as a precursor for multiple active metabolites. Since blood-circulating lipoproteins are prevented from crossing the blood–brain barrier, cholesterol synthesis in the brain occurs de novo. In fact, a burst of cholesterol synthesis occurs during developmental myelination; after this process, cholesterol turnover becomes low and steady throughout adulthood (Zhang & Liu, ). Microglial cholesterol metabolism is currently gaining attention, particularly for its role in regulating pro- and anti-inflammatory states, as well as its relation to the remyelination process.
As myelin sheaths are enriched in cholesterol, it is interesting to consider how microglia deal with phagocytosing myelin, as this could hint at potential therapeutic targets for demyelinating diseases. Mouse primary microglial cells exposed to LPS in vitro secreted 25-hydroxycholesterol, a cholesterol metabolite, which was internalized by astrocytes, suggesting that this metabolite could be involved in cell–cell communication in a pro-inflammatory context (Cashikar et al., ) (Figure ). Indeed, it induced the secretion of cholesterol-loaded APOE from mouse primary astrocytes, facilitating cholesterol distribution among cells and possibly influencing the function of other cells, including microglia (Cashikar et al., ). For example, the presence of APOE in mouse primary microglial cells enhanced the efficiency of Aβ degradation by reducing the intracellular levels of cholesterol (Lee et al., ). Also, the presence of 25-hydroxycholesterol regulated the expression of key genes involved in cholesterol transport (upregulation of Abca1 and downregulation of Ldlr ) and biosynthesis (downregulation of Srebf2 , Insig1 , Acat2 , Hmgcr and Dhcr24 ), which resulted in increased cholesterol efflux and reduced biosynthesis in astrocytes (Cashikar et al., ). All these changes resulted in a diminished intracellular free cholesterol content together with an increase in intracellular cholesteryl esters stored in lipid droplets in astrocytes (Cashikar et al., ). Also, exposure to 25-hydroxycholesterol mimicked the detrimental effects of LPS on long-term potentiation in rat hippocampal slices (Izumi et al., ). Since knock-out mice for Ch25h , a gene that encodes an enzyme required for the synthesis of 25-hydroxycholesterol, did not show these alterations, it is postulated that this cholesterol metabolite is required for the detrimental effects of LPS (Izumi et al., ). Additionally, primary microglial cells obtained from Ch25h knock-out mice showed a reduction of secreted IL-1β after LPS exposure, suggesting that this metabolite is also specifically involved in pro-inflammatory cytokine release. Primary microglial cells obtained from mice expressing human APOE4 produced higher amounts of 25-hydroxycholesterol, and this was associated with an increase in IL-1β secretion following LPS challenge compared with microglial cells obtained from mice expressing APOE2 or APOE3 (Wong et al., ). In line with these findings, it was reported that an injection of 25-hydroxycholesterol into the corpus callosum of mice induced the recruitment of IBA1+ cells and increased levels of IL-1β (Jang et al., ). However, another study showed anti-inflammatory properties of 25-hydroxycholesterol, as it reduced IFNγ signaling in mouse primary microglial cells by disrupting lipid rafts and impeding the increase in caveolin-1 necessary for internalization by endosomes and subsequent IFNγ signaling (Lee et al., ). This resulted in a reduction of the neurotoxic effects of mouse primary microglial cells when co-cultured with mouse primary neuronal cells (Lee et al., ). These opposing effects of 25-hydroxycholesterol could be due to the different concentrations used across studies, with low concentrations more associated with an anti-inflammatory effect and higher concentrations with a pro-inflammatory response (Jang et al., ; Lee et al., ; Wong et al., ). Cholesterol metabolism is also implicated in pain regulation.
Cisplatin, a platinum-based alkylating agent used in chemotherapy, caused tactile allodynia in mice, and the spinal microglia of these animals showed an increase in lipid raft formation. This could lead to a hyperactivation of Toll-like receptor (TLR)4, since TLR4 dimerization, which occurs in lipid rafts, is the first step of TLR4 activation (Navia-Pelaez et al., ). Bulk RNA-sequencing showed that CD11b+ TMEM119+ FACS-sorted spinal microglia from cisplatin-treated mice downregulated the cholesterol transporters Abca1 and Abcg1 and lysosome-related genes, and upregulated arachidonic acid metabolism genes (Navia-Pelaez et al., ). Spinal microglia from animals treated with cisplatin shared an RNA expression profile with DAM, as well as an increased number and size of lipid droplets (Navia-Pelaez et al., ). When Abca1 and Abcg1 were knocked down in microglia, animals showed allodynia in the absence of stimuli, along with higher TLR4 expression and dimerization as well as higher lipid raft content in spinal microglial cells compared with wild-types (Navia-Pelaez et al., ). These data support the idea that cholesterol transport in microglia is tightly related to their immune response and function. Another example is Niemann-Pick type C (NPC) disease, a rare lipid storage disorder in which most patients carry mutations in the NPC1 gene (Colombo et al., ). This gene encodes a protein involved in the transport of lipids from late endosomes/lysosomes to other organelles, so when this route is impaired, lipids accumulate in the endosomes/lysosomes. Microglia express NPC1, and microglia from animals lacking Npc1 showed an increase in intracellular cholesterol that colocalized with CD68, suggesting that the cholesterol accumulated in endosomes/lysosomes (Colombo et al., ). Proteomic analysis showed that these microglia display a DAM-like signature, downregulating homeostatic proteins and upregulating TREM2, TYROBP, APOE, ITGAX/CD11c, and many late endosomal/lysosomal proteins, including LAMP1/2, LIPA, CD68, CTSB, CTSD, and GRN (Colombo et al., ). This shows that the phagolysosomal pathway is strongly upregulated even when lipid flux is impeded. Indeed, microglia obtained from Npc1 knock-out mice showed an increase in phagocytic activity, which preceded neuronal loss in this animal model. Also, myelin phagocytosis was enhanced, but myelin turnover and recycling were impaired, leading to an intracellular accumulation of myelin in the late endosomal/lysosomal compartment of microglia. This was in contrast to what happened in wild-type microglia, where phagocytosed myelin was stored in lipid droplets. Indeed, microglial cholesterol regulation is key for remyelination in the context of demyelinating diseases, such as multiple sclerosis (Berghoff et al., ). Under basal conditions, microglia did not express high levels of sterol synthesis genes, but under acute demyelination (mice treated with cuprizone), they upregulated these genes (Berghoff et al., ). Mice carrying mutations in either sterol synthesis (squalene synthase) or cholesterol efflux ( Abca1 and Abcg1 ) genes specifically in CX3CR1+ microglia also demonstrated that both processes are needed for the restoration of myelin after cuprizone-induced acute demyelination (Berghoff et al., ). These mutant mice showed foamy microglia, suggesting compromised lipid turnover and recycling (Berghoff et al., ).
In particular, sterol synthesis at the level of squalene synthase was critical for switching microglia from a pro-inflammatory state towards one that enables remyelination (Berghoff et al., ). This was possible through the accumulation of desmosterol and activation of liver X receptor (LXR) signaling, which led to increased cholesterol efflux and recycling (Berghoff et al., ). Another study showed that stearoyl-CoA desaturase-1, the rate-limiting enzyme in the desaturation of saturated fatty acids, was responsible for perpetuating a macrophage/microglial pro-inflammatory state, leading to free cholesterol accumulation through inhibition of ABCA1-mediated cholesterol efflux (Bogie et al., ). Inhibition of stearoyl-CoA desaturase-1 also resulted in an improvement in remyelination in ex vivo cerebellar brain slices demyelinated with lysolecithin and in vivo in the cuprizone-induced de- and remyelination model (Bogie et al., ). TREM2 was additionally found to control lipid metabolism in chronic cuprizone-induced demyelination (mice fed with cuprizone for 12 weeks) (Nugent et al., ). After demyelination induced by cuprizone, microglia upregulated Trem2 -dependent genes involved in cholesterol transport and metabolism, including Apoe , Apoc1 , Ch25h , Lipa , Nceh1 , Npc2 , and Soat1 (Nugent et al., ). This was impeded in Trem2 knock-out mice, which resulted in an accumulation of cholesteryl ester and its oxidized form in the brain (Nugent et al., ). Understanding cholesterol regulation in microglia would open new avenues, particularly in terms of treatment for demyelinating diseases. Sphingolipids Sphingolipids are a very diverse class of abundant membrane lipids in the CNS; they are formed by de novo synthesis of sphinganine from serine and a long-chain fatty acyl-CoA, and are divided into three main classes depending on their polar head group: glycosphingolipids, sphingomyelins, and ceramides (Alaamery et al., ). Microglia are rich in sphingolipids (Fitzner et al., ) (Figure ), particularly in brain regions and at time points where altered microglial states (e.g., PAM or DAM) are found, such as the developing white matter, and during disease or injury conditions (Amat et al., ; Cammer & Zhang, ; Dehghan et al., ; Ellison & de Vellis, ; Muscoli et al., ; Pan et al., ; Saito et al., , ; Wang et al., ). Rat primary microglial cells incubated with gangliosides had increased levels of nitric oxide and pro-inflammatory cytokines (Jou et al., ; Min, Pyo, et al., ; Min, Yang, et al., ; Pyo et al., ). Gangliosides contributed to activating the Janus kinase/signal transducers and activators of transcription (JAK–STAT) pathway in microglia via the accumulation of phosphatases in lipid membrane rafts (Kim et al., ). Furthermore, ganglioside administration facilitated the linkage of TLR2 to its adaptor protein, myeloid differentiation primary response protein (MYD88) (Yoon et al., ). Sulfatide, a galactosylceramide, was shown to induce nuclear factor kappa B subunit 1 (NF-κB) stimulation in mouse primary microglial cells (Jeon et al., ). Delivering glycosphingolipid-enriched exosomes intracerebroventricularly into an AD pathology mouse model led to their uptake by microglia and reduced Aβ accumulation, potentially through these exosomes acting as a scavenger for Aβ (Yuyama et al., ).
It seems that ganglioside type may influence microglial pro-inflammatory cytokine release, with GM1, GD3, GD1a, GD1b, and GT1b reducing, and GM3 and GQ1b promoting, this release (Galleguillos et al., ). GM1 administration also ameliorated TREM2-mediated microglial phagocytosis through activating CD33, as well as chronic sleep restriction-induced synapse loss and memory impairment (Tan et al., ). A similar type dependence appears to hold for ceramides. Rat primary microglial cells incubated with C8-ceramide released brain-derived neurotrophic factor (BDNF) without increasing pro-inflammatory cytokine production (Nakajima et al., ). C2-ceramide promoted prostaglandin E2 synthesis in rat primary microglial cells (Akundi et al., ) and increased levels of IL-1β in mouse primary microglial cells (Scheiblich et al., ). β-glucosylceramide accumulation in microglia in Gaucher disease led to phagocytosis of neurons in a mouse model of the disease, as well as in surgical resection samples from patients (Shimizu et al., ). However, other studies have shown that administration of C2- (as well as C6- and C8-)ceramides inhibited pro-inflammatory cytokine production in rat primary microglial cells and LPS-exposed mice; furthermore, C2-ceramide exerts these effects by penetrating microglia and interfering with TLR4-LPS interactions (Jung et al., ). A current understanding in the field suggests that there is a delicate balance between types of sphingolipids, as their ratio is altered in a number of neurodegenerative disease conditions (Alaamery et al., ; Allende et al., ; de Wit et al., ; Sood et al., ; Xiao, ; Yuan et al., ). More research is needed to determine the effects of modulating sphingolipids in microglia, throughout development and in the context of CNS pathological alterations. Plasmalogens Plasmalogens are a particular group of glycerophospholipids that contain a vinyl ether at the sn-1 position of the glycerol backbone (Gorgas et al., ). This particular bond confers hydrophobicity and acid/oxidation lability on plasmalogens (Gorgas et al., ). At the sn-2 position, plasmalogens are usually esterified with a long chain of n-6 or n-3 PUFA (Gorgas et al., ). At the sn-3 position, they are esterified to a phosphate group linked to an alcohol, most abundantly ethanolamine (PlsEtn) or choline (PlsCho), which together with the phosphate represents the polar head (Gorgas et al., ; Vance, ). Ethanolamine plasmalogens are particularly abundant in the human brain, heart, eosinophils, and neutrophils (Braverman & Moser, ). In mice, this type of plasmalogen is likewise abundant in the brain (Braverman & Moser, ). Plasmalogens constitute up to 20% of the phospholipids in cell membranes (Braverman & Moser, ) and are part of subcellular membranes, including those of the nucleus, endoplasmic reticulum, Golgi apparatus, and mitochondria (Braverman & Moser, ). Of note, the highest amount of plasmalogens in mammals is present in the myelin sheath (Dorninger et al., ). The levels of plasmalogens are tightly regulated by their de novo biosynthesis and their degradation (Bozelli et al., ). CNS inflammation has been consistently related to lower levels of plasmalogens, although the cause of this reduction is still unknown (Bozelli et al., ).
Plasmalogens can be targeted by ROS, because of their high oxidative lability, and by cytochrome c under oxidative conditions; they can also be substrates of phospholipase C, which catalyzes the removal of the polar head, and of phospholipase A2, which leads to the generation of bioactive lipid molecules, commonly PUFA such as n-6 arachidonic acid and n-3 DHA (Gorgas et al., ; Jenkins et al., ; Yang et al., ). In particular, different inflammatory stimuli in vitro (e.g., LPS and IFNγ) induced the phosphorylation of cytosolic phospholipase A2 in mouse primary microglial cells, and this was required to increase ROS/nitric oxide production through the lipoxygenase pathway, showing a direct link between pro-inflammatory stimuli and plasmalogen metabolism in the microglial pro-inflammatory response (Chuang et al., ). Also, reduction of an enzyme involved in the biosynthesis of plasmalogens ( Gnpat knock-down) led to activation of the NF-κB pathway in mouse primary microglial cells and enhanced the LPS-induced expression of pro-inflammatory cytokines such as IL-1β. In vivo, the knock-down of Gnpat in mice resulted in a more rounded microglial morphology and increased the expression of Il-1b and Mcp-1 (Hossain et al., ). Indeed, pro-inflammatory stimuli, such as LPS, led to a reduction of Gnpat in the MG6 microglia-like cell line and primary mouse microglial cells, while mice injected intraperitoneally with LPS for 7 days showed reduced hippocampal Gnpat expression (and levels of PlsEtn) along with NF-κB activation in IBA1+ microglia and GFAP+ astrocytes, but not NeuN+ neurons (Hossain et al., ). These results provide in vitro and in vivo evidence that the levels of plasmalogens are related to glial reactivity to pro-inflammatory stimuli. In fact, incubation with plasmalogens delayed the endocytosis of TLR4 in mouse primary microglial cells and the BV-2 cell line, which could explain the reduced pro-inflammatory response to LPS (Ali et al., ). Conversely, Gnpat knock-down in BV-2 cultures accelerated endocytosis after LPS treatment (Ali et al., ). Administration of plasmalogens containing 47.6% PlsEtn, 49.3% PlsCho, 2.4% sphingomyelin, and 0.5% other phospholipids prevented the increase in IBA1+ cells in the hippocampus observed when injecting LPS intraperitoneally for 7 days to induce CNS inflammation in mice (Katafuchi et al., ). Recently, a study showed that the intragastric administration of plasmalogens enriched with EPA, DHA, and other n-3 PUFA for 2 months to 16-month-old female mice improved cognitive performance and overall physical appearance (absence of gray hair, with glossier and thicker coats) (Gu et al., ). Plasmalogen supplementation also increased synaptophysin levels and the number of synapses in the hippocampus, and reversed age-associated morphological changes in microglia (Gu et al., ). Plasmalogen levels were found to be reduced in patients with AD, Parkinson's disease, autism spectrum disorder, and Down syndrome (Dorninger et al., ). Preclinical evidence points to plasmalogens as potential regulators of glial inflammatory functions, and dietary supplementation showed efficacy in counteracting the effects of their diminution in models of both pathology and aging, potentially through their actions on microglia. CHALLENGES REGARDING METHODOLOGY TO STUDY MICROGLIAL LIPID METABOLISM Assessing lipid metabolism in microglia is still a challenge, especially when it comes to understanding homeostatic and pathological conditions in vivo.
Our current understanding of microglial lipid metabolism is mainly provided by experiments in culture, which allow the study of intracellular cascades under controlled conditions and responses to different stimuli. However, primary microglial cultures, cell lines, and human cell-derived microglia are devoid of their microenvironment and usually exhibit a more reactive phenotype, far removed from microglia in vivo (Pesti et al., ). Data generated in vitro nevertheless provide useful information as a starting point to disentangle microglial lipid metabolism. Regarding in vivo studies, microglia's role in brain metabolism, and how microglial metabolism impacts their functions, have been overlooked for several years, and most lipid studies are not microglia-specific (for example, staining for lipids or lipidomics of a whole brain area). Nowadays, scRNA-seq allows for the analysis of transcripts related to lipid metabolism, particularly in sorted microglial cells. Despite this advance, as stated before, a transcriptomic signature does not necessarily provide functional information. Another important consideration is that lipid metabolism is a dynamic process, and many imaging and molecular approaches, like lipidomics, are unable to provide information about kinetics (Kim et al., ). CONCLUSION Lipid signaling and metabolism in microglia are gaining attention since many genes that are upregulated during development and pathological states pertain to lipid processing proteins or transporters (Keren-Shaul et al., ; Li et al., ; Paolicelli et al., ). Indeed, lipids have proved to shape microglial functions, particularly their immune response and phagocytic activity. It is particularly interesting that diet or supplementation with different lipid mediators can influence microglial functions and beneficially or detrimentally impact the CNS across the lifespan. This could occur through modulation of microglial metabolism, which adapts to the available nutrients, or through changes in microbiota composition (Erny et al., ; Thion et al., ). In addition, targets that modulate lipid metabolism could provide new treatment avenues for currently incurable pathologies, including AD and demyelinating diseases. FUTURE DIRECTIONS AND PERSPECTIVE Microglial states are broadening our view of their roles across physiology and pathology. In particular, states associated with pathology share altered lipid metabolism as a common trait. The next step in the field is to understand how lipids modulate microglial functions, considering their dynamics in vivo. Having a clear picture of lipid metabolism and signaling in microglia would allow us to propose targets to shift microglia towards states that could be beneficial for a specific condition. Microglia have been shown to respond quickly to changes in lipid intake, which is promising for potential treatments (Drougard et al., ). Moreover, we still have a long way to go to understand the regulation of lipid metabolism during development and aging, as well as how sex influences these processes. Marianela E. Traetta: Conceptualization; visualization; writing – original draft; writing – review and editing. Haley A. Vecchiarelli: Conceptualization; writing – original draft; writing – review and editing. Marie-Ève Tremblay: Conceptualization; funding acquisition; supervision; writing – review and editing.
Development and evaluation of a deep learning segmentation model for assessing non-surgical endodontic treatment outcomes on periapical radiographs: A retrospective study
Modern endodontic treatments are highly effective in saving teeth that might otherwise need to be extracted. However, like any medical procedure, there is always a chance of failure. Non-surgical endodontic treatment, commonly referred to as root canal treatment, is a dental procedure aimed at treating infection or damage within the tooth's pulp without the need for surgical intervention. The outcome of endodontic treatment is crucial, especially when deciding whether a compromised tooth should be managed with root canal treatment or extraction . Root canal treatment outcomes are predominantly influenced by the nature of the prior dynamic host/infection interaction (preoperative patient factors), the active efficacy of the operator's root canal treatment protocol to sustain a microbial ecological shift and resolve periapical inflammation (intraoperative treatment factors), and the passive ability of the functional tooth and its restoration margin to maintain its integrity and resist infection reversal (postoperative restorative factors) . Evaluating the treatment outcomes of non-surgical endodontic treatment involves several methods, focusing primarily on clinical assessments and imaging techniques . The most commonly used imaging modalities are parallel digital periapical radiographs and cone beam computed tomography (CBCT). The combination of clinical assessments, parallel digital periapical radiographs, and CBCT provides a comprehensive approach to evaluating the outcomes of non-surgical endodontic treatment. Traditional two-dimensional radiographs remain a staple due to their wide availability in dental practices and low radiation dose, and they provide rapid feedback for immediate diagnosis and treatment planning. CBCT provides three-dimensional imaging of teeth, bones, and surrounding structures, offering invaluable information in complex cases . One of the main factors that may influence the outcome of endodontic treatment is tooth integrity . Preoperative clinical evidence of compromised tooth structure, such as a reduced amount, distribution, quality (sclerosed dentine), or integrity (cracks) of enamel or dentine, may reduce the prospect of periapical healing . Endodontically treated teeth often experience tissue loss due to prior pathology, which compromises the mechanical integrity of the remaining tooth structure . This important factor is considered further under postoperative factors. Fractures of restored endodontically treated teeth are a common occurrence in clinical practice. Severely fractured teeth that cannot be salvaged are typically extracted and replaced with implants, bridges, or dentures to restore function and aesthetics. Hence, predicting potential failure during the preoperative phase of non-surgical endodontic treatment is crucial to ensure that patients receive the most appropriate treatment. Artificial intelligence (AI) technology, especially deep learning, has demonstrated significant potential in the field of medical and dental imaging analysis, including applications in oral health care . As AI technology continues to advance, its integration into dental practice can contribute to more accurate diagnoses, enhanced treatment planning, and ultimately improved patient outcomes . AI can contribute to improving diagnosis and treatment, which can in turn increase the success of endodontic treatment outcomes .
Deep learning, or deep neural networks, is built from multiple layers of convolutional neural networks designed to autonomously learn and extract features from image data. Deep learning models can outperform or match the diagnostic accuracy of dental specialists in identifying and diagnosing endodontic issues, such as root canal abnormalities and periapical lesions. The integration of deep learning into endodontic treatment is a promising trend that has the potential to revolutionize the field by improving diagnostic accuracy and enhancing treatment planning. The aim of this study was to develop and evaluate a non-surgical endodontic treatment outcome prediction model using deep learning technology. A Mask R-CNN segmentation algorithm was implemented to outline and separate the root from other structures on preoperative periapical radiographic images with known treatment results and to predict the class label as healed, healing, or disease. The performance of the Mask R-CNN model was evaluated on a test set and also by comparison with the performance of clinicians (general practitioners and endodontists) with and without the help of the model on independent periapical radiographs. The model evaluation was based on precision, recall, F1 score, the area under the precision-recall curve (AUC), and mean average precision (mAP). The clinician evaluation was based on sensitivity, specificity, precision, and mAP. The hypothesis posited that the integration of the Mask R-CNN model with clinicians would improve the accuracy of predicting endodontic treatment outcomes on preoperative periapical radiographs compared to predictions made by clinicians alone. The proposed model is expected to provide AI second opinions for preoperative endodontic treatment planning to ensure that patients receive the most appropriate treatment. This study employed a retrospective experimental design to develop and evaluate a deep learning model for assessing non-surgical endodontic treatment outcomes on periapical radiographs. The study involved two key phases: model development (retrospective phase) and model evaluation (experimental phase). This study was approved by the Human Research Ethics Committee of the author’s University (review board number COA 047/2567) and was performed in accordance with the tenets of the Declaration of Helsinki. Informed consent was waived for all patients because of the retrospective nature of the fully anonymized radiographic images. The radiographic images were accessed on May 14, 2024, for the development of the Mask R-CNN model. Data preparation Deep learning model Clinician evaluation Statistical analysis Electronic health records of patients aged 18 years or older with a routine root canal treatment history were retrieved from the endodontic clinic at Thammasat University hospital for the period from January 2015 to June 2021. The cases chosen for assessing treatment outcomes were selected from those categorized as having a low to moderate degree of endodontic treatment difficulty, as per the AAE Endodontic Case Difficulty Assessment Form and Guidelines (AAE Endodontic Case Difficulty Assessment Form and Guideline, 2022). All intraoperative procedures adhered to the endodontic treatment protocol at the University Hospital. The operators were board certified endodontists. The non-surgical endodontic clinical protocols followed the standard procedures of the Endodontic Clinic at Thammasat University hospital.
All cases were strictly performed under rubber dam isolation, involving conservative access cavity preparation, cleaning and shaping with standardized endodontic instruments, and irrigation with 2.5–5% sodium hypochlorite, saline, and 17% EDTA with ultrasonic activation. Obturation was done using consistent materials and techniques, specifically the warm vertical compaction technique with a resin sealer, ensuring a proper coronal seal. Cases with intraoperative and/or postoperative errors were excluded, which helped assure the accuracy of the results. In evaluating endodontic treatment outcomes, the parameters include clinical and radiographic examinations, which must be synchronized to accurately classify cases. According to the American Association of Endodontists (AAE) and American Academy of Oral and Maxillofacial Radiology (AAOMR) Joint Position Statement (2016), 2-D intraoral radiographs should be the imaging modality of choice for evaluating endodontic patients. CBCT should be considered only when conventional radiographs do not provide adequate information. In this study, cases that required CBCT imaging were excluded. Three board certified endodontists reviewed the results of endodontically treated teeth over a 3-year follow-up period, assessing outcomes through both radiographic (periapical radiographs) and clinical measures. Employing criteria for clinical and radiographic evaluation, the three endodontists categorized the periapical radiographs from the 3-year follow-up into three groups: healed, healing, and disease. In assessing the outcome of endodontic treatment, we used the guidelines for clinical and radiographic assessment as stated by Friedman and Mor. The criteria were as follows: Healed – no clinical signs or symptoms and radiographic evidence of normal periapical tissues; Healing – reduced size of periapical radiolucency without clinical signs and symptoms; Disease – presence of clinical signs or symptoms and/or radiographic evidence of periapical radiolucency. Digital periapical radiographic images were obtained with equipment from different manufacturers using standard imaging protocols. The digital periapical radiographs were taken using the paralleling technique. Exposure settings were 60–70 kilovolts peak (kVp), 4–15 milliamperes (mA), and an exposure time between 0.1 and 1.0 seconds, depending on the tooth site and patient size. The digital sensors used were Size 1 for anterior periapical images and Size 2 for posterior periapical images, with a resolution of 20 line pairs per mm. The Rinn XCP (Extension Cone Paralleling) system was used to hold the digital sensor. All healed and disease teeth were evaluated and included in this study. To address the uneven class distribution arising from the low numbers of healed and disease teeth, preoperative periapical radiographic images of 1,200 teeth with 3-year follow-up results were included and divided into healed (440 teeth), healing (400 teeth), and disease (360 teeth). All preoperative periapical radiographic images were uploaded to the VisionMarker server and web application for image annotation. The public version is available on GitHub (GitHub, Inc., CA, USA). The tooth crown has various characteristics, such as different stages of tooth decay and different types of restoration materials. To reduce such confounding variables, this study focused only on the root portion in model development.
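To make the labelling rule concrete, the following is a minimal Python sketch of the Friedman and Mor outcome mapping described above; the function name and its boolean/string inputs are hypothetical illustrations, not elements of the study’s actual software.

    def classify_outcome(has_signs_or_symptoms: bool,
                         periapical_radiolucency: str) -> str:
        # Map 3-year follow-up findings to an outcome label.
        # periapical_radiolucency is one of "absent", "reduced", "present".
        if not has_signs_or_symptoms and periapical_radiolucency == "absent":
            return "healed"   # normal periapical tissues, no clinical findings
        if not has_signs_or_symptoms and periapical_radiolucency == "reduced":
            return "healing"  # lesion shrinking, patient asymptomatic
        return "disease"      # clinical signs/symptoms and/or persistent radiolucency

    print(classify_outcome(False, "reduced"))  # -> healing

Note that the "disease" branch deliberately acts as the fallback, mirroring the "and/or" wording of the criterion: either clinical findings or persistent radiolucency alone is sufficient for that label.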
Annotation is the process of outlining the root and identifying images to be classified into the healed, healing or disease categories. The images were annotated by drawing the root area with a polygon shape representing the healed, healing or disease class. The root boundaries of the periapical radiographic images were annotated by three board certified endodontists. Owing to differences in manual annotation from one endodontist to another, the ground truth used was the largest area of intersection between all of the endodontists’ annotations. A dataset of 1,080 images was used for model training, validation and testing. To avoid using the training images for further testing, the dataset was split into three parts: 80% training, 10% validation, and 10% testing. The training dataset was used to train the model, while the validation dataset was independent of the training of the model; the model was tested on this dataset to decide when to stop training or to revise training variables. The hold-out test dataset was used to test the trained model. An independent 120-image dataset was used for the clinician evaluation. This work applied a segmentation algorithm focusing on root status for the prediction of the treatment outcome class. Segmentation is a fundamental task in image processing that involves dividing an image into meaningful segments. Mask Region-based Convolutional Neural Network (Mask R-CNN), an extension of the Faster R-CNN object detection algorithm, was used in this study. Mask R-CNN is a powerful deep learning model which combines object detection and instance segmentation. The images were pre-processed by augmentation using the Keras ImageDataGenerator (open-source software). The framework then resized each input image to 256 × 256 pixels to feed into the Mask R-CNN model. The model was pre-trained on the ImageNet and COCO (Common Objects in Context) datasets. The training was performed on an on-premises server with a GPU (Nvidia Tesla V100, 32 GB vRAM; Nvidia Corporation), Nvidia Driver 470.82 (Nvidia Corporation), and CUDA 11.4 (Nvidia Corporation) for 20,000 iterations, with a learning rate of 0.025, 1,882 epochs, and a batch size of 64 images on the training dataset of annotated radiographs. The training loss decreased and was maintained between 15,000 and 20,000 iterations (data in Mask R-CNN model development and annotation). The training loss graph of Mask R-CNN showed that the loss decreased to a value close to 0, indicating that the model effectively learned from the training data, enabling it to recognize object shapes and make accurate classifications. In this study, Mask R-CNN used the annotated preoperative periapical radiographic images with known treatment outcomes to segment the root area by learning from each pixel of the ground truth images. After the positions and shapes of the root were determined, the treatment outcome class was predicted. The treatment outcome probabilities are shown next to the bounding boxes of the mask area; the corresponding figure includes root masks in addition to bounding boxes and matching scores. An independent 120-image dataset with known treatment results (healed – 40 teeth, healing – 40 teeth, disease – 40 teeth) was evaluated to compare the performance of the Mask R-CNN prediction model with that of 20 clinicians: 10 experts who were board certified endodontists and 10 general practitioners (GPs) with at least 2 years of experience in endodontic practice.
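As a rough illustration of how such a training run could be configured, the following Detectron2-style sketch uses the hyperparameters reported above (20,000 iterations, learning rate 0.025, batch size 64, three outcome classes); the dataset names, annotation paths, and the ResNet-50 FPN backbone are assumptions made for illustration, since these specifics are not given here.

    import os
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.data.datasets import register_coco_instances
    from detectron2.engine import DefaultTrainer

    # Hypothetical COCO-format annotation files exported from the polygon annotations.
    register_coco_instances("endo_train", {}, "annotations/train.json", "images/train")
    register_coco_instances("endo_val", {}, "annotations/val.json", "images/val")

    cfg = get_cfg()
    # Backbone choice is an assumption; the study states only that Mask R-CNN was
    # pre-trained on ImageNet and COCO.
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
    cfg.DATASETS.TRAIN = ("endo_train",)
    cfg.DATASETS.TEST = ("endo_val",)
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3   # healed, healing, disease
    cfg.SOLVER.BASE_LR = 0.025            # learning rate reported above
    cfg.SOLVER.MAX_ITER = 20000           # iterations reported above
    cfg.SOLVER.IMS_PER_BATCH = 64         # batch size reported above
    cfg.OUTPUT_DIR = "./output"
    os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

    trainer = DefaultTrainer(cfg)         # writes checkpoints and loss curves to OUTPUT_DIR
    trainer.resume_or_load(resume=False)
    trainer.train()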
None of these readers participated in the clinical care or assessment of the enrolled patients, nor did they have access to their medical records. Each clinician independently evaluated the preoperative periapical radiographic images of these 120 teeth manually and then reevaluated them with the assistance of the Mask R-CNN prediction. For each tooth, the clinicians verified whether the prediction result generated by Mask R-CNN matched their personal evaluation. If there was a discrepancy, the clinicians made the final judgment based on their own clinical experience, taking into account the machine-generated result. After a 1-month interval, they evaluated the same images again in a shuffled order. The data analyses were conducted using IBM SPSS Statistics version 22.0 (IBM Corp., Armonk, NY, USA). The performance of the segmentation model was evaluated on the 10% hold-out test dataset, assessing each segmentation with bounding box relative to the ground truth region in the healed, healing, and disease images by the following metrics: Precision: the accuracy of the model’s positive predictions, calculated as the ratio of true positives (correctly predicted objects) to the total number of positive predictions made by the model. Recall (sensitivity): the ability of the model to find all positive instances, calculated as the ratio of true positives to the total number of actual positive instances in the dataset. F1 score: the harmonic mean of precision and recall, providing a single metric that balances the trade-off between false positives and false negatives. Area under the precision-recall curve (AUC): created by plotting precision (positive predictive value) against recall (true positive rate) at various classification thresholds. Mean average precision (mAP): a single scalar that summarizes the accuracy of object segmentation across multiple object classes. Segmentation accuracy was measured with the intersection over union (IoU) metric between the segmentation with bounding box detection and the ground truth, calculated by a pairwise IoU operation in Detectron. If the IoU value between the generated segmentation with bounding box and the ground truth was less than 0.5, the produced segmentation with bounding box was considered a false detection. The statistical analysis for the segmentation algorithm was calculated as follows:
IoU = area of overlap / area of union  (1)
Precision = TP / (TP + FP)  (2)
Recall (Sensitivity) = TP / (TP + FN)  (3)
F1 Score = 2 × (Precision × Recall) / (Precision + Recall)  (4)
True positives (TP) are positive outcomes that the model predicted correctly, with IoU > 0.5. False positives (FP) are positive outcomes that the model predicted incorrectly, with IoU < 0.5. False negatives (FN) are negative outcomes that the model predicted incorrectly. mAP is the mean average precision across all classes. 95% confidence intervals (CI) were calculated in evaluating these metrics. In the clinician evaluation, the average sensitivity and specificity, as well as the mAP, of predicting endodontic treatment outcomes from preoperative periapical radiographs with and without the help of the Mask R-CNN model were calculated. The intra-rater reliability of each endodontist and GP, as well as the inter-rater reliability of the endodontist group and the GP group, was calculated using Cohen’s kappa.
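As a self-contained illustration of the evaluation arithmetic in Eqs. (1)–(4), the following Python sketch computes the IoU of two axis-aligned boxes and the derived precision, recall, and F1 score; the box coordinates and the TP/FP/FN counts are made-up example values, and the 0.5 IoU threshold mirrors the false-detection rule stated above.

    def iou(box_a, box_b):
        # Intersection over union of two boxes given as (x1, y1, x2, y2).  Eq. (1)
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((ax2 - ax1) * (ay2 - ay1)
                 + (bx2 - bx1) * (by2 - by1) - inter)
        return inter / union if union else 0.0

    def precision_recall_f1(tp, fp, fn):
        precision = tp / (tp + fp) if tp + fp else 0.0   # Eq. (2)
        recall = tp / (tp + fn) if tp + fn else 0.0      # Eq. (3)
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)            # Eq. (4)
        return precision, recall, f1

    # A predicted root box counts as a true positive only when IoU > 0.5.
    pred, truth = (10, 10, 60, 110), (12, 8, 58, 105)
    print(iou(pred, truth) > 0.5)                 # True -> counted as TP
    print(precision_recall_f1(tp=9, fp=1, fn=2))  # example counts, not study data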
The intra-rater and inter-rater reliability analyses were interpreted using the benchmark thresholds proposed by Landis and Koch, with Cohen’s kappa ≥ 0.80 representing excellent agreement. A total of 1,200 cases (46.3% male and 53.7% female; 79.8% were 25–64 years old) treated by board certified endodontists were included in this study. The majority of cases were primary endodontic treatments with pulpal diagnoses of asymptomatic irreversible pulpitis (34.3%), symptomatic irreversible pulpitis (28.2%), and necrosis (32.2%). The periapical diagnoses included normal (4.5%), apical periodontitis (81.9%), chronic apical abscess (12.7%), acute apical abscess (0.3%), and other conditions (e.g., condensing osteitis 0.6%). The types of final restorations included direct composite restoration (4.3%), inlay or onlay (2.1%), crown (26.7%), post and core with crown (66.2%), and endocrown (0.7%). Performance of Mask R-CNN model Comparison with clinician performance The deep learning-based endodontic treatment outcome prediction model was evaluated on the test set and the results are reported below. The Mask R-CNN segmentation model achieved high precision, recall, F1 score, and AUC of the precision-recall curve for both segmentation and class prediction. The mAP of Mask R-CNN was 0.88 (95% CI 0.83–0.93). The overall prediction performance for endodontic treatment outcome, expressed as the AUC of the precision-recall curve, was 0.91 (95% CI 0.88–0.94), 0.83 (95% CI 0.81–0.85), and 0.91 (95% CI 0.90–0.92) for healed, healing, and disease, respectively. An AUC of 1.0 indicates perfect prediction performance. An AUC of 0.5 suggests random prediction performance (equivalent to chance). AUC values between 0.5 and 1.0 indicate varying degrees of prediction accuracy above chance. Therefore, in our study, the AUC of 0.91 for healed indicates high accuracy in predicting healed outcomes; the AUC of 0.83 for healing indicates moderate accuracy in predicting healing outcomes; and the AUC of 0.91 for disease indicates high accuracy in predicting disease outcomes. Examples of segmentation and class prediction outputs from the Mask R-CNN segmentation model in this study are provided in the accompanying figures. To thoroughly assess the applicability of the Mask R-CNN model, we conducted a comprehensive comparison of its performance with that of clinicians for predicting endodontic treatment outcomes on preoperative periapical radiographs. Results of the clinician predictions with and without the help of Mask R-CNN are shown below. The prediction metrics of general practitioners and endodontists improved significantly with the help of Mask R-CNN, outperforming clinicians alone: mAP increased from 0.75 (0.72–0.78) to 0.84 (0.81–0.87) for general practitioners and from 0.88 (0.85–0.91) to 0.92 (0.89–0.95) for endodontists. The intra-rater reliability of each GP and endodontist showed excellent agreement (Cohen’s kappa ranging from 0.87 to 0.95). Regarding inter-rater reliability, both the GP group (Cohen’s kappa of 0.81) and the endodontist group (Cohen’s kappa of 0.85) reached excellent agreement. Mask R-CNN is designed to perform instance segmentation, which involves not only object detection but also pixel-wise segmentation of objects within an image. This capability makes it particularly useful in medical and dental applications where precise object localization and segmentation are crucial. Mask R-CNN has been used to identify and segment tumors in medical images such as ultrasound images. This work was valuable in classifying the benign or malignant nature of breast nodules.
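For readers who wish to reproduce this style of analysis, the following sketch shows how per-class precision-recall AUC and Cohen’s kappa could be computed with scikit-learn; the labels and scores are random placeholders, and the one-vs-rest treatment of the three classes is an assumption about how a multi-class PR-AUC would typically be derived, not the study’s documented procedure.

    import numpy as np
    from sklearn.metrics import precision_recall_curve, auc, cohen_kappa_score

    rng = np.random.default_rng(0)
    n = 120                              # size of the independent evaluation set
    y_true = rng.integers(0, 3, size=n)  # 0 = healed, 1 = healing, 2 = disease
    y_score = rng.random((n, 3))
    y_score /= y_score.sum(axis=1, keepdims=True)  # pseudo class probabilities

    # One-vs-rest precision-recall AUC for each outcome class.
    for cls, name in enumerate(["healed", "healing", "disease"]):
        precision, recall, _ = precision_recall_curve(
            (y_true == cls).astype(int), y_score[:, cls])
        print(f"{name}: PR-AUC = {auc(recall, precision):.2f}")

    # Cohen's kappa for rater agreement; kappa >= 0.80 is read as excellent
    # agreement under the Landis and Koch benchmarks cited above.
    rater_a = rng.integers(0, 3, size=n)
    rater_b = rater_a.copy()
    rater_b[:10] = rng.integers(0, 3, size=10)  # simulate minor disagreement
    print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")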
Medical professionals use Mask R-CNN to segment and identify specific organs or structures within the body, which is essential for surgical planning and image-guided interventions. In dentistry, tooth segmentation and numbering have been performed using Mask R-CNN on bitewing radiographic images; high-quality segmentation masks were obtained, in addition to bounding boxes and class scores, compared with other convolutional neural networks. The model can also be used to identify and segment dental caries in radiographic images, helping with the early detection and treatment of dental caries. To our knowledge, this study was the first to implement Mask R-CNN to predict non-surgical endodontic treatment outcomes. Unlike classification or object detection, which use the entire radiographic image or a root bounding box, the segmentation algorithm was selected because it separates the root, the area of interest, from the surrounding structures to train the prediction model. The results of this study demonstrated the high performance (mean average precision of 0.88) of a deep learning-based Mask R-CNN for predicting endodontic treatment outcomes via root segmentation in preoperative radiographic images. There is room for enhancement, and we aspire to achieve this in the future. This could involve incorporating more input data to further train the model and adopting more advanced, accurate deep learning technologies as they emerge. Prediction performance was highest for the ‘disease’ class, followed by the ‘healed’ class, and lowest for the ‘healing’ class, as shown by areas under the precision-recall curve of 0.91, 0.91 and 0.83, respectively. The ‘healing’ category received the lowest prediction score, a pattern that was mirrored in the clinician evaluation. With the assistance of the Mask R-CNN prediction model, GPs and endodontists achieved superior metrics in predicting endodontic treatment outcomes from periapical radiographs. This study confirmed our hypothesis that integrating the Mask R-CNN model with clinician assessment improves prediction accuracy: we demonstrated a significant improvement in predictive performance when clinicians used the Mask R-CNN model alongside their own assessments. Specifically, the mean average precision (mAP) increased from 0.75 to 0.84 for general practitioners and from 0.88 to 0.92 for endodontists. These results suggest that integrating AI technology can enhance the diagnostic accuracy of clinicians in endodontic practice, potentially leading to improved treatment planning and patient outcomes. The results of this study align with previous research on the automatic detection of dental caries in periapical radiographs using convolutional neural network architectures. Artificial intelligence technology is increasingly being applied in endodontics. Studies on AI applications in endodontics have shown that AI can enhance diagnosis and treatment, leading to improved endodontic treatment outcomes. Numerous studies have demonstrated the effectiveness of deep learning applications in endodontics, including the identification of periapical lesions and root fractures, investigation of root canal system anatomy and assessment of working lengths, detection of separated root canal instruments, and integration of tooth and root detection to improve surgical planning. These results suggest that such applications may benefit beginners and non-specialists by providing expert judgment and clinical decision support. In this study, all endodontic cases were selected based on the criteria of clinical and radiographic outcomes.
The presence or absence of periapical lesions was one of several factors assessed during the classification of treatment outcomes. Although three-dimensional (3D) imaging has become increasingly important in the field of endodontics for diagnosis and treatment planning, providing a more detailed and accurate understanding of tooth anatomy and pathology, in this study 2D periapical radiographs were considered the ground truth because 2D radiography remains the routine choice for most clinicians and endodontists. Our work on developing a high-performance Mask R-CNN model for classifying endodontic treatment outcomes has significant implications for endodontic treatment planning. By providing accurate and reliable classifications of treatment outcomes (healed, healing, and disease), the model can assist clinicians in making more informed decisions regarding the necessity and type of further interventions. This precision can lead to optimized treatment plans tailored to individual patient needs, potentially reducing the incidence of unnecessary procedures and improving overall treatment efficiency. The model serves as a decision-support tool, augmenting clinicians’ diagnostic capabilities and potentially reducing the cognitive load and uncertainty associated with assessing treatment outcomes. This can be particularly beneficial for less experienced practitioners or those dealing with complex cases. The results of applying the Mask R-CNN model in endodontics can inspire further studies on its application in other dental specialties. It sets a precedent for the use of deep learning models in clinical diagnostics, encouraging researchers to develop, refine, and validate similar technologies. There were several limitations to our work. First, the preoperative radiographic image data used for the experiments were retrospective data from a single hospital, involving cases with a low to moderate degree of difficulty. This potentially limits the generalizability of the prediction model. Second, we included only preoperative periapical radiographic images, omitting other important preoperative patient history, signs, and symptoms that should be included in the model. Lastly, compared to large-scale medical imaging datasets, our dataset was extremely small. Algorithm development could benefit from more data from other hospitals or institutions, which would provide more categories and lead to better performance. For future work, multicenter collection of preoperative radiographic image data covering all difficulty categories, together with the inclusion of intraoperative and postoperative complications and the integration of AI algorithms for image analysis and cognitive analysis, should enable generalization of the prediction model in clinical decision-making. Integrating AI into clinical applications can be difficult due to clinicians’ distrust of computer predictions and the potential risks associated with erroneous results. Future work should be designed to use AI models to trigger a second opinion in cases of disagreement between the clinician and the algorithm. By keeping AI predictions hidden throughout the diagnostic process, the risks associated with distrust and incorrect predictions could be minimized, relying solely on human predictions.
Under the conditions of this study, the deep learning-based Mask R-CNN model demonstrated high performance in classifying endodontic treatment outcomes into healed, healing, and disease categories using preoperative periapical radiographic images. The accuracy of clinicians in assessing non-surgical endodontic treatment outcomes was improved when assisted by the Mask R-CNN model compared to their assessments alone. This model is expected to aid in endodontic treatment planning. S1 File Mask R-CNN model development and annotation. (DOCX)
Explaining variation of implementation outcomes of centralized waiting lists for unattached patients
b67b3fbd-48ac-4c4a-9a0f-210f3e9ae935
7068727
Family Medicine[mh]
Implementation of innovations is a key step in the diffusion–dissemination–implementation process in terms of maximizing the likelihood of achieving beneficial outcomes. In the health services field, implementation of tailored healthcare innovations is recognized as a critical strategy for improving health service delivery, health system performance and patient outcomes. However, ensuring implementation in a real-world setting and integrating the innovation into daily routine practice remains complex and challenging. The success or failure of implementation is often associated with context, which encompasses “not only the physical structure but also the dynamic roles, interactions and relationships, within which the innovation unfolds and interacts (p. 6)”. Moreover, contextual factors may explain variations in the way an innovation is implemented. Research investigating variations in the implementation of a particular innovation can help to understand which implementation strategy works best for which patient group, and under what conditions variations in implementation influence healthcare delivery and patient outcomes. Despite increasing recognition that context must be considered when looking at variations in the effectiveness of implementation, this research domain is still evolving. Some authors have stressed the need to conduct empirical studies that include multiple sites and explore how stakeholders in different organizational positions (e.g. health professionals, managers, administrative staff) perceive the implementation of an innovation, in order to identify contextual factors that distinguish high- from low-performing cases. Indeed, identifying differences across contexts regarding how to embrace complex innovations calls for multisite studies to assure replicability. Innovations are introduced regularly in all countries to address problems that appear in healthcare systems. In Canada, the high number of patients without a regular primary healthcare provider has gained increasing attention in reform efforts. Canada’s rate of unattached patients compares poorly to other countries in the Organisation for Economic Co-operation and Development, such as France, Norway and Germany, where under 5 percent of the population reports lacking a regular primary care provider. With approximately 15 percent of the population reporting not having a regular primary care provider, Canada ranks only somewhat better than the USA (23 percent of the population unattached) and the UK (19 percent). Canadian provinces range from 25 percent unattached in Quebec to 8 percent in Ontario. To confront this problem, seven provinces have implemented an innovative organizational model, creating centralized waiting lists (CWLs) for unattached patients in primary healthcare. CWLs are used to centralize requests for family physicians in a given territory, and to match patients with physicians according to the urgency of medical needs and the availability of primary care providers. CWLs have been implemented in many fields of healthcare, particularly as a way to manage waiting lists for elective surgery. To our knowledge, CWLs have not been used outside Canada to match primary healthcare providers with unattached patients. Moreover, the implementation of these complex models in primary care remains unexplored. Questions remain regarding what differentiates high- from low-performing CWLs, and what contextual (e.g.
social, political and geographical) factors, organizational factors (e.g. culture, climate for change), and characteristics of both the innovation (e.g. relative advantage, adaptability) and the individual (e.g. knowledge, attitudes) are associated with the effectiveness of implementation. Indeed, with regard to improving the effectiveness of those innovations, the gap is greatest in the literature that opens up the black box and investigates the contextual conditions involved in implementation success or failure. The present paper attempts to address this gap in implementation research. Study aim Study setting Innovation Conceptual framework This study aims to explain and understand variations in the outcomes of implementation by analyzing the characteristics of CWLs and the contextual factors that influence their implementation. In Canada, seven provinces have implemented CWLs to better manage supply and demand for attachment to a primary healthcare provider. Although significant efforts have been invested to improve patients’ access to attachment through this innovation, very few empirical studies have been conducted on CWL implementation across Canada. This study analyzes CWLs implemented in the province of Quebec, considered a pioneer in CWLs for unattached patients. This province of 8 million residents has attached more patients through CWLs – over 1 million – to a primary care physician than any other in Canada. Quebec has a tax-based system that provides universal access to medical services. The healthcare structure is based on three levels of governance: provincial, regional and local. At the local level, 94 Health and Social Services Centers (HSSCs) are responsible for meeting population needs, and particularly the needs of the most vulnerable, on their local territories. The CWLs in Quebec, called “Guichets d’accès aux clientèles orphelines” (GACOs), were implemented in 2008 by the Ministry of Health and Social Services (MSSS) in collaboration with the Quebec Federation of General Practitioners. They have a dual objective: attaching patients to a family physician, and prioritizing vulnerable patients. GACOs were implemented in each of the 94 HSSCs across the province, within the Local Health Network (LHN), which includes both the HSSC in which the GACO is physically located and all the primary care structures (e.g. Family Medicine Units, network clinics) included in the LHN related to that HSSC. Each HSSC, composed of several healthcare organizations, was responsible for its own GACO and had considerable discretion over local adoption of the innovation. The GACO in each HSSC is managed by a clerk, who receives requests; a nurse, who evaluates patient requests on a priority scale based on their clinical profile in collaboration with a local medical coordinator; and a physician mandated to help attach patients to a family physician. People lacking a family physician can either register directly with the GACO themselves, or be referred by a health professional (e.g. nurse, social worker, physician). Once a person is registered with the GACO, the nurse assigns a priority code based on the urgency and/or complexity of that person’s health needs. The MSSS framework recommends maximum waiting times for attachment at each of the five priority levels (1–5), ranging from less than 30 days for Priority 1 patients who require immediate medical care (e.g. complex pathologies) to no specified wait times for people in good health, considered Priority 5.
Ultimately, patients are matched with a family physician based on the availability and practice characteristics of the family physicians participating in the GACO, the patient’s priority category, and the date of the request. Family physician participation in the GACO is voluntary, and physicians can choose the number of patients they wish to attach from the GACO. To encourage participation, physicians receive a financial bonus for accepting an unattached patient, which is paid at the time of the patient’s first visit. Almost a decade after launching the GACO model of CWLs, a research team in Quebec published a performance assessment focused on four outcomes of the implementation process: new requests for a family physician, change in the number of patients on the waiting list, and the numbers of patients and of vulnerable patients attached to a family physician through the CWLs. The assessment is based on one year of clinical-administrative data from the information systems of 86 of the 94 CWLs in Quebec, and shows very large performance variations between the GACOs of different regions, and even between GACOs in the same region, on all implementation process indicators. The authors concluded that, to understand these variations, qualitative case studies were needed to compare GACOs with relatively high performance on process outcome indicators against those with relatively weak performance. The Consolidated Framework for Implementation Research (CFIR) was used in this study, as it is applicable to complex implementation efforts and comprehensively captures the interplay of factors that influence the implementation of innovations in healthcare services. A meta-theoretical model based on a synthesis of 19 theories and frameworks, the CFIR is widely used in the field of implementation research to understand factors underlying variation in the implementation of innovations. Studies of innovations in obesity management, in health information technology, and in supportive housing programs for persons with serious mental illness, among others, have employed this framework in their analyses. It is organized in five major domains that incorporate 39 constructs considered important to the effectiveness of implementation. Taking a whole-system approach, the five domains were operationalized and adapted in the present study as follows: the outer setting is defined by the larger socio-demographic, economic, political and geographic context surrounding the inner setting within which the GACO is implemented (e.g. patient needs, cosmopolitanism, peer pressure, external policy, resources); the inner setting refers to characteristics (e.g. structural characteristics, networks and communication, organizational culture and climate, readiness for implementation) of the organization where the GACO is implemented: the LHN.
It includes both the HSSC in which the GACO is physically located and all the primary care structures included in the LHN related to that HSSC; the characteristics of individuals involved in the implementation, including GACO staff members and the family physicians of the LHN, consider knowledge and beliefs, self-efficacy, stage of change and identification with the organization; the innovation characteristics describe characteristics of the GACO related to its adaptability to a given setting, complexity, trialability, evidential support and its relative advantages (or disadvantages); and the process of implementation refers to planning, engaging, executing, reflecting on and evaluating the GACO process as interrelated sub-processes known to have a key role in implementation. Study design Data collection and participants Analysis All three forms of data (documents, interviews, field notes) were analyzed. Interviews were transcribed and coded using NVivo software. Documents and field notes were coded manually. Analysis followed a three-stage process. First, we applied a deductive approach using a codebook based on the CFIR to guide data coding and analysis. Coding was controlled through the technique of double coding: a research assistant and a researcher, both blinded to the implementation performance status of the four GACOs, coded all interviews independently, meeting periodically to compare and revise codes. When disagreements were observed, the codebook was adjusted and codes inspired by the CFIR were redefined to better fit the context of GACO implementation. Finally, persistent discrepancies were resolved through discussion with the larger research team. A narrative summary, based on the CFIR constructs, was created for each interview within each case to provide a rich description of each GACO’s story. A case-specific matrix with illustrative quotes for each GACO summarized information related to how each CFIR construct influenced implementation. This process led to four narrative summaries and four case-specific matrices organized according to the CFIR constructs. Second, the first author applied ratings, based on the summaries and matrices, to the coded constructs at the GACO level, using established rating rules based on two criteria: valence (negative influence, positive influence or neutral effect on implementation); and strength/magnitude (extent of discussion of each construct by study participants). Application of the ratings was checked by two co-authors (M-AS, MB) for a random subset of constructs within each case, and a double rating was done by one co-author (M-AS) for another random subset of coded constructs. In the third stage of analysis, we compared ratings for each construct across all four GACOs using a cross-case analytic matrix that was developed to identify patterns of variation by construct across cases. Finally, a detailed matrix of the specific CFIR constructs, with a special focus on categories that distinguished between high and low implementation performance, allowed us to draw conclusions on patterns of variation in the factors that influenced GACO implementation outcomes. As per these rules, constructs were coded as: missing, when too much data was missing to discern a pattern; 0, when data did not allow us to distinguish between high- and low-implementation GACOs; or −1/+1 (weakly) or −2/+2 (strongly) distinguishing low- from high-implementation-performance GACOs.
In our study, we examined the difference in positive or negative ratings between high- and low-performing cases to determine whether a construct was an important distinguishing factor, and we also relied on the supporting qualitative summary. If a difference was at least two points, the construct was considered to make a strong distinction. To ensure scientific rigor, double-blinded coding was performed on all transcripts, and double rating and checks were also conducted on a random subset of constructs within each case. Also, an audit trail was kept of all changes made to the codebook as coding and analysis progressed throughout the study. Preliminary results were discussed between three authors (SAM, M-AS, MB) on several occasions early on in the study, and with the broader research team as the study advanced. A formative evaluation using a qualitative multiple case study design was chosen to gain an in-depth understanding of phenomena in “real world settings [where] the researcher does not attempt to manipulate the phenomenon of interest”. Our study builds on the results of the recent performance assessment of Quebec’s CWLs for unattached patients, which emphasized the need to better understand variations arising both from within the GACOs and from external local or regional influences. Cases were purposefully selected by an advisory committee composed of six decision makers from the three levels of health system governance (provincial, regional and local), four healthcare professionals (two nurses and two physicians) involved in implementing and monitoring GACOs in Quebec, and four researchers from our team. Case selection was based on four performance indicators available for 2013–2014 in the CWLs’ clinical-administrative database related to the GACO implementation processes (new requests for a family physician, change in the number of patients registered with the GACO who were waiting to be assigned a family physician, and the numbers of patients and of vulnerable patients attached to a family physician through the GACO). To compare performance across the province, indicators for each GACO were transformed into rates per 10,000 population and were classified into tertiles of relative performance: low, average or high. Contrasting cases were selected (lower- vs higher-performing GACOs) to understand and explain the contextual conditions that led to different process outcomes and to support achieving theoretical replication. Cases were chosen from within two neighboring regions, Montréal and Montérégie, the two most populous regions of Quebec (together accounting for approximately 42 percent of the province’s population), to include GACOs managing similar numbers of patients and providers. Note that none of the cases were categorized as low or high performing for all process outcomes; we selected cases based on a global assessment of performance across all indicators. For instance, our low-performing cases were classified as low for some indicators and average for others, but overall ranked among the worst GACOs in the province. The relative performance (implementation process outcomes) of the four selected GACOs is summarized below. Data collection took place between May 2015 and June 2016 and drew on three main sources in order to triangulate data: semi-structured interviews with a range of GACO stakeholders, documentation from the GACOs, and field notes taken during and after interviews.
For every case, we conducted semi-structured interviews with between five and eight stakeholders with different perspectives. All staff involved in the GACO implementation (nurses, clerks, local medical coordinators and managers) and a few family physicians with experience in attaching patients from the CWLs were asked to take part in the study. A total of 26 key stakeholders were interviewed, using an interview guide based on the main domains of the CFIR. Interviews lasted between 45 and 90 min. It should be noted that in one case (Case 2), the local medical coordinator position was vacant at the time of data collection and the nurse declined our invitation to participate in the study. Field notes were taken during and after each interview by the interviewers to summarize the main elements discussed with participants, to capture factors emphasized by participants, to describe observations made by interviewers, to reflect on potential explanatory factors and to note modifications to make to the interview guide or clarifications to seek in upcoming interviews. In total, 22 documents (e.g. internal communications, monitoring reports, internally developed tools) about the GACO structure and process were collected and reviewed to understand each component of the GACOs and key stakeholder roles at strategic and operational levels. In this section, we discuss the five main constructs that helped identify relevant and rich explanations of variation between high- and low-performance cases. These constructs were: networks and communication; leadership engagement; available resources; adaptability; and engaging. Outer setting constructs (patient needs and resources; external policies and incentives), inner setting constructs (e.g. goals and feedback, relative priority, tension for change), innovation characteristics (relative advantage, complexity), and individual characteristics (knowledge, attitudes and beliefs, personal attributes) exerted influence on GACO implementation but did not distinguish cases by level of performance. The constructs (valence and strength) pertaining to each domain, based on the CFIR framework, are summarized below. We discuss the most relevant results and present key illustrative quotes from the interview data. In addition, we provide one example related to inner setting constructs and themes distinguishing low- and high-performing GACOs, described in detail below. Inner setting Readiness for implementation Characteristics of the innovation Process Engaging The promotion of the GACO to patients and family physicians in Cases 1 and 2 (rated low) was slower and less developed than in the high-rated cases. Participants from the lower-performing Cases 1 and 2 acknowledged that promoting the GACO, attracting family physicians to attach patients from the GACO, and encouraging patients to use GACO services were major concerns. In interviews, participants stressed the enormous efforts required to publicize the GACO. For example, in Case 1 (rated low), they suggested promoting the GACO by distributing pamphlets to patients in clinics. In Case 2, the manager admitted that there is still a lot of work to be done with family physicians to inform them about the GACO in order to obtain results: My own family doctor didn’t know about the GACO, and she’s on the territory.
Not all small clinics attend the regional department of family physicians meetings, which keeps them up to date, so there is a need for, like in [one region], they do a lot of PR [public relations], they even have a little pamphlet, I kept it, I found it cute that the coordinator himself went around to the clinics and handed them out. Definitely, new doctors who arrive, if they’re looking […]. (Case 1 Clerk) Compared to the other sites, Case 3 (rated high) was keen on marketing and driving use of the GACO, providing clinics with updates on adjustments made during implementation (e.g. distributing bookmarks when launching the online registration) to increase GACO use by family physicians and patients: We promoted on-line registration at that point. We made little bookmarks that we distributed to the clinics on our territory. (Case 3 Manager) Case 4 (rated high) limited promotion of the GACO at first, because they were afraid of having too many patient requests, but later distributed flyers in French and English through clinics, pharmacies, and community organizations (Case 4 Clerk, Nurse Manager): We left them in pharmacies and clinics. Little pads like this. They were all over the area. (Case 4 Clerk) Networks and communication Two levels were considered: within the services of the HSSC where the GACO is implemented, and between the services of the LHN. Within the HSSC Between services of the local health network There was evidence in all cases of good communication and network relationships sustained through informal and regular formal meetings of GACO staff (e.g. clerk and nurse). However, among GACO staff, collaborative practice to address common work issues and rapidly match a patient to a family physician appeared more developed in Cases 3 and 4 (rated high). In Cases 1 and 2 (rated low), staff were still trying to improve work procedures. Moreover, differences were seen between high and low performers in the extent of communication among GACO staff around the goal of minimizing delays to prioritize patients. In Cases 3 and 4 (rated high), staff had developed innovative communication strategies to solve problems arising from the complexity of the GACO process. While all sites were aware of this problem, only GACOs 3 and 4 had smoothed workflow procedures by instituting a communication system involving formal or informal procedures. In Case 4, staff recorded their initial comments, and nurses and clerks developed symbol systems and codes that helped the nurse identify priority patients quickly, ensured work continuity, and enhanced patient access to a family physician: So when patients call me back, they might tell me between noon and two o’clock. We’ve created codes so that I write that the patient called back, and can be reached between noon and two, and I put a code, for example PM. In prioritizing, the nurse will see that it’s between noon and two, with the PM code to call that person. (Case 4 Clerk) The most innovative practice was a formal communication strategy developed in Case 3 (rated high) using an online computer-based system to perform a quick pre-prioritization of patients with special needs and reduce delays for urgent patients (e.g. vulnerable patients) to avoid compromising their health status.
It is noteworthy that in Case 1 (rated low), despite attempts by the clerk to help the nurse prioritize patients, both the manager and the clerk admitted that problems persisted, and said they were eager to put in place a computer communication system to facilitate pre-prioritization of patients: We’re really waiting for the computer program that will help us prioritize them, which will help. We’d certainly like to do more to prioritize them, but we still have to take the time to assess them. (Case 1 Manager) In Cases 3 and 4 (rated high), participants described good working relationships between GACO staff and health professionals working in collaborating clinics. In Case 3, for instance, good communication between the GACO clerk and some of the local clinics facilitated patient care in cases of emergency. In Cases 3 and 4, transmission of medical information and health assessments to family physicians via the medical record of patients referred from the GACO was seen to speed follow-up and increase family physicians’ willingness to participate in the GACO. Some participants (nurse, manager, family physician) in Case 4 (rated high) mentioned that communication and collaboration between GACO staff and physician clinics was sometimes difficult (e.g. GACO staff have to follow up with clinics to know if a patient has been refused by the physician). Nevertheless, GACO staff made efforts to organize discussions with clinics to overcome breaks in communication and minimize their negative impact on patients. In contrast, in Case 1 (rated low), most physician clinics on the territory operated in isolation, which hampered their participation in the GACO. Poor communication between clinics limited collaborative practice and hindered family physician participation in the GACO: Here, we have many clinics, 30 or so clinics that work in silos and don’t speak to each other. There’s no real spirit of belonging to a HSSC. Now it’s starting to improve between certain clinics, but there are some who don’t participate in anything. There are some we never see. These people will almost never take on patients from the GACO. (Case 1 Medical Coordinator) In Case 2 (rated low), some participants reported that the GACO nurse facilitated referral of certain cases through personal contact with family physicians, and helped with patient transfers following the retirement of a family physician. However, local family physicians in clinics had different perceptions and complained of poor communication from the GACO about patients on the waiting list and changes in GACO procedures. They even mentioned that the medical director of the clinic had to seek out this information himself: Regarding patients, no, (physicians have not received information from the GACO). The […] information we received, and it’s been a while, it’s our physician lead (of the clinic) who went to get it. (Case 2 Family Physician 1) Leadership engagement Available resources In Case 2 (rated low) and Cases 3 and 4 (rated high), participants emphasized the leadership of formally appointed leaders (medical coordinators) or emergent leaders (Case 4) in achieving GACO goals. In Cases 2 and 3, medical coordinators played a role in solving problems, encouraging family physicians to attach patients, managing patient requests and attaching many vulnerable patients.
In Case 4, the medical coordinator’s lack of engagement was compensated by the leadership exerted by two professionals who emerged as champions and played an active role in implementation efforts: the GACO nurse, who was enthusiastic, proactive and keen on reducing long waiting times for patients, and who created adaptive strategies to leverage resources; and a young family physician who was highly engaged and attached a large number of babies and vulnerable patients (400) from the GACO during his first year of practice. Despite similarities across Cases 2, 3 and 4 regarding leadership, some key differences between low- and high-performing cases were highlighted. Participants in Case 2 mentioned the difficulty of mobilizing family physicians to attach patients during a period when the medical coordinator was absent and there was a void in leadership. The manager who temporarily filled his role admitted having limited capacity to recruit family physicians and emphasized that implementation had to be physician-driven to incite family physicians to attach patients from the GACO: I went to see several clinics because I knew a few physicians who had worked here in the past. But with doctors, it takes a doctor to talk to them. I may have the best intentions, make action plans, say “I’ll go see this or that clinic.” Little things get done, but the bulk of it has to go through physicians. (Case 2, Manager) Also, despite the Case 2 (rated low) medical coordinator’s support for staff in resolving problems encountered with family physicians, he did not, despite intending to do so, implement a strategy to offer patients alternative health services; a more proactive approach was evident in Cases 3 and 4 (rated high). Moreover, staff expressed their concerns about the lack of continuity in the medical coordinator role and the difficulty of sustaining improvements to the GACO after the coordinator’s departure. In Case 1 (rated low), no specific champion was mentioned. Participants noted that the medical coordinator fulfilled a traditional role, providing support when needed and dealing with problematic family physicians. Compared to other GACOs, he was insufficiently committed to enhancing family physician participation. His ties with family physicians working in clinics were weak: He (the medical director) tried to do a little public relations with the clinics. […] He doesn’t have a very aggressive management style and, as well, these are his colleagues. We can’t forget that this is a doctor talking to other doctors. It’s a delicate balance. He can’t impose on them: he’s a doctor himself. He practices in a clinic, he has his clientele, he knows what he’s talking about because he lives it every day; he can’t impose upon them […]. I don’t know what his relation is, but he’s a very nice man, not aggressive, very forthcoming, but […] that’s that. (Case 1 Nurse) The nurse’s statement was endorsed by the medical coordinator himself, who admitted “not twisting the arms of family physicians” to attach patients from the GACO. “No, I didn’t twist their arms like that” (Case 1 Medical Coordinator). He was not known by one of the family physicians interviewed practising in one clinic: “The medical coordinator at the GACO? No I don’t know who that is” (Case 1 Family Physician 2). Lack of adequate staffing, inadequate technology, high staff turnover and financial constraints were highlighted in all cases as barriers to implementing the GACOs.
Inadequate human resources caused problems both in processing patients’ registration with the GACO and in attaching them to a family physician. For example, a lack of family physician capacity on the territory had a negative impact on the GACO’s ability to handle requests for registration. Lack of stability in the clerk’s role led to delays, as new clerks had to be trained. As well, the lack of additional funds allocated for implementation efforts constrained the recruitment of additional human resources. No additional staff were made available for GACO implementation efforts and no extra support was provided to the HSSC within which the GACO operated. The HSSC had to make do with existing staff, who were assigned additional tasks to implement the GACO: That’s it? They start cutting: the nurse who was supposed to be there five days a week is now on three days. Then she leaves on retirement, and there’s a chance she won’t be replaced. I may end up on my own and then I’ll really lose my mind. (Case 1 Clerk) Along with human resource challenges, GACOs had difficulties accessing the medical insurance database to verify patients’ attachment status (whether they were already registered with a family physician). All cases faced these challenges, but Cases 3 and 4 (rated high) found creative ways to deal with them. Case 4 developed an adaptive strategy to optimize GACO resources by making full use of the local HSSC nursing staff. For example, a full-time clerk and several nurses who were not initially assigned to patient prioritization worked on GACO efforts whenever their schedule allowed. Training many nurses to conduct patient evaluations expanded the potential resource pool and reduced reliance on a single person: The process works well and the fact that we’ve – I’ve talked to others about it when I find myself with other managers – trained a good number of nurses, here and there; we’re open on weekends, and sometimes people don’t show up for their Saturday or Sunday appointments. It’s a good time to catch people at home. The weekend nurses, everyone knows, turn on the computer and sign into the SIGACO (database). All that contributes to reducing the need to bring in pregnant nurses and all that. When I mention this elsewhere, nobody’s doing it […] “That’s a great idea,” they say. Because no one has a budget for this. We have a 0.8 clerk and I believe the salary of one nurse, and in some places that can be a very part-time nurse, so […] in that way, we manage to make the machine roll along smoothly. (Case 4 Nurse) Along the same lines, Case 3 (rated high) managed to overcome human resource constraints through their innovative pre-prioritization system. They recruited a nurse clinician working in another department who agreed to help GACO staff prioritize patients and check their status in the medical insurance database: I would say that she [the nurse in the other department] has a lot on her plate. She helps us depending on the number of patients she has. There are evenings when she can’t do anything for the GACO, and evenings when she can help us out […] She does some prioritization, telephone evaluations for prioritization. It depends. Sometimes we ask her to do the RAMQ [Régie d’assurance maladie du Québec – Quebec Health Insurance Board] checks. Sometimes, in researching on the RAMQ site, you find that they (the patients) have found a family physician by themselves. So there’s an elimination in that time. (Case 3 Nurse) Adaptability In all four cases, the GACO was adapted to meet community needs.
Adaptations were related to one of three GACO activities: registration, evaluation of priority and attachment to a family physician.
Registration
In response to an overwhelming volume of calls, all four cases decided to replace telephone registration with an online registration form. What differentiated high-rated and low-rated GACOs were the procedures designed to support the registration of specific population groups such as newborns and homeless patients. Cases 3 and 4 (rated high) implemented procedures to facilitate registration, which was not done in the low-performing cases. In Case 3, GACO staff developed an internal procedure to register newborns on the waiting list even though the provincial GACO software did not permit registration until the baby had received a health insurance number. Case 4 used an outreach strategy whereby the GACO nurse visited homeless shelters monthly to help patients sort out their eligibility for the GACO and register: If they don't come to the GACO, the GACO will come to them […]. So I tell them "I'm the system that comes to see you." I'm the HSSC nurse who goes out to present them the services offered by the CLSC, and register them for the GACO as well. So then, personally giving them the leaflet. "So now you'll get your life on track, call us back. Your wait to get a family doctor has already begun" […] I prioritize them on the spot. You check if they have their RAMQ [Quebec Health Insurance Board] number […] I look, because I have access, to see if they have a doctor. With some, I've even called RAMQ with them because even I get lost in the system when I call there, so you can imagine what it's like for this poor man. (Case 4 Nurse)
Evaluation of priority
A major difference between the two groups of GACOs was that Cases 3 and 4 (rated high) put in place creative initiatives to offer alternative health services while patients were on the GACO waiting list. This was not done in the lower-rated Cases 1 and 2. For instance, Case 3 offered patients with high needs the possibility of receiving transitional care from an ambulatory unit while waiting for a regular physician. Case 4 developed a center for disease prevention for patients in good health, which helped to evaluate non-urgent patients and offer them a nurse-led check-up, based on guidelines, while they were on the waiting list. If a change in their health status or needs was noted, the process of being attached to a family physician could be accelerated.
Attachment to a family physician
Cases 3 and 4 (rated high) developed strategies to deal with family physician preferences, whereas Cases 1 and 2 (rated low) complied fully with physician demands. Case 3 implemented a restrictive approach and did not allow physician preferences to influence patient referrals. Case 4 implemented a more flexible strategy, with GACO staff adjusting referrals to physician preferences while also taking into consideration the patient's health profile and the importance of their needs. In Case 3, GACO staff urged family physicians to attach patients referred to them, keeping an inventory of the characteristics of each family physician's practice to minimize the chance they would refuse a patient; GACO staff would also reduce the number of referrals to a given family physician as needed to ensure that patients would be seen within appropriate timelines.
In Case 4, despite trying to accommodate family physicians, GACO staff decided to refer patients with mental health problems even when family physicians did not want to take them: In general, we try to provide a mix of cases. If he just wants diabetic patients, we'll try to maybe send him four out of five, with the fifth being a patient with diabetes and mental health issues. We put some effort into accommodating them, but also have a clear view of our reality, which is far from easy. (Case 4 Clerk) In Cases 3 and 4 (rated high), adaptation according to distance and region was a noteworthy theme, whereas no such adaptation occurred in Cases 1 and 2 (rated low). Case 3 used postal codes to refer patients to the closest GACO; in Case 4, this was not always possible due to limited family physician capacity on the territory covered by the GACO, which did not allow patients to choose a region for attachment.
Our study succeeded in identifying factors that enhance implementation effectiveness and may be used to address performance shortcomings in CWLs. Five main influencing factors were seen to operate at different levels, interact synergistically and work together in mutually reinforcing ways to produce implementation process outcomes. These factors were also seen in four other similar studies ( ; ; ; ) that used the CFIR to explain variation in implementation outcomes. At the level of the inner setting, high-performing GACOs displayed greater readiness for implementation than low-performing GACOs. Consistent with three of the four similar studies ( ; ; ), we found that a key ingredient for successful implementation of healthcare innovations was the leadership engagement demonstrated by those who played an active role in supporting implementation (physician champions in our case). Only in one of these studies did leadership not appear as an important distinguishing factor, likely because nursing staff in all units relied on self-management rather than a unit manager. In our study, medical coordinators in the high-performing GACOs showed a high level of commitment, making connections with family physicians at different organizational levels and positively influencing their peers in the clinics to attach vulnerable patients. They exhibited proactive leadership and responded to the needs of patients on waiting lists (e.g. by adapting interventions). Low-performing GACOs were characterized by a leadership void in one case, and a leader less enthusiastic about the GACO mission in the other. Our results also show that in one high-performing GACO, the lack of formal medical leadership was compensated for by nursing and front-line physician leaders, who emerged naturally and actively championed the innovation. These findings suggest that involving champions based on their motivation, their willingness to take an active role in the implementation process, and their strong belief in the cause is an essential step ( ). This should likely precede (and influence) the appointment of a champion with formal power. Our results align with suggestions that the leadership team should include not only those in positions of power, but also stakeholders from different levels of the organization, who can make significant contributions to the implementation process. While local senior physicians have the authority to mobilize their peers, and are essential to implementation, operational front-line staff may also be crucial. For example, the GACO nurse who was deeply engaged in day-to-day operations exerted effective leadership.
In a complex setting, distributed leadership has been found to increase the capacity for learning ( ), and champion teams promote change more effectively than lone champions ( ). With regard to networks and communication, differences were seen between high- and low-performing GACOs. Cohesion and collaboration in the GACO team were more prominent in high-performing cases, and were reflected in regular interaction and enhanced communication, as seen between the nurse and clerk in one GACO who collaborated to solve problems and respond to patients' needs. The role of communication is well established in implementation science ( ; ). Our results concur with findings that cases of effective implementation are more likely to exhibit better working relationships, ongoing communication flows and higher-functioning teams. Another study likewise emphasized the importance of language and communication channels in the implementation of a client-centered service for persons with mental illness. One notable finding in the inner setting relates to how staff dealt with resource constraints, a common challenge in the implementation literature. While all sites faced resource challenges, only high-performing GACOs were able to develop creative strategies to optimize existing resources; low-performing sites displayed inertia and were unable to overcome resource barriers. Our results offer useful insights not only into factors that influence implementation, but also into strategies that key stakeholders (GACO staff) put in place to overcome implementation challenges. Similar strategies have been adopted when implementing advanced access, an innovation to improve timely access to primary care ( ), showing that stakeholders are not passive recipients of innovation ( ), but rather active players in the change process who interact creatively with an innovation and react to challenges with internally developed solutions ( ). Nevertheless, decision makers should know that the availability of human, time and financial resources is an essential condition for enhancing implementation success and sustaining innovations, as shown in studies similar to ours ( ; ; ), and in many other studies related to the implementation of healthcare innovations in general ( ; ; ). With regard to the innovation (GACO) characteristics, our results indicate that high-performing GACOs were more innovative in embracing complexity than low-performing GACOs, and adapted the innovation to their local settings and the needs of patients on the waiting list. GACOs required continuous adaptation by staff. Indeed, adaptation is likely to occur when complex innovations unfold in real-world contexts ( ) involving multiple organizational levels: the GACO is implemented within the HSSC where it is physically located and among all the primary care structures of the LHN around the HSSC. Flexibility in adaptation is described in the literature as being closer to a user-based or bottom-up approach ( ). This might be explained in part by the fact that the province mandated implementation at a local level without developing detailed national guidelines ( ) to which implementers/users must strictly adhere, and in part by different leadership responses in the GACOs studied. Regardless of why flexibility exists, the ability to adapt an innovation increases its acceptability among local users ( ). In our study, adaptability contributed to achieving GACO outcomes (e.g. prioritizing vulnerable populations),
as some GACO staff were able to challenge the existing power relationships with family physicians. Researchers acknowledge that a less prescriptive state-mandated reform encourages creativity and may provide an opening for innovation by institutional entrepreneurs who adopt a proactive strategy to influence the change process ( ). The implementation, dissemination and sustainability of complex organizational models such as GACOs are likely to require a balance between strategic (top-down) directives and the tacit knowledge of local stakeholders (patients served by the GACO, healthcare providers), who contribute empirically grounded knowledge of the local context and their own lived experience ( ). The strategies used to adapt GACOs in the cases studied here could be tested at other sites and refined through the participation of diverse healthcare providers and GACO beneficiaries. Such a process could lead to co-produced and contextualized national guidelines that could improve the quality of delivery of GACOs, and better equip them to achieve the desired outcomes. Conducting implementation research on the GACO innovation over time and across settings, as recommended in the literature, could help to generate the information needed to continue refinements and meet the needs of broader and more diverse populations. It is important to note that in 2016, the innovation underwent major changes: centralization at the Quebec health insurance board (the RAMQ) and management at the provincial level, and prioritization according to five new categories reflecting urgency and health needs ( ). These changes do not, however, reduce the value of our results, which remain informative: adaptation is still needed to overcome external factors shown to be consistent across the four GACO sites (e.g. patients who miss appointments, cultural/language barriers) ( ) and to meet population needs. Despite the numerous studies examining models that similarly aim to improve timely access to healthcare in different countries (England, USA, Canada), very few have identified and explained factors that differentiate high- from lower-performing sites in the implementation of such innovations. Studies that have attempted to address this relationship ( ; ) show that variation with respect to implementation stems mainly from organization-level factors (leadership strategy, availability of human resources such as nurses and physicians) ( ) and from factors such as misunderstanding of the innovation ( ). Given that there are no comparative studies on the implementation of CWLs in primary healthcare, and very few on similar models, it would be worth exploring additional sites to expand knowledge regarding CWL design and identify the most influential factors involved in variation between high- and low-performing sites.
Strengths and limitations
The strength of this study lies, first, in its coverage of almost all GACO staff within the four sites (apart from Site 2), and in interviews with family physicians, who are knowledgeable informants directly involved in GACO implementation and/or affected by the GACOs. Second, the researchers who coded and analyzed the data were blinded to the status of implementation in the four GACOs, which helped reduce bias in the qualitative findings and ensure trustworthiness. Double coding was performed using an iterative process, which also helped to increase credibility.
Analyzing the results through the CFIR helped synthesize the findings and will facilitate future comparison of findings across similar studies adopting the same methodology. A few limitations of the current study should be mentioned. First, the number of participants at Site 2 was lower than at the other three sites, which may have yielded a less rich picture. Additional insights might have been captured if the GACO nurse had agreed to be interviewed. Second, we selected four sites among 94, given that our objective was not to produce statistically generalizable results, but rather a rich, contextualized understanding of each GACO. The final limitation of this study relates to the absence of patients' experience with the GACO. Future research should include interviews with patients who were attached to a family physician through these CWLs, and with those who remain on the waiting list. We also point out that we have presented in detail only 5 of a possible 39 distinguishing constructs. We do not consider this problematic, given that the use of a limited number of constructs has been recommended by the designers of the framework ( ) and others ( ) for implementation analyses aiming to differentiate between high and low implementation effectiveness. Moreover, we did not use the constructs to guide data collection; our semi-structured interview guide was based on the broad domains of the CFIR.
This study provides the first in-depth analysis of CWL implementation. The findings can be used to develop strategies to overcome barriers to implementation, better manage wait lists, and improve performance. Ultimately, they could contribute to reducing inequities in access to a family physician, and in health outcomes, notably for vulnerable populations and those with complex physical and/or mental healthcare needs. The findings are also relevant for decision makers responsible for designing complex innovations, whose decisions shape the development, implementation and scale-up of CWLs. They may also more generally inform the dissemination efforts of similar complex organizational models in different contexts. When implementing this innovation in similar real-world healthcare delivery contexts, and when redesigning implementation strategies, greater consideration should be given to the combination of organization-level factors (leadership engagement, resource availability, networks and communication), intervention characteristics (adaptability) and the process domain (engagement) identified as important to achieving implementation outcomes. Moreover, a mandated innovation that is simultaneously top-down and less prescriptive creates a good opportunity for stakeholders in the field to identify practical ways to bring about change.
Identification of targetable kinases in idiopathic pulmonary fibrosis
7bbe9075-4167-44f9-9f3b-ff5a1f2796ff
8822646
Anatomy[mh]
Idiopathic pulmonary fibrosis (IPF) is a chronic, progressive lung disease that results in fibrotic scarring of the alveolar tissues. Globally, the incidence of IPF is increasing, with approximately 3–9 cases per 100,000 individuals being reported each year. Anti-fibrotic drugs, such as pirfenidone and nintedanib, which suppress disease progression, have been clinically approved for the treatment of IPF [ – ]. However, the overall survival of patients with IPF is poor, ranging from 2 to 3 years. Although a few patients with IPF and severe respiratory failure have been treated with lung transplantation, strict eligibility criteria and a shortage of organ donors often limit transplantation therapy. Multiple studies have shown that damage to the respiratory epithelium and impairment of its repair mechanism play a central role in the development of IPF. In particular, alveolar type II (AT2) cells play important roles in the pathogenesis of IPF because they act as progenitor cells and help in the regeneration of the respiratory epithelium. Currently, multiple gene mutations affecting the function or survival of AT2 cells have been reported in IPF lung tissues. In addition, single nucleotide polymorphisms in mucin 5B (MUC5B), resulting in the abnormal production of mucin, are known to play a role in IPF pathogenesis. Furthermore, the incidence of IPF increases with aging, which suggests the existence of a complex relationship between chronic environmental exposure, infection, host defense/repair pathways, and disease progression. Currently, there are multiple clinically approved kinase inhibitors for a wide range of diseases, including fibrosis and malignant diseases. Furthermore, there are several similarities between the pathogenesis of IPF and that of non-small cell lung cancer (NSCLC), a chronic respiratory disease characterized by abnormal cell proliferation. In addition, the activation of tyrosine kinases and the overexpression of growth factors are known to play important roles in the progression of both pulmonary fibrosis and lung cancers. Nintedanib has the potential to inhibit the activity of multiple kinases, including vascular endothelial growth factor receptor, fibroblast growth factor receptor, and platelet-derived growth factor receptor. Interestingly, nintedanib has also demonstrated a beneficial effect on tumor suppression in clinical trials involving patients with advanced NSCLC. Based on these findings, we hypothesized that other kinase inhibitors may also have the potential to inhibit the progression of IPF. In this study, we analyzed the expression of 612 kinase and cancer-related genes to identify potential therapeutic targets for IPF. We used next-generation sequencing to perform gene expression analysis of 13 and 8 surgically resected lung tissue samples from patients with and without IPF, respectively. Further, we validated the expression of selected genes at the protein level in fibrotic lesions using immunostaining. Patients and sample preparation Targeted RNA sequencing and data analysis Histological analysis Immunohistochemistry (IHC) Signal-to-noise weighted-voting score Statistical analysis This study (registered number: K1505-033) was approved by the Ethics Committee. Patients with or without IPF were enrolled in this study between April 2015 and November 2016 after obtaining written informed consent. The tissue samples with or without IPF were obtained from organs removed for lung transplantation or resected during the treatment of lung cancer at Okayama University Hospital.
The diagnosis of IPF was based on the official ATS/ERS/JRS/ALAT statement. The collected samples were immediately cut into small sections and fixed using the PAXgene® Tissue System (PreAnalytiX, Hombrechtikon, Switzerland) or RNAlater™ (Sigma-Aldrich, St. Louis, MO, USA). RNA was extracted from RNAlater™-fixed samples using the RNeasy Micro Kit according to the manufacturer's protocol. The concentration and quality of RNA were measured using an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). The SureSelect RNA Human Kinome Kit (Agilent Technologies), which targets 612 genes, including 517 protein kinase-coding genes and 46 cancer-related genes, was used for library preparation with RNA samples with an RNA integrity number > 7. Sequencing was performed on an Illumina MiSeq Sequencing System using the V2 Reagent Kit (Illumina, San Diego, CA, USA). The sequencing data were analyzed using the CLC Genomics Workbench (CLC bio, Aarhus, Denmark). Gene expression data were normalized as reads per kilobase per million mapped reads (RPKM). Hierarchical clustering analysis was performed using Cluster 3.0 software with the following adjustment: values were centered on the median of each sample and then normalized by dividing the centered values by the standard deviation of all samples. Gene expression diversity was calculated as follows: the difference between the maximum and minimum RPKM values was calculated for each gene in each patient, and the obtained values were divided by the standard deviation of all samples. The paraffin-embedded tissue blocks fixed using the PAXgene® Tissue System were cut into 5 µm thick sections and stained with hematoxylin and eosin. The severity of fibrosis in these samples was evaluated using the Ashcroft score, as previously described. The paraffin-embedded tissue blocks fixed using the PAXgene® Tissue System were cut into 5 µm thick sections, placed on glass slides, and deparaffinized in d-limonene and graded alcohol. The tissue sections were then incubated in 1 mM EDTA buffer (pH 8.0) for 10 min at 95 °C in a water bath and blocked for endogenous peroxidase activity with 3% hydrogen peroxide in methanol for 5 min. Following the incubation, the slides were rinsed with Tris-buffered saline containing 0.1% Tween 20 and blocked with normal goat serum or normal horse serum for 60 min. The sections were then incubated with primary antibodies overnight at 4 °C. The primary antibodies against doublecortin-like kinase 1 (DCLK1; ab31704), pyruvate dehydrogenase kinase 4 (PDK4; ab71240), and spleen-associated tyrosine kinase (SYK; ab40781) were obtained from Abcam (Cambridge, MA, USA). Anti-Pim-2 proto-oncogene, serine/threonine kinase (anti-PIM2; HPA000285) and anti-serine/threonine kinase 33 (anti-STK33; HPA015742) antibodies were purchased from Sigma-Aldrich (St. Louis, MO, USA), and anti-Erb-B2 receptor tyrosine kinase 4 (anti-ERBB4) antibody (19943-1-AP) was obtained from Proteintech Japan (Tokyo, Japan). Following overnight incubation, the sections were incubated with EnVision+ Single Reagents HRP Rabbit (Dako, Glostrup, Denmark) or ImmPRESS Reagent Anti-Mouse IgG (Vector Laboratories, Burlingame, CA, USA) secondary antibodies for 20 min. Finally, the sections were stained with 3,3′-diaminobenzidine and counterstained with hematoxylin. Samples of IPF case 1 were used to select the genes for the weighted-voting score calculation.
Samples of IPF cases 2 and 3 were used as an independent validation set to evaluate the generalizability of the model. The signal-to-noise statistic (S_x) was calculated as described previously. Briefly, the signal-to-noise statistic S_x, used as the weight for gene x, is calculated as S_x = (μ_Ashcroft≥6 − μ_Ashcroft<6) / (σ_Ashcroft≥6 + σ_Ashcroft<6), where, for each gene, μ_Ashcroft≥6 and σ_Ashcroft≥6 represent the mean and standard deviation of that gene's expression across all samples with an Ashcroft score ≥ 6 (and analogously for μ_Ashcroft<6 and σ_Ashcroft<6). In all, 26 genes showing (μ_Ashcroft≥6 − μ_Ashcroft<6) > 1 were selected as "induced" genes, whereas 15 genes showing (μ_Ashcroft≥6 − μ_Ashcroft<6) < −1 were selected as "suppressed" genes in Ashcroft score ≥ 6 samples (Additional file : Table S1). A weighted-voting classification algorithm was employed to classify samples as Ashcroft score ≥ 6 or Ashcroft score < 6 using the genes selected as described above, and the resulting classifier was tested on the independent dataset (IPF cases 2 and 3). Weights were calculated based on the triplicate Ashcroft score ≥ 6 samples and the triplicate Ashcroft score < 6 samples of case 1. In this scheme, each gene x of a test sample γ in the predictive gene set casts a vote based on its expression in this sample (g_xγ), using the weight S_x, the boundary b_x = (μ_Ashcroft≥6 + μ_Ashcroft<6)/2, and the weighted-voting score V_x = S_x(g_xγ − b_x). The final voting scores were summed (Σ_x V_x); a code sketch of this calculation appears after the corresponding results below. Statistical analyses were performed using STATA software version 15.1 (StataCorp, College Station, TX, USA). The expression of targetable kinase genes was compared between each individual case and the control lung samples using two-tailed paired Student's t-tests. Characteristics of IPF and control patients Gene expression signatures are altered with the progression of fibrosis Gene expression diversity in IPF lung tissues Expression of targetable kinase-coding genes in IPF lung To explore the targetable kinase-coding genes in IPF lung tissues, we sought to identify genes that were upregulated in IPF lung tissues (n = 13) compared to control lung tissues (without IPF, n = 8) with a fold change of > 2. The integrated analysis indicated that three genes (DCLK1, STK33, and cyclin-dependent kinase 1, CDK1) were upregulated above the twofold threshold in the 13 IPF samples compared to the 8 control samples (Table ). Further, considering the heterogeneous nature of IPF, we independently compared the gene expression profiles between the IPF lung samples collected from each of the three cases and the control lungs. In addition, we evaluated the expression of the 46 selected genes encoding kinases with clinically available inhibitors (Additional file : Table S3). However, of the 46, six genes (ALK receptor tyrosine kinase; Ret proto-oncogene; neurotrophic receptor tyrosine kinase 1; neurotrophic receptor tyrosine kinase 3; Fms-related tyrosine kinase 3; and protein kinase C gamma) were excluded from further analysis because their RPKM values were lower than the overall median RPKM value. The findings from each of the IPF cases are shown below. Case 1: A 57-year-old male with a smoking habit (132 pack-years) and a history of right upper lobectomy for squamous NSCLC. High-resolution computed tomography (HRCT) revealed bilateral honeycombing, indicating a usual interstitial pneumonia (UIP) pattern (Fig. a). Pathological examination confirmed that the fibrosis was UIP.
Based on these findings and the exclusion of other causes of pulmonary fibrosis, such as collagen disease, the patient was finally diagnosed with IPF. Lung fibrosis gradually progressed, and total pneumonectomy and bilateral cadaveric lung transplantation were performed. We obtained six lung tissue samples from bilateral lung segments of this patient (Fig. b). Figure c shows the top ten kinase-coding genes upregulated in the IPF samples. DCLK1, PDK4, ERBB4, CDK1, and ribosomal protein S6 kinase A6 were upregulated by more than twofold (Fig. c). STK33, which was significantly upregulated in the integrated analysis, was also upregulated here (log2 ratio 1.35), but without a statistically significant difference. Furthermore, IHC revealed that DCLK1 and PDK4 proteins were mainly expressed in the epithelial layer and smooth muscle cells of fibrotic lesions in IPF lungs, whereas they were expressed in the airway epithelium of control lungs (Fig. d, e). Additionally, PDK4 expression was observed in alveolar macrophages. Of the 40 genes that had clinically available kinase inhibitors, only ERBB4 was found to be significantly upregulated by more than twofold in IPF lung samples compared to the control lung samples (Fig. f). The ERBB4 protein had a similar expression pattern to that of the DCLK1 and PDK4 proteins, being mainly expressed in the epithelial layer and smooth muscle cells of fibrotic lesions in IPF lungs and in the airway epithelium of control lungs (Fig. g, Additional file : Fig. S2). Case 2: A 67-year-old male with a smoking history (23.5 pack-years) who was diagnosed with lung squamous cell carcinoma (T2N0M0, stage IB) along with IPF. HRCT revealed lung cancer of the right lower lobe and bilateral honeycombing, indicating a UIP pattern (Fig. a). The patient underwent right lower lobectomy, and pathological examination confirmed UIP in the lung tissue. Three samples were collected from the resected right lower lobe (Fig. b). Figure c shows the top ten genes upregulated by more than twofold in IPF lung tissues compared to control lung samples. STK33 and PIM2 were the top two genes based on fold change in IPF lung tissues. IHC indicated that STK33 protein was mainly expressed in the epithelial layer of fibrotic lesions in IPF, whereas it was observed in the airway epithelium of control lungs (Fig. d). Furthermore, PIM2 protein was mainly detected in the epithelial layer of fibrotic lesions, smooth muscle cells, and alveolar macrophages in IPF tissues, whereas it was expressed in the airway epithelium and alveolar macrophages of control lungs (Fig. e). Of the 40 genes with clinically available kinase inhibitors, SYK, Bruton tyrosine kinase, cyclin-dependent kinase 4, FGR proto-oncogene (Src family tyrosine kinase), and colony-stimulating factor 1 receptor were upregulated by more than twofold in IPF lung tissues (Fig. f). IHC indicated that SYK protein was mainly expressed in alveolar macrophages and in the epithelial layer of fibrotic lesions (Fig. g). Case 3: A 61-year-old male with a smoking history (82.5 pack-years) who was diagnosed with IPF. HRCT revealed bilateral honeycombing, indicating a UIP pattern (Fig. a), and pathological examination confirmed UIP. The patient underwent a cadaveric transplant of the right lung. We obtained two lung samples, one each from the right upper lobe and the right lower lobe (Fig. b). The top 10 upregulated kinase-coding genes are shown in Fig. c, with DCLK1 being upregulated by more than twofold in IPF samples compared to the control lung samples.
STK33 was also upregulated (log2 ratio 1.30), but without statistical significance. IHC indicated the expression of DCLK1 protein in the epithelial layer of fibrotic lesions and in smooth muscle cells, similar to that observed in case 1 (Fig. d). Of the 40 genes with clinically available kinase inhibitors, Janus kinase 3 was upregulated by more than twofold (Fig. e); however, the fold change was not statistically significant. In cases 4 and 5, no genes were upregulated by more than twofold in IPF lung samples compared to the control samples (Additional file : Fig. S3a, b). The characteristics of the five IPF and four control patients are shown in Table . The median age of the IPF patients was 57 years (range, 56–67 years). IPF lung tissues were harvested from two patients who underwent lung transplantation and from three patients who underwent surgical lung resection for NSCLC. Multiple samples (2–6 samples) were obtained from three IPF patients (cases 1, 2, and 3), whereas a single sample was collected from each of the remaining two patients (cases 4 and 5). Thus, in total, 13 lung tissue samples were obtained from five IPF patients. The median age of the control patients was 64 years (range, 45–82 years). The lung tissues without IPF were harvested from control patients who underwent surgical lung resection for NSCLC. A total of eight lung tissue samples were collected from four control patients (two from each patient). Gene expression analysis was performed using the 13 IPF and 8 control (without IPF) lung tissue samples. Clustering analysis indicated that the IPF and control samples clustered together rather than separating into distinct groups (Fig. a). Hence, considering that the severity of fibrosis may affect gene expression, we divided the IPF tissue samples into two subgroups based on the Ashcroft score, which estimates the severity of pulmonary fibrosis on a numerical scale (Ashcroft score < 6: normal to moderate fibrosis; Ashcroft score ≥ 6: severe fibrosis). As expected, the severe fibrotic samples (Ashcroft score ≥ 6) showed an independent gene signature compared to the moderate fibrotic (Ashcroft score < 6) and control samples, whereas the gene expression signature of the moderate fibrotic samples (Ashcroft score < 6) was not independent of that of the control lung samples (Fig. a). Additionally, only five genes were found to be differentially expressed by more than twofold in moderate fibrotic samples compared to the control samples, whereas 51 genes were differentially expressed by more than twofold in severe fibrotic samples compared to the control samples (Fig. b, Additional file : Table S2). Moreover, we performed an independent clustering analysis of the six IPF samples obtained from a single patient (case 1) and confirmed the correlation between the gene expression signature and the Ashcroft score (Fig. c). Clustering analysis confirmed the correlation between gene expression signature and Ashcroft score in IPF cases 2 and 3 as well (Additional file : Fig. S1a, b). Further, we examined whether the change in gene expression according to the severity of fibrosis in IPF case 1 was also observed in IPF cases 2 and 3. As expected, the signal-to-noise weighted-voting score analysis showed that the gene expression signature of the fibrotic tissues with Ashcroft score ≥ 6 or < 6 in IPF case 1 was reproduced in both IPF cases 2 and 3 (Additional file : Fig. S1c). Altogether, these results suggest that the gene expression signatures are altered with the progression of fibrosis.
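To make the weighted-voting analysis above concrete, the following is a minimal Python sketch of the calculation described in the Methods (signal-to-noise weights, per-gene boundaries, and summed votes), assuming a genes × samples matrix of normalized expression held in a pandas DataFrame. All function and variable names, and the random demonstration data, are illustrative only; the authors report using CLC Genomics Workbench, Cluster 3.0 and STATA, not Python.

```python
import numpy as np
import pandas as pd

def signal_to_noise_weights(expr, severe_cols, mild_cols):
    """Per-gene training statistics for the weighted-voting classifier.

    expr        : DataFrame of normalized expression, genes x samples
    severe_cols : training sample names with Ashcroft score >= 6
    mild_cols   : training sample names with Ashcroft score < 6
    """
    mu_s, sd_s = expr[severe_cols].mean(axis=1), expr[severe_cols].std(axis=1)
    mu_m, sd_m = expr[mild_cols].mean(axis=1), expr[mild_cols].std(axis=1)
    s_x = (mu_s - mu_m) / (sd_s + sd_m)  # signal-to-noise statistic S_x
    b_x = (mu_s + mu_m) / 2.0            # per-gene decision boundary b_x
    return s_x, b_x, mu_s - mu_m         # mean difference drives gene selection

def voting_score(sample_expr, s_x, b_x):
    """Summed vote for one test sample: sum over genes of S_x * (g_x - b_x).

    A positive total votes for the Ashcroft >= 6 class, a negative total
    for the Ashcroft < 6 class."""
    return float((s_x * (sample_expr - b_x)).sum())

# Illustrative run with random data standing in for the kinome matrix:
# columns s0-s2 play the severe triplicate, s3-s5 the mild triplicate
# (case 1 in the study), and s6 a validation sample (cases 2/3).
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.normal(size=(100, 7)),
                    index=[f"gene{i}" for i in range(100)],
                    columns=[f"s{i}" for i in range(7)])
s_x, b_x, diff = signal_to_noise_weights(
    expr, ["s0", "s1", "s2"], ["s3", "s4", "s5"])

# Keep "induced" (diff > 1) and "suppressed" (diff < -1) genes, mirroring
# the 26 + 15 = 41 predictive genes selected in the study.
genes = diff[(diff > 1) | (diff < -1)].index
print(voting_score(expr.loc[genes, "s6"], s_x[genes], b_x[genes]))
```

Because each vote is a signed, weight-scaled distance from the midpoint of the two class means, genes with noisy, overlapping distributions (small S_x) contribute little to the final summed score.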
As IPF lungs typically present with temporally and spatially heterogeneous histological findings, we assessed the gene expression diversity in IPF lung tissues harvested from different segments of the same patient. Therefore, only patients contributing multiple lung samples were included in this analysis (i.e., IPF cases 1–3 and control cases 1–4). As expected, greater diversity was observed in IPF lung tissues than in control lung tissues, suggesting not only histological but also genetic heterogeneity of IPF lungs (Fig. d).
The current study revealed a correlation between the gene expression signatures and the degree of fibrosis, as assessed by the Ashcroft score, and indicated heterogeneity among IPF lung samples based on gene expression. In addition, we identified potentially targetable kinases, such as DCLK1, PDK4, ERBB4, STK33, PIM2, and SYK, which were overexpressed in IPF. Our results demonstrated that DCLK1, followed by STK33, were the most upregulated genes in IPF lung tissues compared to control lung tissues. Consistent with our data, other studies have also reported the increased expression of these genes in IPF lungs [ – ]. Therefore, these genes may be universally upregulated in IPF lung tissues. DCLK1 regulates epithelial-mesenchymal transition (EMT). STK33 has been reported to be associated with cell proliferation as well as EMT in various cancer types. In the current study, DCLK1 and STK33 proteins were expressed in the epithelial layer of fibrotic lesions in IPF lungs. Based on the evidence that epithelial cells differentiate into myofibroblasts through EMT and that myofibroblasts promote lung fibrosis, DCLK1 and STK33 may serve as therapeutic candidates for IPF. In addition, selective DCLK1 and STK33 inhibitors have recently been reported, which may provide alternative therapeutic strategies for IPF by suppressing the proliferation of aberrant epithelial cells and inhibiting EMT, thus hindering the progression of fibrosis. Owing to the heterogeneous nature of IPF, its pathogenesis may vary in each patient. In some patients, IPF may be caused by dysfunction of AT2 cells, whereas in others it may be caused by MUC5B gene aberration. Thus, the development of fibrosis and the expression of genes may vary across individuals. In the present study, each patient showed a different gene expression pattern (e.g., case 1: DCLK1, PDK4, and ERBB4 expression; case 2: STK33, PIM2, and SYK expression). Except for DCLK1 and STK33, these genes were neither identified by our integrated analysis nor found in the IPF Gene Explorer database, indicating that they are not universally expressed in IPF. However, these genes could serve as potential targets for personalized IPF therapy because they were uniquely upregulated in individual patients. Among the selected genes having clinically available specific kinase inhibitors, ERBB4 was the most upregulated gene in IPF case 1. Dreymueller et al. reported that the release of inflammatory cytokines, such as CXCL8 and IL-6, from smooth muscle cells was suppressed by inhibiting ERBB4 expression. In addition, ERBB4 is reportedly associated with EMT in lung and gastric cancer cells. Therefore, ERBB4 could be a potential therapeutic target for IPF. Moreover, preclinical studies on SYK (upregulated in IPF case 2) have reported that its inhibition suppresses TGF-β1-induced myofibroblast activation and the progression of fibrosis in the liver, kidney, skin, and lung [ – ].
Collectively, these results suggest that ERBB4 and SYK are attractive targets for IPF treatment; however, further preclinical studies are needed to confirm the suppression of lung fibrosis following the inhibition of the expression and activation of these kinases. This study had several limitations. The first is the small sample size, potentially leading to skewed results because of selection bias. Second, of the 21 samples analyzed in this study, 13 were obtained from residual specimens of lung cancer surgery, so the series included a relatively high proportion of patients with lung cancer. Although we microscopically confirmed that the samples used for this analysis did not contain lung cancer cells, we cannot completely rule out the possibility that lung tissue in which cancer has developed may have biased our results. Further, IPF case 3 was treated with pirfenidone prior to tissue collection, which may have affected gene expression. Third, we used bulk RNA sequencing analysis to explore the kinome expression profile in IPF lung tissues, as opposed to single-cell RNA sequencing (scRNA-Seq). Unlike scRNA-Seq, the current study does not provide information on the cell type-specific expression of the genes (e.g., in fibroblasts and the alveolar epithelium). Thus, our study results should be interpreted cautiously. However, the upregulation of the genes detected by RNA sequencing was confirmed at the protein level by IHC in the epithelial layer and smooth muscle cells of fibrotic lesions. We believe that future studies using scRNA-Seq will delineate the cell type-specific dynamic changes in gene expression during the process of fibrosis and identify better therapeutic targets. Fourth, when selecting a therapeutic target based on the genetic profile, it remains unclear which part of the heterogeneous IPF lung should be biopsied. A large-scale integrated analysis of multiple patients in whom tissue sampling can be performed from each lobe of both lungs, as in case 1, may provide clues as to whether areas with strong or weak fibrosis are more appropriate biopsy sites. We performed a comprehensive kinase expression analysis using RNA sequencing to explore potential therapeutic targets for IPF and found that DCLK1 and STK33 may serve as candidates for molecular targeted therapy of IPF. In addition, PDK4, ERBB4, PIM2, and SYK might also be attractive targets in individual cases. Additional large-scale studies are warranted to develop personalized therapies for patients with IPF. Additional file 1: Table S1. List of 41 genes selected based on the signal-to-noise statistic. Table S2. Genes expressed differentially by more than twofold in moderate and severe fibrotic samples compared to control lung samples. Table S3. List of 46 kinases having clinically available inhibitors. Additional file 2: Fig. S1. a Clustering analysis of three samples from IPF case 2. b Clustering analysis of two samples from IPF case 3. c Signal-to-noise weighted-voting score based on 41 genes from IPF case 1. Fig. S2. Immunohistochemistry of ERBB4. Scale bar = 200 µm. Fig. S3. Expression of the 40 selected genes encoding kinases having clinically available kinase inhibitors in IPF cases 4 (a) and 5 (b).
Advancing pediatric palliative care in a low-middle income country: an implementation study, a challenging but not impossible task
6d927065-3627-4fff-94ee-2b130420a667
7648318
Pediatrics[mh]
The prevalence of complex life-threatening diseases in children has increased significantly worldwide. In the United Kingdom, the prevalence of life-limiting and life-threatening diseases increased from 12 per 10,000 population in 2003 to 16 per 10,000 by 2007. The prevalence of these diseases in Latin America remains to be determined, since there is a lack of information. The World Health Organization (WHO) has stated that such conditions could benefit from the palliative care (PC) approach. In this light, addressing the burden of these diseases, and thus the need for PC, becomes a growing necessity for healthcare systems worldwide and a moral imperative. A recent regional study assessing the development of pediatric palliative care (PPC) in Europe estimated that approximately 150,000 children need PPC every year. The region offers a total of 680 services to address this need, of which 133 are hospices, 385 are home-care services and 162 are hospital services. The need for PC is expected to be larger in Latin America due to the health inequalities that the region faces. The Lancet Commission on Palliative Care reported an unaddressed lack of access to PC and pain relief worldwide, which mainly affects low- and middle-income countries (LMICs), where the provision of PPC and pain relief is estimated to be neglected. A systematic review reported that 66.7% of the countries in South America did not have any PPC activity until 2011, when the first integrated PPC service was documented. The field of PC has continued to develop and has become an active area of research and advocacy. Even though a new definition of the concept is available, the generalized provision of healthcare in Latin America has focused on improving access to specialized adult services. This results in a health workforce untrained in the field of pediatrics and a lack of PPC services to address the current need. The 2012 Atlas of Palliative Care in Latin America reported Colombia as having four hospice-type residencies, only one second-level service, and 13 tertiary care units that serve both adults and children. There are still no available data regarding the national need for PC; however, according to the Childhood Cancer Outcomes Surveillance System (VIGICANCER), the incidence rate of pediatric cancer from 1977 to 2011 is comparable with that of affluent countries. To date, national PC services are estimated to have grown almost 500% in the past 5 years. However, developing PPC in Colombia requires overcoming even greater barriers than for adults, mainly due to the absence of PC academic training programs for pediatricians and the lack of related educational objectives for other healthcare professionals. In light of the pressing need to promote PPC in Latin America, and considering the characteristics of PPC in Colombia based on the recommendations from the American Academy of Pediatrics (AAP), the Institute of Medicine (IoM), and the International Meeting for Palliative Care in Children, Trento (IMPaCCT) for the introduction of PPC teams, the objective of this paper is to report the process of implementing the PPC program "Taking Care of You" (TCY) in the city of Cali in Colombia. As in other LMICs, with health inequalities and a lack of scientific literature on the subject, Cali represents a context where promoting PPC might seem an impossible task.
This article reflects on the strategies attempted for developing a multidisciplinary program that could provide coordinated care and symptom management to children with life-threatening and life-limiting conditions, geared toward reducing suffering and improving patients' and families' quality of life. Geographic and demographic context Colombian legal framework for PC regulation The program "Taking Care of You" Program strategies To fulfill the baseline program goals and objectives, an eight-step strategy was implemented. This included education and awareness in PPC (a question-based strategy, summarized in Appendix under supplementary data), institutional support, the participation of the PPC team in academic and healthcare activities, advocacy with other actors in the healthcare system, capacity building in PPC, the formation of a multidisciplinary team led by a pediatrician with training in PPC, and research, as described in Table . Colombia has a population of 48,258,494 inhabitants; 51.2% of the population is female, and 22.6% are between 0 and 14 years of age. Cali is a city with 2.5 million inhabitants. There are limited data available regarding the number of pediatric patients in need of PC or the PPC services available in the country; however, the age-standardized annual incidence rate of childhood cancer in Cali from 1977 to 2011 was 141 cases per million. Since 2014, the country has passed relevant measures related to PPC regulation. Important milestones include the regulation of services in the country, including those aimed at the pediatric population; guidelines for inclusive care in PC, differentiating care for adults and children; specific standards for Children Cancer Care Units (CCCU), where PC and pain treatment must be guaranteed from the beginning of treatment; a cancer control plan with a mention of PC; and regulation of the process of dignified death for children and adolescents, allowing euthanasia. (The Colombian regulatory framework is summarized in Appendix under supplementary data.) The program operates in the Fundación Valle del Lili, a nonprofit teaching hospital that functions as a referral center for the southwestern region of Colombia. It has 177 pediatric beds and cares for approximately 50,000 children annually. Our institution has a general PC program that has operated since 2007 and is led by family medicine physicians with a multidisciplinary team. The general PC program had served the entire population, including children, until late 2017, when the specialized PC program for pediatric patients, Taking Care of You (TCY), was launched. TCY is led by a pediatrician with a team that includes a family medicine physician, a nurse, a social worker, and a psychologist. The main objective of the program is to provide coordinated, multidisciplinary and humanized care to promote quality of life for patients under the age of 18 with life-threatening and life-limiting conditions in any hospital care service [emergency room, hospital room, pediatric intensive care unit (PICU), neonatal intensive care unit (NICU), and outpatient setting]. Patients are treated through an inter-consultation or first-time assessment by the multidisciplinary PPC team. This multidisciplinary team is designed to provide coordination of care, support in clinical decision-making, facilitated communication, pain and other symptom management, advance care planning, end-of-life care, and bereavement follow-up, among other services that will be described later.
Two approaches were followed to describe the outcomes of the program. First, the strategies used to assist with implementation were categorized, and second, descriptive statistics were used to describe the program's outcomes. Mapping implementation strategies Program outcomes Education strategies Continuous educational activities in the pediatric units emphasized the importance and health benefits of the early referral of children with life-limiting and life-threatening conditions to PPC, contributing to the progressive growth of the program. Educational activities included PPC and pain management workshops, communication role-play games, printed materials, PPC topics in grand rounds, and clinical case discussions, among others. A theory-based method was applied to describe the implementation process of our program. The approach was based on the Poot et al. article, where a matrix was retrospectively developed from descriptive frameworks. To analyze and display the strategies implemented, a matrix exercise was performed. Strategies of the program TCY in the matrix The strategies were categorized retrospectively with the Cochrane Effective Practice and Organization of Care (EPOC) Review Group Taxonomy 2015 and grouped into the categories and subcategories Implementation strategies, Financial arrangements, and Delivery arrangements (Table ). The matrix combines two frameworks: first, the levels of organization influenced by the implementation, and second, the domains of implementation, providing a comprehensive matrix where defined project activities can be positioned according to their intended target domain and level, facilitating a structured description of the project. The matrix displays the implementation strategies, financial arrangements, and delivery arrangements. Different sources of information were cross-referenced in a database to improve the quality of the information. A descriptive analysis of the variables was performed using the program's administrative registry; no clinical chart review took place. Nonetheless, this study was submitted to the Institutional Review Board (IRB) and Ethics Committee. The results were reported using measures of central tendency and dispersion, according to the data distribution. Categorical variables were summarized as percentages. The variables analyzed included the children served and the referring service, the type of pathology (i.e., oncologic vs non-oncologic), and the place of death per year. All analyses were performed using Microsoft Excel 2016. Information on the referral motive was gathered as part of an internal administrative process through an online survey (Appendix provided in supplementary data). No literature search took place, but the study did undergo IRB and Ethics Committee review. Coverage Disease prevalence at the moment of referral to the program Cause of referral Place of death and bereavement follow-up Strategies Figure shows the integration of the EPOC categories in a single matrix displaying the categorization for the domains: implementation strategies, financial arrangements, and delivery arrangements. The program focused first on advancing educational and professional interaction by promoting communication between interdisciplinary teams. Second, it aimed at developing organizational change by task shifting, promoting incentives and resources, and pursuing change in social, political and legal frameworks.
The program has not stressed change at the policy level, though small changes have been reported in generating specific roles for the program, determining the sites of services, and providing disease management specifically from a PPC perspective. Since the beginning of the program in 2017, a total of 1965 children have been referred. In 2017, before the specific PPC program was created, 146 pediatric patients were assessed by the general PC team, whereas in 2019, 1 year after the start of the "Taking Care of You" program, 771 children were assessed in the pediatric inpatient services and 324 new patients were assessed in the outpatient setting of the hospital. This represents an 81 and 93% increase in PPC demand in the inpatient and outpatient services, respectively (see Figs. and ). A relevant aspect of the program's development was the strategic alliance with the PICU and the pediatric oncology unit, which favored early integration of the PPC team in those units' activities and allowed support in decision-making to promote patient quality of life. This also allowed continuity of care, interdisciplinary communication, and therapeutic adherence. Patients referred to the program were classified as oncological or non-oncological, and we found a similar proportion in both categories, including inpatient and outpatient care. In our institution, the most frequent oncological clinical conditions were central nervous system tumors (53%; mainly medulloblastoma and low- and high-grade gliomas), followed by bone tumors (26%; osteosarcoma and Ewing's sarcoma). On the other hand, non-oncological conditions were mainly related to severe neurological compromise in 38% of the cases (including congenital malformations and chromosomal abnormalities, such as trisomies, microcephaly, and cerebral palsy), followed by inherited muscle disorders and rare diseases. The main causes of referral in the outpatient and inpatient settings were support during the withholding and withdrawal of established treatment (89%), followed by end-of-life care (86%), communication support (70%), pain and symptom control (69%), psychosocial support (67%), decision-making support (63%), and healthcare coordination (57%). The most frequent reasons for referral to the PPC program by department are summarized in Table ; the majority came from the pediatric inpatient ward (52%) and the PICU (23%). The median time from referral to death was 12 days, with an interquartile range of 2 to 61 days. The place of death for most of the patients was the hospital (Table ). Eleven percent of the relatives attended the bereavement workshops in the 2017–2018 period and 25% in the 2018–2019 period. The TCY program accompanied the bereavement of parents and families through follow-up calls, condolence letters, and bereavement support groups with a symbolic butterfly release, in addition to psychological interventions. The program has had around-the-clock availability; it received 81 phone calls in 2018 and 244 phone calls in 2019, related to symptom management (58%) and care coordination (42%). The program has focused on an unaddressed gap, the provision of PPC for children in Cali. It has improved referral times, coordination of care, the availability of compassionate/holistic care for children with life-limiting and life-threatening diseases, and end-of-life care. The implementation of this program has required specific strategies and arrangements to promote awareness and education, which has been a difficult task.
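As a hedged illustration of the descriptive analysis reported above (medians with interquartile ranges for continuous variables, percentages for categorical ones, performed by the authors in Microsoft Excel 2016), the same summaries could be computed from a registry extract in a few lines of Python. The file name and column names below are hypothetical, not taken from the actual TCY registry.

```python
import pandas as pd

# Hypothetical registry extract; the file and column names are illustrative only.
registry = pd.read_csv("tcy_registry.csv",
                       parse_dates=["referral_date", "death_date"])

# Continuous variable: time from referral to death, as median with IQR.
days = (registry["death_date"] - registry["referral_date"]).dt.days.dropna()
q1, median, q3 = days.quantile([0.25, 0.5, 0.75])
print(f"Referral to death: median {median:.0f} days (IQR {q1:.0f}-{q3:.0f})")

# Categorical variable: place of death per year, summarized as percentages.
deaths = registry.dropna(subset=["death_date", "place_of_death"])
place_by_year = (deaths.groupby(deaths["death_date"].dt.year)["place_of_death"]
                 .value_counts(normalize=True)
                 .mul(100)
                 .round(1))
print(place_by_year)
```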
With this implementation experience, we intend to contribute to the PC philosophy in pediatric patient care and hope it serves as a model for other regions and countries. Establishing a PC team with skills, training, and exclusive dedication to the pediatric population was a crucial element in expanding PPC for children with complex chronic diseases; it increased program demand threefold compared with the previous year. As described at the Children's Hospital of Eastern Ontario, Ottawa, a significant increase in patient referrals was associated with the inclusion of social work resources and experienced PC physicians. Additionally, education was key to the progressive growth of the program and could be a strategy to lower resistance from healthcare professionals, since this has been described as a barrier. When applying the theory-based description to our TCY program implementation, we recognized that the program has focused on advancing the capacities and competencies of individual professionals and groups of professionals, and on generating organizational change to find a niche within the hospital, without influencing policy. Small changes at the policy level can be accounted for by the definition of specific PPC roles and settings for patient care. Disease prevalence at the moment of referral to the program showed that the most frequent diagnoses in both the oncological and non-oncological groups were neurological, similar to other implementation studies carried out worldwide. We corroborate that children with neurological conditions, even if not progressive, experience high distress due to symptom control difficulties, the high burden associated with care, and potential complications that could cause premature death, pointing to the important role a PC team plays in the multidisciplinary management of these patients. Referrals to our PPC program were associated with supporting decisions around withholding or withdrawing treatment, and with end-of-life care, especially in the NICU and PICU services, with a median time from referral until death of 12 days. This underlines the late integration of PPC across the wide and varied conditions eligible for the service. In the largest retrospective medical record review of a PPC population in the US, Thrane et al. found that the time from referral to death was 4.4 months. This highlights that children are still not being referred to PC until mere days before death, contrary to recommendations from the WHO and other organizations that PPC be provided from the diagnosis of life-threatening diseases, which in turn could result in less aggressive care and better outcomes at the end of life. The global trend is to promote death at home with patient-family comfort and support measures; however, in pediatrics, multiple difficulties arise when trying to promote this measure, and the hospital is the most frequent place of death. Our team recognized the same difficulty, and most of our patients (74%) died in the hospital. Some of the barriers identified to favoring end-of-life care at home were parents' or caregivers' fear of facing the death of their child alone, the high care needs of a patient at the end of life, the lack of home-care programs with PC training, and the absence of pediatric hospices in the country, similar to what is described in the literature. However, some families were able to care for their children at home during their final days.
Finally, family support during bereavement is an essential biopsychosocial need for the family’s well-being after the loss of a child . Our program provides follow-up to support parents in grief; however, bereavement workshop attendance has been low (25%). We consider this an important aspect to improve by increasing coverage and offering more services to families. Future goals, achievable through the My Child Matters grant, include improving inclusion and access by benefitting more than 500 children in the region, accompanying 1000 families and relatives, and educating more than 1500 healthcare professionals. We also seek to make the enlarged program sustainable. Limitations. This study describes the implementation of a PPC program in a general teaching hospital in Cali, Colombia. It is a descriptive study, and the data were collected retrospectively, which implies that selection and information biases could have been introduced. To mitigate this, our databases were cross-checked with different sources of institutional information to improve data quality. On the other hand, our institution is a specialized reference center for patients with highly complex diseases; therefore, the reported number of patients who benefited from the program is high compared with other institutions. Additionally, the Colombian healthcare system is based on universal coverage provided by several insurance companies, and as such, patients are often transferred to other institutions due to individual insurance preferences or administrative limitations, making it hard to keep track of all PPC patients. Likewise, there was no guarantee that all patients with cancer and life-threatening conditions received outpatient care and completed follow-up through the TCY program, because patients often need difficult-to-obtain authorization from their health entity to receive specialized care such as PPC. Despite these limitations, our implementation model could serve as an example for mitigating gaps in PPC services in Latin America and the Caribbean, where availability of and access to PPC teams are scarce. The creation of a specialized PPC service increases patient referrals and favors a comprehensive approach to patients with life-threatening and life-limiting conditions in a general hospital. Institutional support, philanthropic support, awareness, and education are essential for the viability and impact of PPC programs. There are still many opportunities for improvement, such as shortening the time between referral and death, creating a pediatric hospice, and improving bereavement follow-up, among others. The implementation of a PPC program in Colombia is a difficult but not impossible task to accomplish. With this paper, we hope that other PPC programs can be strengthened or new ones can be created in Latin America and the Caribbean. Additional file 1: Appendix 1. Colombian legal framework related to palliative care. Provides a summary of the Colombian laws and legislation that support pediatric palliative care and promote quality of life for children with life-limiting and life-threatening diseases. Additional file 2: Appendix 2. Program approach strategy. The team developed a question-based strategy to address institutional education and awareness in PPC. Additional file 3: Appendix 3. Institutional satisfaction survey. Online institutional questionnaire applied to healthcare professionals in the pediatric departments to quantify satisfaction and referral motives.
How much off-the-shelf knowledge is transferable from natural images to pathology images?
0dfc0df1-887c-4c8d-97f3-480a4ef86f09
7556818
Pathology[mh]
Pathology is a medical sub-specialty that studies and practices the diagnosis of disease through the examination of biopsy samples under microscopes by pathologists. It serves as the gold standard of cancer diagnosis. To address subjectivity in pathology examination , computational pathology exploits image analysis and machine learning for the understanding of histological information in tissue images. Owing to its time-efficiency, consistency, and objectivity, computational pathology emerges as a promising approach to cancer diagnosis and prognosis. Inspired by domain knowledge of cancer diagnosis, many algorithms based on hand-crafted feature engineering were proposed to classify pathology images using nuclei morphology and spatial-distribution features and image texture features . Though pathology image diagnosis has achieved impressive progress using hand-crafted feature engineering, effective numerical representation of heterogeneous histological information in pathology images is still the bottleneck. To address this issue, data-driven methods, especially the end-to-end training of convolutional neural networks (CNNs), are adopted more often in recent pathology image classification studies . Though data sets containing hundreds of pathology images are considered “quite” large, they are still far smaller than the number of parameters in a medium-size neural network. Consequently, deep diagnostic models trained on these data sets are prone to over-fitting and less generalizable in pathology practice. To address the shortage of large databases in deep pathology learning, collecting large pathology image sets is highly desirable. However, due to the difficulty and time-consuming nature of pathology annotation, large labeled pathology databases are expensive to collect. With recent advances in whole-slide imaging, we believe that very large pathology image sets would accelerate the development of deep learning in computational pathology. At the same time, an alternative way to address the shortage of large databases in deep learning is transfer learning. In transfer learning, a “data-hungry” net is first trained on a very large database, e.g. ImageNet, and the pre-trained model is then applied to relevant but different tasks. Many studies have demonstrated its effectiveness in data-scarce applications related to natural image classification and object recognition , and natural language processing (NLP) . However, due to the lack of a very large annotated pathology image database, there is no reliable pre-trained deep model available in computational pathology. Hence, different from prior studies where data in the original and target tasks share similar properties (e.g. training and test sets are composed of natural images), transfer learning in computational pathology usually adopts CNNs pre-trained on natural images instead . It should be noted that though there are different strategies, transfer learning is essentially the use of knowledge gained in one task to solve a new but related problem. Hence, the transferability of knowledge heavily depends on the similarity between the original and target tasks, and features transfer more poorly when the datasets are less similar .
Consequently, on one hand, when using off-the-shelf features in transfer learning, one needs to identify the layers generating general features so that layers computing task-specific features are either discarded or fine-tuned; on the other hand, in the transfer learning strategy of fine-tuning a pretrained model, one needs to specify the values of hyperparameters in fine-tuning, such as the learning rate and the number of iterations for model refinement (i.e. similar target and source tasks usually require less refinement). As researchers focusing on computational pathology, we are fully aware of the significant differences in image contents and statistics between pathology images and natural images (which is demonstrated in ), and we want to investigate the effectiveness of transfer learning by answering the following questions: Is transfer learning still effective from natural image classification to computational pathology? Which layer in a deep net contributes more to pathology image diagnosis? Is there a sweet spot to balance the transferred model’s complexity and performance? Though answers to these questions form the basis of current pathology-image-centered transfer learning, little literature tackles them explicitly and, to the best of our knowledge, there are only two studies related to our questions. The study in concludes that fine-tuning a pre-trained net outperforms training a CNN from scratch in medical image analysis. However, the experimentation does not include pathology image sets. Recently, different strategies to combine off-the-shelf features have been investigated in pathology-image-centered transfer learning . Since that study focuses on the comparison of different pre-trained models (e.g. VGG16, ResNet, and DenseNet), it is non-trivial to infer the descriptive power of off-the-shelf representations layer by layer directly from its results. In addition, neither of them discusses the trade-off between the transferred model’s complexity and performance. Our contributions. To answer the above questions, we define a framework to measure the information gain of a particular layer in a pre-trained CNN. Using the performance of a random-weight layer as the comparison baseline, the knowledge gain of that particular layer is quantified by the gap between their classification accuracies. We conduct experimentation using two publicly accessible breast cancer pathology image sets in this study. Based on the experimental results, though middle-layer representations lead to the highest diagnosis rates, we observe that (i) transferred general knowledge mainly resides in early layers, and (ii) the deeper layers of a CNN may bring marginal performance improvement in transfer learning, while the complexity of the transferred model (i.e. the number of parameters) increases greatly. This trade-off between the transferred model’s complexity and transferable performance encourages further investigation of specific metrics and tools to quantify the effectiveness of transfer learning in the future. Note that, though fine-tuning a pretrained model may achieve better performance than the strategy of extracting off-the-shelf representations, the focus of this study is the amount of knowledge that is reusable in the pretrained net. In addition, fine-tuning a model requires a larger data set. Considering data scarcity in current computational pathology research, this study focuses on the investigation of off-the-shelf feature extraction methods only. The rest of this paper is organized as follows.
The proposed method to measure the knowledge gain of a particular layer in transfer learning is presented in the Methodology Section. Experimental results and discussions are presented in the Experimentation Section, followed by conclusions. In deep learning, the incremental learning nature ensures the transition of representations in layers from generality to specificity. Hence, to reuse a model for a new task, one needs to know how much knowledge is reusable and thus to identify the layers that generate general features, or to specify hyper-parameters in the model’s fine-tuning. To investigate the amount of reusable knowledge in transfer learning, we define a framework to measure the knowledge gain in each layer of a pre-trained net. Specifically, as presented in , we first define two base models. Assume that a CNN A has been trained using a database in the original task T A . Its off-the-shelf features are extracted from different layers and passed to a support vector machine (SVM) for a new task T B . Following the identical architecture of A , we define a neural network R with all convolutional and fully connected layers having random weights. In this figure, layer n in the pre-trained model is denoted by A n ; similarly, random-weight layer n in the model R is represented by R n . The labeled color rectangles (e.g. A 1 and R 1 ) represent the weight vectors for that layer, with color differentiating the pretrained and random weights. The vertical transparent bars between weight vectors represent activations at each layer. Then, to evaluate the amount of knowledge transferred by the off-the-shelf representation in layer A n , we build three models based on the two base nets as follows: R 1, n + SVM: numerical features generated by the first n layers in the random-weight model R are passed to an SVM classifier. Its performance constitutes the comparison baseline in this study. A 1, n + SVM: the first n layers of the pre-trained model A are used to compute the off-the-shelf representation. The obtained features are then passed to an SVM. The performance gain over the comparison baseline is the overall knowledge gain transferred by the first n layers in model A . A 1, n −1 R n + SVM: the first n − 1 layers in model A concatenated with the n th layer in model R are used to generate features for the target task T B . The performance difference between A 1, n and A 1, n −1 R n is the information gain obtained by the n th layer of model A . In the following sections of this paper, we name the three models R 1, n , A 1, n , and A 1, n −1 R n for short. In summary, given a pre-trained model A and a target task T B , we measure the quantity of transferred knowledge in A by comparing its performance to net R ’s performance in task T B . We select a net composed of random-weight layers as a comparison baseline for the following reason. It has been reported that the combination of a random-weight convolutional layer, relu layer, pooling layer, and normalization layer might achieve performance similar to learned features . Since a random-weight layer knows nothing about either the original or the target task, its activations deliver knowledge gained without any training. Through comparing the performance of R 1, n and A 1, n , we can tell how much knowledge obtained by the first n layers in model A is transferable to the target task T B . Similarly, the performance difference between A 1, n −1 R n and A 1, n is attributed to the information brought by layer A n . We repeat the comparison for all n ∈ [1, N ].
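To make this framework concrete, the following minimal sketch, assuming PyTorch/torchvision and scikit-learn (the helper names are ours, and the N(0, 0.01) random initialization anticipates the description of model R given later in the Experimentation Section), shows how the three models can be built by cutting AlexNet after the n th learned layer and scoring the resulting features with a linear SVM:

import copy
import torch
from torch import nn
from torchvision.models import alexnet, AlexNet_Weights
from sklearn.svm import LinearSVC

LEARNED = (nn.Conv2d, nn.Linear)  # the 8 "learned" layers of AlexNet

def reinit_random(module):
    """Re-draw conv/linear weights from N(0, 0.01), as for model R."""
    for m in module.modules():
        if isinstance(m, LEARNED):
            nn.init.normal_(m.weight, mean=0.0, std=0.01)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

A = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1).eval()  # pre-trained model A
R = copy.deepcopy(A)                                       # random-weight model R
reinit_random(R)

def hybrid(A, n):
    """A_{1,n-1}R_n: a copy of A whose n-th learned layer is re-randomized
    (layers after n are unused once features are cut at layer n)."""
    H = copy.deepcopy(A)
    layer_n = [m for m in H.modules() if isinstance(m, LEARNED)][n - 1]
    nn.init.normal_(layer_n.weight, mean=0.0, std=0.01)
    if layer_n.bias is not None:
        nn.init.zeros_(layer_n.bias)
    return H

@torch.no_grad()
def features_first_n(model, x, n):
    """Flattened activations right after the n-th learned layer, n in [1, 8]."""
    seen = 0
    stages = list(model.features) + [model.avgpool, nn.Flatten(1)] + list(model.classifier)
    for stage in stages:
        x = stage(x)
        if isinstance(stage, LEARNED):
            seen += 1
            if seen == n:
                break
    return torch.flatten(x, 1)

def accuracy(model, n, x_tr, y_tr, x_te, y_te):
    """Train a linear SVM on layer-n features and report test accuracy (ACC)."""
    clf = LinearSVC().fit(features_first_n(model, x_tr, n).numpy(), y_tr)
    return clf.score(features_first_n(model, x_te, n).numpy(), y_te)

Sweeping n from 1 to 8 and comparing accuracy(R, n, ...), accuracy(A, n, ...), and accuracy(hybrid(A, n), n, ...) then reproduces the layer-wise comparison described above.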
Experimentation. Results and discussion. The experimental results for the pathology image datasets are shown in , where each marker in the figure represents the average accuracy over the validation set across 50 runs. The blue line connects models using the off-the-shelf representation A 1, n extracted from the n th layer. The orange line connects models A 1, n −1 R n , which apply a random-weight filter layer to the A 1, n −1 representation, and the gray solid line corresponds to the performance associated with the random-weight layer models R 1, n . Note that for the IIT image set, the classification accuracy achieved by the state-of-the-art hand-crafted method is marked by the gray dashed line in the left figure for reference. Since no hand-crafted method was specifically designed for the BATCH set, no gray dashed line is shown in the right figure. First, for the binary classification of the IIT image set reported on the left of , transfer learning outperforms the hand-crafted method. Then let us focus on A 1, n and A 1, n −1 R n , which are denoted by the blue and orange lines, respectively. The difference between these two models is whether the weights in the n th layer are pre-trained. The performance gap is mainly attributed to knowledge transferred from natural image classification to pathology image diagnosis. In this experiment on the IIT image set, the most transferable information is delivered by the first and second layers, and increasing the layer index brings only marginal performance improvement after the third layer. The performance difference between the blue line A 1, n and the gray solid line R 1, n reveals the total amount of transferable information accumulated by the first n layers in the pre-trained AlexNet. The performance gap grows slightly wider from layer n = 3 to n = 6. This observation again verifies that the transferred middle layers in the pre-trained model do not introduce more knowledge compared to the random-weight layers R 1, n for 3 ≤ n ≤ 6. The above observations suggest that applying the first two layers of the pretrained AlexNet to IIT image classification is the sweet spot to balance classification performance and model complexity. The BATCH image set poses a problem of 4-category pathology image classification. In the right figure of , we observe a steady increase in diagnosis accuracy from the first layer to the sixth layer. Transferring the fully connected layers in representation layers 7 and 8 degrades the diagnosis performance. Compared to the experiment on the IIT image set, the sweet spot for model transfer (i.e. transferring representation layers 1 to 6) is more obvious. Since the effectiveness of transfer learning depends on the specific image set, this encourages further investigation of specific metrics and tools to quantify the feasibility of transfer learning in the future. Data sets, Deep net architecture, and Evaluation protocol. This experiment quantifies the transferability of the off-the-shelf representation by the performance of pathology image classification. The two public pathology image sets exploited in the study are described as follows. The breast cancer benchmark biopsy dataset collected from clinical samples was published by the Israel Institute of Technology (IIT data set in short) . The image set consists of 361 samples, of which 119 were classified by a pathologist as normal tissue, 102 as carcinoma in situ, and 140 as invasive carcinoma. The samples were generated from patients’ breast tissue biopsy slides, stained with H&E.
They were photographed using a Nikon Coolpix 995 attached to a Nikon Eclipse E600 at a magnification of 40× to produce images with a resolution of about 5 μm per pixel. No calibration was made, and the camera was set to automatic exposure. The images were cropped to a region of interest of 760 × 570 pixels and compressed using lossy JPEG compression. The resulting images were again inspected by a pathologist to ensure that their quality was sufficient for diagnosis. presents examples of pathology images in this breast cancer benchmark. The second dataset is from the ICIAR2018 Grand Challenge on breast cancer histology images (BATCH) . It is composed of 400 high-resolution (2048 × 1536 pixels) annotated H&E-stained images with four balanced classes: normal, benign, in situ carcinoma and invasive carcinoma. All images were digitized under the same acquisition conditions, with a magnification of 200× and a pixel size of 0.42 μm × 0.42 μm . Examples of the ICIAR2018 image set are shown in . Considering that the experimental datasets contain a relatively small number of pathology images, we select AlexNet (which has fewer layers and parameters compared to other deep models), pre-trained on the ImageNet database, as the model A in this experimentation. AlexNet is composed of 25 layers, including 5 convolutional layers and 3 fully connected layers. In this study, the off-the-shelf features are extracted after the 8 learned layers as illustrated in . The random-weight neural network R shares the identical architecture of AlexNet but with filter weights randomly generated following the normal distribution N (0, 0.01), i.e. a Gaussian distribution with zero mean and a standard deviation of 0.01. The image set is divided into a training set and a test set with a ratio of 7:3. Images in the training set are augmented by rotation with an angle randomly drawn from [0, 360) degrees, vertical reflection, and horizontal flip. The augmented training images are fed to the three evaluation models A 1, n , A 1, n −1 R n , and R 1, n , generating three different feature sets for each n ∈ [1, 8]. Then, for each off-the-shelf feature set, a linear SVM is trained and optimized for pathology image diagnosis. In the testing phase, test images are processed by the evaluated models and classified by the corresponding linear SVMs. Finally, the agreement between classification results and annotated image labels is recorded for comparison. This study uses classification accuracy ACC ∈ [0, 1] to measure pathology image diagnosis performance. Since the number of images in each category of both datasets is quite close, the limitation of ACC (i.e. bias by disease prevalence) is mitigated. To obtain a reliable conclusion, we repeat the experiments 50 times for each n ∈ [1, 8] and obtain the final results by averaging all ACC s. In this work, we proposed a framework to quantify the amount of information gained by each pre-trained layer, and we experimentally investigated and reported the transfer efficiency of a deep net’s off-the-shelf representations over different pathology image sets. The experiments suggested that the off-the-shelf features learned from natural images can be reused in computational pathology, but the amount of transferable information heavily depends on the complexity of the pathology images. The observations in this study have practical reference value for pathology-image-centered transfer learning.
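As a closing note on the evaluation protocol above, the repeated 7:3 splitting and averaging over 50 runs can be expressed compactly; a minimal sketch, assuming scikit-learn and reusing the accuracy() helper from the sketch in the Methodology Section (the stratified splitting and the omission of the augmentation step are our simplifications):

import numpy as np
from sklearn.model_selection import train_test_split

def average_accuracy(model, X, y, n, repeats=50):
    """Mean test ACC over `repeats` random 7:3 splits for layer index n."""
    accs = []
    for seed in range(repeats):
        x_tr, x_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=seed)
        # training-set augmentation (random rotation, flips) omitted for brevity
        accs.append(accuracy(model, n, x_tr, y_tr, x_te, y_te))
    return float(np.mean(accs))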
Quantitative phosphoproteomics reveals that nestin is a downstream target of dual leucine zipper kinase during retinoic acid-induced neuronal differentiation of Neuro-2a cells
f382be1c-cd19-47aa-b449-cbf0b980ea9b
11938613
Biochemistry[mh]
Neuronal differentiation is fundamental to the development and regeneration of the nervous system. During this process, newborn neurons cease proliferating and undergo profound morphological changes, which result first in the formation of neurites at the cell surface. Subsequently, these neurites elongate and branch to form axons and dendrites in mature neurons, promoting the assembly of functional neuronal circuits. In keeping with its complexity, neurite outgrowth is controlled by a large number of cell-extrinsic and cell-intrinsic factors. Prominent among these are neurotrophic factors, retinoids, extracellular matrix (ECM)-associated proteins, cell adhesion molecules (CAMs), intracellular protein kinases, small GTPases, cytoskeletal components and transcription factors [ – ]. Dual leucine zipper kinase (DLK) is an attractive candidate regulator of neuronal process outgrowth and maintenance due to its divergent functional properties. This protein acts as a component of the c-Jun N-terminal kinase (JNK) pathway , which, in addition to its role in the transduction of signals from cytokines, growth factors and environmental stress, contributes to brain development and synaptic plasticity . Depending on the origin of the neurons, developmental stage and physiological context , it has been reported that DLK can stimulate axon growth and regeneration as well as axon degeneration and neuronal cell death. Clues regarding the involvement of DLK in the positive regulation of neurite and axon formation are derived from studies showing that inactivation of the murine DLK gene results in abnormal brain development characterized by defects in axon growth and neuronal migration . Consistent with this finding, the loss of DLK was shown to impair axonal growth in optic and sciatic nerve crush injury mouse models , suggesting that DLK is required for axon regeneration in the peripheral nervous system (PNS). Furthermore, DLK was identified as an essential mediator of the pro-regenerative effects of cAMP on axon growth in mouse dorsal root ganglion (DRG) neurons . In addition to the abovementioned reports, genetic deletion of DLK in non-regenerating central nervous system (CNS) neurons significantly attenuated axonal degeneration and neuronal cell death caused by mechanical injury and glutamate-induced excitotoxicity [ , – ] or observed in mouse models of amyotrophic lateral sclerosis and Alzheimer’s disease . Taken together, these data suggest that, in addition to its role in neural development, DLK acts as a sensor of axonal injury in neurons of the PNS and CNS to mediate axonal regeneration and degeneration, respectively. From a mechanistic point of view, little is currently known about how DLK regulates neurite and axon outgrowth. One of the mechanisms potentially contributing to this response is the perturbation of microtubule dynamics since the loss of DLK in mice results in reduced phosphorylation of the microtubule-stabilizing proteins doublecortin, MAP2c and MAP1B , which are known to be involved in neurite outgrowth [ – ]. Knockout studies in mice have also shown that the absence of DLK impairs the injury-induced axonal retrograde transport of phosphorylated c-Jun and STAT3 , two transcription factors that promote axonal regeneration in the PNS . 
Another clue about the mechanisms involved in the regulatory effect of DLK on axonal growth comes from the observation that DLK signaling in differentiated mouse neuroblastoma Neuro-2a cells regulates the expression of many genes known for their roles in neurite formation and axon guidance, including neuropilin 1 and plexin A4 . To further explore how DLK regulates neurite and axon outgrowth, we investigated the effect of DLK loss on the phosphoproteome of retinoic acid (RA)-differentiated Neuro-2a cells using an isobaric tags for relative and absolute quantitation (iTRAQ) proteomics strategy. This approach allowed us to identify 32 phosphopeptides, representing 27 phosphoproteins whose abundance was significantly altered in DLK-depleted cells exposed to RA compared to that in control cells. Among these phosphoproteins, we focused our attention on the intermediate filament (IF) protein nestin , a neural stem/progenitor cell marker not known to be involved in DLK signaling, and its connection to neuritogenesis. Cell culture and treatment Lentivirus production and generation of stable Neuro-2a cell lines Cell lysate preparation and immunoblotting Neurite outgrowth analysis Quantitative phosphoproteomics Parallel reaction monitoring (PRM) analysis RT‒qPCR experiments Plasmids and transfection RNA interference Statistical analysis Mouse Neuro-2a neuroblastoma cells (ATCC ® CCL-131™) were purchased from the American Type Culture Collection (Rockville, MD, USA) and grown in Dulbecco’s modified Eagle’s medium (DMEM) (Wisent Inc., Saint-Jean-Baptiste, Quebec, Canada) supplemented with 10% (v/v) Gibco fetal bovine serum (FBS) (Thermo Fisher Scientific Inc., Waltham, MA, USA), 100 U/ml penicillin (Wisent Inc., Saint-Jean-Baptiste, Quebec, Canada) and 100 µg/ml streptomycin (Wisent Inc., Saint-Jean-Baptiste, Quebec, Canada). When indicated, the cells were differentiated by incubating them in DMEM supplemented with 2% bovine serum albumin (BSA) and 20 µM retinoic acid (Sigma‒Aldrich Canada Ltd., Oakville, Ontario) solubilized in dimethyl sulfoxide (DMSO, Sigma‒Aldrich Canada Ltd., Oakville, Ontario) for 24 h or more. HEK293T cells (ATCC ® CRL-11268™) were purchased from the American Type Culture Collection (Rockville, MD, USA) and grown in DMEM supplemented with 10% (v/v) FBS and antibiotics. For lentivirus production, cells were cotransfected with the envelope protein-expressing vector pMD2.G and the packaging protein expression vector psPAX2 (kindly provided by Dr. Didier Trono, University of Geneva Medical School, Geneva, Switzerland) and with either the empty lentiviral vector pLKO.1 (plasmid 8453, Addgene, Cambridge, MA, USA) or the pLKO.1-based lentiviral mouse DLK shRNA vector (clone TRCN0000022569 [shDLK#1] or clone TRCN0000022573 [shDLK#2], Open Biosystems, Huntsville, AL, USA) using polyethylenimine hydrochloride (#24765, PEI MAX, Polysciences, Inc., Warrington, PA, USA) at a ratio of 1 µg:3 µl. At 72 h posttransfection, the culture medium containing the lentiviruses was harvested, filtered through a 0.45-µm filter, and used for infection. Neuro-2a cells were seeded at a density of 2.0 × 10 6 cells in 100-mm dishes 24 h before infection with viral supernatants supplemented with 8 µg/ml polybrene (Sigma‒Aldrich Canada Ltd., Oakville, Ontario). Two days later, the infected cells were treated with puromycin (2 µg/ml, Sigma‒Aldrich Canada Ltd., Oakville, Ontario) and selected for several days until a stable pool of resistant cells was obtained.
Preparation of cell lysates, SDS‒PAGE and immunoblotting were carried out as described previously . When indicated, cytoskeletal proteins were prepared from cultured cells according to the procedure of Choi et al. and subsequently processed for immunoblotting analysis using either an anti-nestin or anti-vimentin antibody. Immunoreactive bands were detected by enhanced chemiluminescence (Western Lightning Plus-ECL, PerkinElmer, Inc., Waltham, MA, USA) and quantified using a Bio-Rad ChemiDoc imaging system. β-actin or vimentin levels were used for normalization. A list of all primary and secondary antibodies used in this study is available in Supplementary File . Neurite outgrowth was quantified using the NeuroTrack software module of the IncuCyte ® S3 Live-Cell Analysis System (Essen BioScience, Inc., Ann Arbor, MI, USA) on images taken every 4 h with a 10x objective. The segmentation mode was set to [Texture], the minimum cell width (µm) was set to [18.000], and the neurite sensitivity was set to [0.4]. Measurements were conducted on twelve images (4 images per well, 3 wells) for each replicate of a given condition. Each experiment was performed at least in triplicate, and the resulting data were subjected to multiple unpaired t tests for statistical analysis. Control (pLKO.1) and shDLK#2-depleted Neuro-2a cells were incubated under differentiating conditions for 24 h as described above, pelleted and then flash-frozen at − 80 °C. Samples were submitted to the Proteomics Platform at the CHU de Québec Research Centre for protein extraction, trypsin digestion, peptide labeling with iTRAQ multiplex reagents (SCIEX, Concord, Ontario, Canada) and mass spectrometry (MS) analyses, as previously described . In brief, proteins were extracted in lysis buffer (50 mM ammonium bicarbonate, 0.5% sodium deoxycholate, 50 mM dithiothreitol) containing protease inhibitors (Sigma‒Aldrich Canada Ltd., Oakville, Ontario), a PhosSTOP phosphatase inhibitor mixture (Roche Diagnostics, Laval, Quebec) and 1 mM pepstatin. After quantification, 100 mg of protein per condition from two biological replicates was digested overnight with trypsin, followed by labeling with the iTRAQ reagent tags 114, 115, 116 and 117 for 2 h at room temperature in the dark, as suggested by the manufacturer. The labeled peptides were subsequently combined in one tube, cleaned using an HLB cartridge (Waters, Mississauga, Ontario, Canada), subjected to phosphopeptide enrichment on TiO 2 beads and purified on a graphite column (#A32993, High-Select™ TiO 2 Phosphopeptide Enrichment Kit, Thermo Fisher Scientific, Waltham, MA, USA). The phosphopeptide sample was analyzed by nanoLC-MS/MS using a Dionex UltiMate 3000 (RSLCnano) chromatography system (Thermo Fisher Scientific, Waltham, MA, USA) coupled to an Orbitrap Fusion mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) as described previously . The mass spectrometry raw files were processed and quantified using Proteome Discoverer 2.1 software (Thermo Fisher Scientific, Waltham, MA, USA) and searched against the UniProt Mus musculus protein database (58,654 entries) using both the Mascot and Sequest algorithms. The identified peptides and proteins were filtered at a false discovery rate of 1% using the target-decoy strategy. Only proteins identified with at least two unique peptides were considered for quantification. Normalization of the MS data and peptide ratio calculations were performed with Proteome Discoverer software.
The statistical significance of differences between samples was evaluated using one-way ANOVA and z score calculations. Phosphopeptides with a p value < 0.05 and a z score > 2 were considered differentially expressed. The levels of phosphorylated nestin at Ser-894 and Ser-1837 and phosphorylated c-Jun at Ser-63 and Ser-73 were monitored in control and DLK-depleted cells from three independent experiments using PRM, an MS/MS-based method for targeted quantitation . The phosphopeptide abundance was normalized to that of cytochrome c peptides previously spiked into the samples, and subsequent quantification was performed using the Skyline software package . For comparisons of data between experiments and between samples, statistical analysis was performed using an unpaired two-tailed t test. Total RNA was extracted with a Direct-zol RNA MiniPrep Kit (#R2050, Zymo Research, Irvine, CA, USA) in combination with TRIzol (#15596026, Life Technologies, Burlington, Ontario, Canada) following the manufacturer’s protocol. A 30 min on-column DNase treatment was performed before elution according to the manufacturer’s instructions. RNA was quantified on a NanoDrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). Total RNA quality was assessed with an Agilent 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA). Reverse transcription was performed on 2.2 µg of total RNA with Transcriptor reverse transcriptase, random hexamers, dNTPs (Roche Diagnostics, Laval, Quebec, Canada), and 10 units of RNAseOUT (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) following the manufacturer’s protocol in a total volume of 20 µl. All forward and reverse primers were individually resuspended as 20–100 µM stock solutions in Tris-EDTA buffer (Integrated DNA Technologies, Coralville, IA, USA) and diluted to 1 µM in RNase- and DNase-free water (Integrated DNA Technologies, Coralville, IA, USA). qPCR was performed in 10 µl in 96-well plates on a CFX96 thermocycler (Bio-Rad Laboratories, Mississauga, Ontario, Canada) with 5 µl of 2X iTaq Universal SYBR Green Supermix (Bio-Rad Laboratories, Mississauga, Ontario, Canada), 10 ng (3 µl) cDNA, and a 200 nM final concentration (2 µl) of the primer pair solution. The following cycling procedure was used: 3 min at 95 °C; 50 cycles of 15 s at 95 °C, 30 s at 60 °C, and 30 s at 72 °C. Relative expression levels were calculated using the qBASE framework and the housekeeping genes Psmc4 , Pum1 and Txnl4b for mouse cDNA. Primer design and validation were evaluated as described elsewhere . In every qPCR run, a no-template control was performed for each primer pair, and the results were consistently negative. All primer sequences are available in Supplementary File . The expression plasmid for wild-type mouse nestin (#MC202576) was obtained from OriGene (Rockville, MD, USA). Transient transfection of Neuro-2a cells was carried out by using 1 µg of the nestin expression vector per ml of growth medium and PEI MAX (Polysciences, Inc., Warrington, PA, USA). Cells were harvested and processed for Western blotting 48 h after transfection. RNA interference was achieved transiently using small interfering RNA (siRNA) targeting murine nestin (siGENOME set of four, #MQ-057300-01-0002 [D-057300-01, D-057300-03, D-057300-04, D-057300-17]; Dharmacon, Lafayette, CO, USA); nontargeting control siRNAs (ON-TARGETplus Nontargeting Control Pool, #D0018101005, Dharmacon, Lafayette, CO, USA); and Lipofectamine ® RNAiMAX transfection reagent (#13778100, Thermo Fisher Scientific, Waltham, MA, USA).
The assays were performed as suggested by Lipofectamine ® RNAiMAX Reagent Protocol 2013 . Neuro-2a cells were seeded at a concentration of 5.0 × 10 3 cells per well in a 96-well plate. Statistical significance of immunoblot and RT-qPCR data was determined by unpaired two-tailed Student’s t test and expressed as the means ± SEMs. Comparisons of neurite outgrowth rates between two cell groups were made using multiple unpaired t tests. All the statistical analyses and calculations were performed with GraphPad Prism 10 (GraphPad Software, San Diego, CA, USA). A p value of < 0.05 was considered statistically significant. Generation and characterization of a Neuro-2a cell model to investigate the molecular mechanisms of DLK-mediated neurite outgrowth Quantitative phosphoproteomic analysis of DLK-depleted Neuro-2a cells undergoing RA-induced differentiation Changes in nestin phosphopeptide abundance after DLK depletion are associated with altered nestin mRNA and protein expression Neither knockdown nor overexpression of nestin affects neurite outgrowth in Neuro-2a cells In light of the results presented above, we next wondered whether the reduction of nestin levels seen in DLK-depleted cells could account for their failure to form neurites when treated with RA. To test this, we silenced nestin expression with siRNAs in parental Neuro-2a cells and measured neurite outgrowth over a 72 h period after RA addition using the Incucyte live cell analysis system. Nestin knockdown efficiency was confirmed by western blotting and normalized to a negative control scrambled siRNA (Fig. A and B). Despite the efficacy of the four different siRNAs used to reduce nestin expression levels in Neuro-2a cells, none significantly suppressed neurite extension in response to RA compared to the control siRNA (Fig. C). Thus, these results suggest that nestin has no regulatory function in the RA-induced neurite growth of Neuro-2a cells. To complement the knockdown approach, we also performed a rescue experiment in which control and shDLK#2-depleted cells were transfected with wild-type nestin or an EGFP control vector in growth medium (DMEM supplemented with 10% FBS). At 48 h after transfection, cells were either processed for western blot analysis to confirm nestin overexpression (Fig. A) or further cultivated for 72 h in differentiation medium (DMEM supplemented with 2% FBS + 20 µM RA) to measure neurite outgrowth (Fig. B). The results indicate that overexpression of nestin alone was not sufficient to restore neurite outgrowth in DLK-depleted cells undergoing RA-induced neuronal differentiation, suggesting that the function of DLK in neuritogenesis is not mediated by nestin. DLK has been recognized as a key regulator of nervous system development and regeneration due to its ability to modulate axon growth [ , , ]. Consistent with this, we previously used the established mouse neural crest-derived cell line Neuro-2a to show that transient depletion of DLK results in the inhibition of neurite outgrowth . To further investigate how DLK contributes to neuritogenesis, we first generated stable Neuro-2a cell lines in which DLK expression was downregulated by RNA interference. Neuro-2a cells were infected with lentiviral vectors expressing two different short hairpin RNAs (shRNAs) that target mouse DLK mRNA (shDLK#1 and shDLK#2), followed by selection with puromycin for several days and expansion. As a negative control, cells were also infected with an empty lentiviral vector (pLKO.1). 
Knockdown of DLK expression in cells grown in proliferating (DMEM with 10% FBS) or differentiating (DMEM with 2% FBS + 20 µM retinoic acid (RA)) media for 24 h was confirmed by immunoblot analysis. As shown in Fig. A and B, compared with that in control cells, DLK protein expression in cells infected with the shDLK#1 and shDLK#2 constructs was reduced by approximately 50% and 75%, respectively. Parallel immunoblot analyses using antibodies specific for the phosphorylated, activated forms of JNK and c-Jun, two downstream targets of DLK, revealed that DLK depletion significantly impaired the activation of JNK and c-Jun induced by RA (Fig. A and B), suggesting a role for DLK in this response. Interestingly, given the demonstrated contribution of c-Jun to the regulation of its own transcription , we correspondingly observed a significant decrease in its abundance in DLK-depleted cells exposed to RA. Taken together, these data demonstrated that DLK is required for the activation of JNK and c-Jun during the RA-induced differentiation of Neuro-2a cells. To determine whether DLK depletion impairs neurite outgrowth in our shDLK#1 and shDLK#2 Neuro-2a cell lines, we examined their morphological response to differentiation conditions over a 72-hour period using the IncuCyte ® S3 Live-Cell Analysis System for Neuroscience and the IncuCyte ® NeuroTrack software module (Fig. C). In contrast to control cells, which exhibited extensive neurite outgrowth after RA treatment, both the shDLK#1- and shDLK#2-depleted Neuro-2a cell lines consistently displayed fewer and shorter neurites, with the latter showing a greater defect in neurite outgrowth (Fig. C and D). In addition to being consistent with the demonstrated role of DLK in axon formation and elongation , these data confirm that its presence in Neuro-2a cells is critical for neuritogenesis to proceed. Because neurite outgrowth in response to RA treatment was most severely reduced in the shDLK#2-depleted Neuro-2a cells, we performed the following experiments using these cells. Depletion of DLK in Neuro-2a cells alters RA-induced differentiation, as evidenced by deficient neurite outgrowth. This difference is likely due, at least in part, to the decreased phosphorylation and activity of JNK and c-Jun (Fig. ), both of which are involved in neurite outgrowth and axonal regeneration [ – ]. Since the role of DLK in neurite formation has not been fully characterized at the molecular level, we speculated that other effector proteins are involved in this response. To identify such unknown effectors of DLK-dependent neurite outgrowth, we examined the phosphoproteomes of control (pLKO.1) and shDLK#2-depleted Neuro-2a cells treated with RA for 24 h using the workflow shown in Fig. A. Briefly, after protein extraction and digestion, the resulting peptides were labeled with iTRAQ reagents to quantify protein abundance, pooled in equimolar amounts, and subjected to a phosphopeptide purification step using TiO 2 particles. Enriched phosphopeptides were subsequently identified and quantified via mass spectrometry. In total, this protocol allowed us to detect and quantify 4942 phosphopeptides on 2123 unique proteins from two independent biological replicates, each with two technical replicates. A volcano plot of all phosphopeptides quantified in our phosphoproteomic analysis is shown in Fig. B.
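As an aside for readers reproducing this kind of analysis, the selection applied in the next paragraph (at least a 1.5-fold change with p < 0.05) amounts to a simple volcano-plot-style filter; a minimal sketch, assuming the phosphopeptide ratios and p values are available in a pandas DataFrame (column names and the example values are hypothetical, not the study's data):

import numpy as np
import pandas as pd

def call_differential(df, fc_col="ratio_shDLK_vs_ctrl", p_col="p_value",
                      fc_cut=1.5, p_cut=0.05):
    """Label each phosphopeptide 'up', 'down' or 'ns' (not significant)."""
    log2fc = np.log2(df[fc_col])
    sig = df[p_col] < p_cut
    up = sig & (log2fc >= np.log2(fc_cut))
    down = sig & (log2fc <= -np.log2(fc_cut))
    out = df.copy()
    out["call"] = np.where(up, "up", np.where(down, "down", "ns"))
    return out

# Toy table of iTRAQ phosphopeptide ratios (values are illustrative only)
peptides = pd.DataFrame({
    "phosphosite": ["Jun_S63", "Jun_S73", "Nes_S894", "Nes_S1837", "Ncam1_S774"],
    "ratio_shDLK_vs_ctrl": [0.4, 0.5, 0.3, 0.35, 0.6],
    "p_value": [0.001, 0.002, 0.0005, 0.001, 0.01],
})
print(call_differential(peptides)["call"].tolist())  # expected: all 'down'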
Since our goal was to identify potential effectors of DLK, we focused on phosphopeptides that exhibited at least a 1.5-fold change and a p value less than 0.05 between control and DLK-depleted cells. This statistical analysis revealed that, compared with those in the control group, only 32 phosphopeptides, derived from 27 distinct phosphoproteins, were significantly affected by DLK depletion, with 23 downregulated and 9 upregulated phosphopeptides (Table ). Gene Ontology (GO) and functional interaction analyses of the 27 phosphoproteins were performed using the GO and Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) databases, respectively. The results revealed that many of the proteins were associated with nervous system development (Table ) and potentially connected into a network of proteins, including, among others, c-Jun, neural cell adhesion molecule 1 (Ncam1) and nestin (Fig. C). Our phosphoproteomic screen revealed that the phosphorylation of c-Jun, a key regulator of axonal regeneration , at Ser63 and Ser73 was significantly decreased upon DLK depletion, further validating the immunoblot results presented above. These two serine residues are selectively phosphorylated by JNK in response to various stimuli and are involved in c-Jun transcriptional activity . Ser774 of Ncam1 is another site whose phosphorylation level significantly decreased in DLK-depleted cells undergoing RA-induced differentiation compared with that in control cells. Interestingly, phosphorylation at this site has been shown to be required for activation of the cAMP response element-binding protein (CREB) transcription factor and for neurite outgrowth . Finally, an important feature of our phosphoproteomic data was the substantial decrease in the abundance of two phosphopeptides, containing Ser-894 and Ser-1837, of nestin, an IF protein highly expressed in neural progenitor cells . Although these two serine residues are known to be phosphorylated in various cellular contexts [ – ], their role in nestin dynamics and/or function has not been reported. Given the involvement of nestin in stem cell functions, including differentiation and migration , we decided to focus our study on evaluating the relationship between DLK and nestin in more detail. To this end, we first confirmed the validity of our phosphopeptide data in control and DLK-depleted cells by parallel reaction monitoring (PRM) mass spectrometry, a sensitive targeted proteomics method for the selective and accurate quantification of multiple proteins or peptides simultaneously . As illustrated in Fig. D, DLK depletion led to an expected and dramatic decrease in the abundance of phosphopeptides containing Ser63 and Ser73 of c-Jun, demonstrating the efficacy of the PRM procedure. In addition, we found that the phosphorylation levels of Ser-894 and Ser-1837 of nestin were significantly lower after DLK knockdown than after control treatment (Fig. D), highlighting a novel role for DLK in nestin regulation and/or function. Because the observed changes in nestin phosphopeptide abundance in DLK-depleted cells may reflect either decreased phosphorylation levels, decreased protein abundance, or both, we measured nestin mRNA and protein expression in control and DLK-depleted Neuro-2a cells that were exposed to RA for 24 h by RT‒qPCR and Western blot analysis, respectively. As shown in Fig. A, the levels of nestin transcripts markedly increased in control cells undergoing RA-induced neuronal differentiation.
In contrast, in shDLK#1- and shDLK#2-depleted cells, this increase was weaker and not significant when compared to control cells, suggesting a positive regulatory role for DLK in the RA-induced expression of nestin. This assumption was supported by the immunoblot data obtained with cytoskeletal extracts, which showed a lack of responsiveness to RA and even a decrease in nestin abundance in DLK-depleted cells compared to control cells (Fig. B and C). The fact that shDLK#1 does not reduce nestin protein abundance with the same efficacy as shDLK#2 could be due to a difference in specificity between the two shRNAs and/or a difference in residual DLK levels between the two cell lines. Taken together, these results indicate that the decreased abundance of nestin phosphopeptides observed after DLK depletion was due to changes in the overall expression of nestin rather than to reduced phosphorylation. DLK regulates nervous system development and regeneration in many model organisms, such as Drosophila , C. elegans and mice, through positive modulation of neurite outgrowth [ , , , ]. However, exactly how DLK contributes to this process of fundamental importance in neurobiology remains a major unanswered question that we began to address in this study. Since DLK presumably exerts its function in neurite outgrowth by catalyzing the phosphorylation of specific effector proteins, either directly or indirectly, we performed a quantitative phosphoproteomic analysis of control and DLK-depleted Neuro-2a cells undergoing RA-induced neuronal differentiation to identify such target proteins. From this experiment, we observed that, compared with the control treatment, the loss of DLK significantly up- and downregulated the abundance of 9 and 23 phosphopeptides, respectively. Interestingly, almost half of the proteins from which the identified phosphopeptides were derived were enriched in GO biological process terms linked to nervous system development and included proteins with a predicted functional relationship, such as Jun, Nes, Ncam1, Marcks and Marcksl1. Among this list of candidate proteins, we decided to focus on nestin for two particular reasons: first, it exhibited the most dramatic reduction in phosphopeptide abundance in DLK-knockdown cells; second, previous studies have provided some evidence supporting its relevance to the differentiation and migration of stem cells, particularly those of the neural lineage . However, using RT‒qPCR and Western blotting, we were able to show that the observed decrease in nestin phosphorylation following DLK depletion correlated with reduced nestin expression at both the mRNA and protein levels, suggesting a possible role for DLK in the regulation of overall nestin abundance rather than phosphorylation. Although nestin was identified more than 30 years ago , very little is known about its regulation and function. The mammalian nestin gene consists of four exons and three introns [ – ]. An enhancer element residing within the second intron regulates nestin expression in the developing central nervous system (CNS) and during RA-induced neural differentiation of P19 embryonic carcinoma cells . This enhancer contains various types of cis-acting elements, such as hormone-response elements (HREs) and binding sites for the Sox and POU transcription factors, which orchestrate nestin gene expression in neural stem/progenitor cells (NSPCs) both in vitro and in vivo.
Interestingly, Sox transcription factors are highly relevant for the development of the nervous system, and three of them, namely Sox1, Sox2 and Sox3, are differentially expressed in NT2/D1 cells undergoing neural differentiation with RA . Histone acetylation, an epigenetic mechanism, is also known to mediate the activation of nestin transcription during the differentiation of P19 cells along the neural cell lineage . Consistent with what has been previously described in murine embryonic stem cells and P19 cells , treatment with RA, a vitamin A derivative essential for brain development, neuronal differentiation and neurite outgrowth , significantly increased nestin expression in Neuro-2a cells (Fig. ). Like other lipophilic hormones, the action of RA in this cell line likely involves genomic and nongenomic pathways . The genomic effect of RA on the expression of target genes is mediated by interactions with the nuclear receptors retinoic acid receptor (RAR) and retinoid X receptor (RXR), which function as transcription factors, while its nongenomic action involves the activation of kinase cascades, which, in turn, modulate cytoplasmic and nuclear events through the phosphorylation of specific target proteins . To the best of our knowledge, no studies have been performed to elucidate in detail how RA upregulates nestin expression in stem cells committed to a neuronal fate. In the present study, we found that DLK is likely an important component of the mechanism underlying RA-induced nestin expression, as its knockdown in Neuro-2a cells significantly impaired this response at both the mRNA and protein levels (Fig. ). Since variations in mRNA levels are typically attributed to changes in synthesis (transcription) rather than degradation, it is likely that DLK depletion affects nestin gene expression in RA-treated cells by disrupting chromatin remodeling and/or the binding of transcriptional regulators. Although DLK itself is not known to directly recruit the general transcriptional machinery or initiate transcription, its effector, JNK, has this capability. Indeed, JNK interacts with, phosphorylates, and regulates various transcription factors (e.g. c-Jun, ATF2, Sox2, STAT3, RARα, PPARγ, POU2F1, POU5F1) as well as chromatin-associated proteins (e.g. histone H3, bromodomain protein 4) [ , – ]. An effect of DLK depletion on nestin protein synthesis and/or stability is also plausible, given the results of previous work showing that: (i) DLK regulates the synthesis of Down syndrome cell adhesion molecule (Dscam) in Drosophila via the poly(A)-binding protein PABP-C, an activator of protein translation , and (ii) its effector, JNK, enhances the stability of certain proteins, such as c-Jun, via phosphorylation . Thus, our study identifies DLK as a downstream mediator of RA signaling that governs nestin expression during neuronal differentiation (Fig. ). While the physiological relevance of this regulation remains unclear, it suggests a role for DLK in modulating nestin function. As mentioned above, nestin is an IF protein predominantly expressed in NSPCs of the nervous system . Upon their differentiation and concomitant loss of multipotency, nestin expression decreases, and nestin is replaced by other IF proteins, namely, neurofilaments and glial fibrillary acidic protein, in the neuronal and glial cell lineages, respectively . Whether this downregulation of nestin plays an active role in the switch from growth to differentiation of neural cells has not been well studied.
Mice deficient in nestin are viable and develop normally like wild-type mice but exhibit increased neurogenesis in the adult hippocampal dentate gyrus , suggesting a negative role for nestin in the control of this process. In support of this notion, Wilhelmsson et al. also reported that dissociated neurospheres from Nes −/− mice generate more neurons than do those from wild-type animals when grown under differentiation conditions. Interestingly, this response is not a direct consequence of the loss of nestin function in NSPCs but is caused by reduced Notch signaling between astrocytes and neural stem cells . Notch signaling is known to promote proliferation during neurogenesis, whereas its inhibition induces neuronal differentiation both in vitro and in vivo . Thus, nestin appears to antagonize the neuronal differentiation of neural stem cells through its ability to upregulate Notch signaling in astrocytes. In the present study, which used monocultures of Neuro-2a cells, a mouse neural crest-derived cell line capable of differentiating into neurons , we observed that RA simultaneously induces the upregulation of nestin levels and neurite outgrowth in a DLK-dependent manner. These effects of RA appeared to be independent of each other since neither knockdown nor overexpression of nestin in Neuro-2a cells caused detectable changes in RA-induced neurite formation and outgrowth (Figs. and ), which are the hallmarks of differentiated cells. Therefore, we concluded that nestin does not have an obvious direct role in regulating neurite extension in Neuro-2a cells. However, because neurite formation is a complex process involving cytoskeletal rearrangements, plasma membrane extension, second messenger production and posttranslational modification , we cannot exclude the possibility that nestin plays a role in the neuronal differentiation of Neuro-2a cells that is too subtle to be detected in our assays. Further studies will be needed to fully evaluate whether there is a link between DLK signaling, nestin expression and neurite outgrowth. In addition to being expressed in NSPCs, nestin is also transiently expressed in the axons of newborn neurons, where it plays a role in growth cone morphology and the response to the axonal guidance cue semaphorin 3A (Sema3A). Indeed, in the absence of nestin, neurons exhibit larger growth cones and reduced sensitivity to Sema3A , which induces growth cone collapse and axon repulsion in culture . Mechanistically, the effect of nestin on growth cones and Sema3A sensitivity can be attributed, at least in part, to cdk5/p35-mediated phosphorylation of doublecortin (DCX) , a critical regulator of microtubule (MT) structure, stability and function in immature neurons . These findings are of particular importance because the DLK-JNK pathway has been reported to regulate (i) the expression of axon guidance proteins , such as neuropilin 1, which functions as a transmembrane cellular receptor for Sema3A, and (ii) axonogenesis via the phosphorylation of several MT regulators, including DCX . Therefore, through their shared ability to modulate DCX phosphorylation, nestin and DLK are likely capable of regulating the microtubule cytoskeleton, whose organization and remodeling are essential for axonal growth and guidance as well as migration in developing neurons . Thus, the impact of DLK-mediated regulation of nestin expression on microtubule structure and dynamics during neuronal differentiation is an important question that needs to be addressed by further research.
Our study identifies for the first time a link between the regulation of nestin expression and DLK signaling in RA-exposed neuroblastoma Neuro-2a cells. The importance of nestin regulation by DLK for neuronal differentiation remains puzzling, especially because nestin knockdown or overexpression did not disrupt neurite outgrowth in these cells. Further work is needed to define the role of nestin in the organization and integrity of the neuronal cytoskeleton and to determine its contribution, if any, to DLK-mediated neurite outgrowth. Below is the link to the electronic supplementary material. Supplementary Material 1: Raw data (uncropped western blots) for Fig. 1A. Supplementary Material 2: Raw data (uncropped western blots) for Fig. 3B. Supplementary Material 3: Raw data (uncropped western blots) for Fig. 4A. Supplementary Material 4: Raw data (uncropped western blots) for Fig. 5A. Supplementary Material 5: List of primary and secondary antibodies used in this study. Supplementary Material 6: List of primers used for qPCR.
Neuropsychology of epilepsy surgery and theory-based practice: an opinion review
3000a762-236f-4cbb-9565-16df62450fd2
10550352
Physiology[mh]
Epilepsy surgery (ES) constitutes an effective alternative treatment option for patients with refractory focal epilepsy. Patients are characterized as pharmacoresistant according to the International League Against Epilepsy (ILAE) definition of drug-resistant epilepsy; that is, they have been diagnosed with partial (focal) epilepsy and have continued to have seizures for more than 3 years despite adequate treatment with at least 5 antiepileptic drugs (AEDs). Advances in surgical techniques have improved seizure control outcomes and the quality of life of patients. The presurgical workup is vital for selecting patients and ensuring optimal outcomes, as well as for developing a safe and rational surgical strategy. It relies heavily on an interdisciplinary effort encompassing electrophysiological, neuroimaging, neurological, neuropsychological and psychiatric evaluation, as well as the WADA test. In particular, the role of neuropsychological assessment in the context of preoperative monitoring of patients with drug-resistant epilepsy can be summarized in two general contexts: on the one hand, anatomical localization and hemispherical lateralization of neurocognitive deficits associated with the seizures, and on the other, predictive information about the postoperative outcome of memory and cognitive functions, as well as the effectiveness of surgical treatment in seizure control. More specifically, these two contexts establish the neuropsychologist's specific contribution in four major areas: preoperative and postoperative assessment of neurocognitive function; neurocognitive assessment during the WADA test for the purpose of hemispherical lateralization of speech processes and functional asymmetries of memory; the interpretation of neuropsychological performance of patients undergoing functional magnetic resonance imaging (fMRI) protocols; and the intraoperative or perioperative evaluation of cognitive and sensory-motor functions via electrocortical stimulation mapping. Although advanced electrophysiological and neuroimaging methodologies provide high-quality diagnostic data, the information provided by preoperative neuropsychological evaluations may significantly contribute to the characterization of the functional deficit zone, that is, the cortical areas that are functionally abnormal between seizures, a concept at the heart of focal epilepsy syndromes. This is achieved by documenting and confirming anatomical (topical) information obtained from other measures, rejecting, if necessary, some of the earlier localizing scenarios, and by capturing functional aspects that would otherwise remain inaccessible to other diagnostic methods (neuroimaging and electrophysiology).
While in the past neuropsychological evaluation was the gold standard for localizing and lateralizing lesions, with the advent of advanced neuroimaging and electrophysiological methods, contemporary neuropsychological assessment is more likely to deal with other important clinical and rehabilitative issues: neuropsychological differential diagnosis to determine deficit profiles in various neurological (such as dementing conditions) and/or neuropsychiatric (such as psychotic disorders) conditions; informing physicians as to the negative effects of certain types of medications on cognition; establishing performance baselines to detect cognitive decline; planning cognitive rehabilitation protocols on the basis of the patient's neuropsychological pattern of impairment; and providing prognostic information with respect to vocational, social, and functional outcomes after brain injury. According to Beaumont, in spite of considerable advancements, particularly the development of modern medical imaging, clinical neuropsychologists can still make a substantial contribution to the diagnosis and localization of lesions in individual cases. Despite this shift in the state of clinical affairs, the neuropsychology of ES is a domain in which serious concerns are raised as to the effectiveness of both biomedical technology and mainstream neuropsychology in solving diagnostic dilemmas. A strong critique of the frequent risk of reverse inference in fMRI research outlines the dangers of assuming uniformity across contexts and drawing premature, data-driven inferences based on activation patterns alone. "… It is precisely here that the mettle of Luria's contribution to neuropsychology is tested and emerges from the fire as true cultural neuropsychology…." It should not be omitted that, besides neuropsychological monitoring, psychiatric assessment is another major although often neglected issue in the epilepsy presurgical workup, since active psychopathology may affect postoperative seizure freedom; surgery may lead to the development of de novo psychiatric disorders, and in many instances reverse a preexisting psychiatric condition. This calls for a comprehensive psychiatric assessment of ES candidates pre- and postsurgery to minimize the risk of postsurgical psychiatric morbidities and/or poor quality of life. The literature search conducted for the present paper comprised empirical research studies, reviews, books and chapters, along with references included within identified works. The Scopus, PubMed, and Google Scholar databases were used as search engines. Only papers in English were considered. The last search was conducted in February 2022. The search terms were the following: neuropsychology AND epilepsy surgery AND Luria/Neolurian AND theory/model/approach AND cognitive/neuropsychological AND memory/learning AND executive AND function/performance AND presurgical/preoperative AND postsurgical/postoperative AND assessment/evaluation/monitoring AND temporal lobe epilepsy/frontal lobe epilepsy AND neuroimaging. From this search, 7 epilepsy surgery studies, 3 on frontal lobe epilepsy, 11 on temporal lobe epilepsy, 7 on memory and learning, 4 on neuroimaging, 3 on neural networks, and 10 on Luria's work and its implications were fully screened, and related articles were included. A sketch of how such a database query could be scripted is given below.
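To make the search strategy above concrete, the following R sketch scripts the PubMed leg of it. This is purely illustrative: the authors searched Scopus, PubMed, and Google Scholar by hand, and the rentrez package, the exact query string, and the Boolean grouping below are our assumptions, not their recorded workflow.

```r
# Illustrative reproduction of the PubMed search using the rentrez package.
# The query string is an assumed rendering of the search terms listed above.
library(rentrez)

query <- paste(
  '(neuropsychology) AND ("epilepsy surgery")',
  'AND (Luria OR Neolurian OR "syndrome analysis")',
  'AND (presurgical OR preoperative OR postsurgical OR postoperative)',
  'AND (assessment OR evaluation OR monitoring)'
)

hits <- entrez_search(db = "pubmed", term = query, retmax = 100)
hits$count      # total number of matching records
head(hits$ids)  # PubMed IDs to screen for inclusion
```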
With the present review, we summarize the central concepts of the Lurian theory (such as the qualitative syndrome analysis) and their relevance and implications for the modern neuropsychology of epilepsy and ES, underscoring our Lurian-based practice, as well as the barriers to clinical reasoning imposed by quadrant-based views of the brain, or even by atheoretical, statistically based and data-driven approaches. We further advocate a systemic view inspired by Luria's clinical work and theorizing, given its importance to our clinical practice, contrasting it with modular views where appropriate. As already mentioned, one of the main goals of presurgical neuropsychological assessment is to highlight the so-called functional deficit zone, that is, the array of brain areas showing cognitive dysfunction and/or deficits between seizures. This may involve an extensive constellation of regions with either intrahemispheric and/or interhemispheric anatomical distributions. Thus, informing neurosurgeons and neurologists as to the main nodes making up the epileptic network may improve surgical outcomes. Importantly, this is consistent with current epistemological trends suggesting a shift from the concept of the epileptic focus to that of the epileptic network, looking at epilepsy as a neuronal network disease. Moreover, neuropsychological assessment may point to cognitive potentialities or deficiencies that are inconsistent with previous anatomical findings (such as magnetic resonance imaging [MRI] and electroencephalography [EEG]). Such discrepancies are of considerable value since they may reveal, for instance, an unsuspected atypical representation of language. Hence, integrating neuropsychological findings with data from multiple sources offers a more complete picture for individual patients. Evidence has documented the potential of preoperative neuropsychological assessment to aid seizure lateralization when EEG and MRI findings fail to provide clear lateralization data. For instance, percent verbal retention on the Logical Memory subtest of the Wechsler Memory Scale-Revised may aid lateralization in temporal lobe epilepsy (TLE) patients for whom MRI findings are insufficient. Another way to obtain lateralization clues is through the use of discrepancy scores on memory measures, such as the Memory Assessment Scales (MAS) Verbal-Visual Memory discrepancy and the Auditory-Visual Delayed Index difference (a minimal sketch of such scores is given after this paragraph). Postoperative cognitive outcome prediction in epilepsy is far more developed than the utility of neuropsychological test scores in predicting seizure lateralization and localization in individual patients. Regression formulas have also been proposed to enhance the clinical utility of presurgical neuropsychological data, aiming to identify which neuropsychological domains contribute significant incremental variance to the prediction of seizure lateralization. Although elegant statistical applications may be of aid in isolating the cognitive domains that account for seizure lateralization prediction, they can hardly explain essential qualitative differences; that is, the salience of one factor or another apparently leading to the same surface-level affection of a cognitive domain, while implying the breakdown of different mental components with altogether different anatomical distributions. Thus, it is difficult, if not impossible, to interpret such findings in the light of a cause-and-effect relation.
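As an illustration of the discrepancy-score and regression-formula approaches just mentioned, the R sketch below computes a verbal-visual discrepancy index and fits a logistic model predicting side of seizure onset from domain scores. Everything here is hypothetical: the variable names, the simulated data, and the use of z-scores are our assumptions, and no published cut-off or formula is implied.

```r
# Hypothetical verbal-visual memory discrepancy (z-score metric assumed);
# negative values would suggest relatively weaker verbal memory.
memory_discrepancy <- function(verbal_z, visual_z) verbal_z - visual_z
memory_discrepancy(verbal_z = -1.5, visual_z = 0.0)  # -1.5

# A regression formula of the kind described: predicting left-sided onset
# from neuropsychological domain scores (simulated data, illustration only).
set.seed(1)
d <- data.frame(
  left_onset = rbinom(80, 1, 0.5),
  verbal_mem = rnorm(80),
  visual_mem = rnorm(80),
  naming     = rnorm(80)
)
fit <- glm(left_onset ~ verbal_mem + visual_mem + naming,
           family = binomial, data = d)
summary(fit)  # inspect which domains add incremental predictive variance
```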
Correlational data, for example, show that a pair of variables may move together (covary), but covariation is not equivalent to causation. For decades, the neuropsychology of epilepsy has been dominated by modular brain theories (such as the material-specificity theory of memory), which on the one hand have greatly contributed to the understanding of epilepsy-related neurocognitive impairment and its neurobiological determinants, but on the other have posed considerable obstacles to clinical understanding. Instead, Luria looked at higher cortical functions as the result of the dynamic and coordinated work of various brain areas, each playing its own particular and unique role as part of a functional system. In fact, he assumed that mental operations are the coproduct of different functional systems, with their components lying on different sides and sites of the brain. Therefore, neuropsychological deficits are no longer regarded in terms of specific brain region malfunction giving rise to symptoms (taxonomic reduction), but rather through a qualitative syndrome analysis or symptom-complex. To Luria, a syndrome is a particular structure resulting from collections of causally related, multi-level symptoms (primary and secondary), enabling the clinician to access the "internal geometry" of a neuropsychological breakdown and hence advance localization hypotheses. Although a shallow understanding of Luria's methods may regard them as impromptu or even not amenable to quantification, the clinical impact of his theory revolutionized human neuropsychology. Luria produced a theory corpus that, although not exclusively focused on epilepsy, appears to apply to the complete range of neuropathology. His holistic and systemic approach to the brain is consistent with, and to some extent foretold, modern network approaches stemming from neuroimaging. Regarding epilepsy, the logic of cognitive functions organized into complex functional networks, contrary to modular views of the brain, seems to herald current knowledge of epilepsy as a network disease, as well as the concept of the functional deficit zone. These contributions are of capital importance for the neuropsychology of ES, since they provide valuable methods and theories to aid in the localization and lateralization of cognitive deficits (see below). Consequently, they are of great applicability in the context of the preoperative neuropsychological monitoring of patient candidates for ES, where neuropsychologists strive towards the anatomical mapping of neuropsychological deficits to aid surgeons. To some authors, conventional neuropsychological measures can hardly differentiate seizures with different focal onset, with specific associations between neuropsychological deficits and types of epilepsy mostly representing exceptions, while the complex interplay between cognitive performance and epilepsy-related variables may obscure the patients' neuropsychological picture, thus further complicating matters. For instance, the known negative effects of seizures spreading from temporolimbic to frontal brain regions may explain the secondary, systemic frontal-like dysfunction frequently encountered in TLE patients, and vice versa (that is, the problem of overlapping deficits). Nevertheless, dichotomous and quadrant-based views continue to pose barriers to the way clinicians interpret cognitive functions and their supposed direct link to specific areas of the brain.
This may give rise to arbitrary and oversimplified interpretations of neuropsychological performance based on the distinction between frontal and temporal, or even exclusively hippocampal, tasks. This failure to distinguish the topography of seizure onset on the basis of test performance-related impairments further emphasizes the need for a wider view of cerebral functional organization extending beyond mere "localizationism", performance-based interpretation, or preconceived and/or psychometrically based construct validity assumptions of cognitive measures. A quick glance at the historical studies of surgical patients suffering from intractable temporolimbic epilepsies may better clarify the issue above. From that time onward, the material-specificity theory of memory was the leading paradigm in the neuropsychology of focal epilepsies, mesial (M)-TLE in particular, pointing to a clear-cut lateralization of cognitive functions and especially of memory. Yet both clinical practice and neuroimaging research have raised further concerns regarding this theory, proposing, for instance, a dynamic interaction between left and right temporal regions, which engage selectively on the basis of the task demands at hand. The complex inferential reasoning that drives diagnostic hypotheses has progressively been substituted by almost reflexive clinical automatisms, such as those seen when administering standardized cognitive tests and interpreting the data collected, stemming from modular and performance-based neuropsychological approaches. This reflects a rather mechanistic and nontheoretical perspective. Reliance upon diagnostic hypotheses formed on the basis of clinical observation, as well as scientific contextualization of findings and assumptions in the light of a theory, is lacking. As Luria states, an important flaw of standardized tests is their reliance on a preconceived classification of "functions"; thus, they are hardly able to reveal the structure of neurocognitive abnormalities resulting from brain lesions. What is more, such measures are not aimed as much at qualitatively analyzing defects as at evaluating the degree of functional impairment of the patient in terms of performance. Hence, they are unsuited for determining the qualitative features (meaning the structure of the disturbance) and are even less suited for analyzing the pathological components responsible for the impairment. Consequently, implementing standardized measures alone for the diagnosis of circumscribed brain lesions, or in cases of ES (for identification of the functional deficit zone), is not likely to justify the confidence placed in them. Interestingly, the current construct of the functional deficit zone, that is, the part of the brain showing dysfunction interictally, as suggested by objective neurological examination, neuropsychological screening, and functional neuroimaging (including fMRI and fluorodeoxyglucose [FDG] positron emission tomography [PET] scanning), is consistent with Luria's systemic view. This is of great importance for the neuropsychology of epilepsy and, in particular, the preoperative neuropsychological monitoring of patients who are candidates for ES. This is because the neuropsychologist is called upon to delineate the anatomical distribution of the cognitive and neurobehavioral dysfunction which manifests itself during the assessment conducted interictally. The areas identified as participating in the expression of neuropsychological impairment are thus informative of the functional deficit zone.
During the early 1940s, when imaging technologies were not yet available, neuropsychology played a decisive role in helping clinicians localize and/or lateralize cognitive and neurobehavioral impairments caused by brain injuries, thus contributing to clinical decisions in neurology and neurosurgery. The "blind" conditions under which neuropsychological diagnosis took place, imposed by technological gaps, instead of being an obstacle to the development of neuropsychology, were often the very thing that propelled the refinement of clinical acumen and the cultivation of theorizing. The neuropsychological phenomena that arise spontaneously or are elicited in the context of either the initial clinical history taking or test performance provide a starting point for theory-based causal attributions regarding the anatomical distribution of deficits. When interpreting neuropsychological findings to provide a localizing neuropsychological diagnosis, neuropsychologists often tend to rely on a sort of basic judgment strategy, an "availability heuristic", that is, group studies that "profile" cognitive dysfunction and/or neuroimaging studies (correlations between task performance and regional activations). In contrast, psychometric data should be used to construct a clinical history in the light of a theory model to allow neuropsychologists to reach a scientifically based hypothesis. It is imperative to observe the specific conditions under which a given deficit manifests itself; to conduct qualitative analysis; to theoretically contextualize neuropsychological data; and to coevaluate the data through the "filter" of the patient's clinical-demographic, cultural, and idiosyncratic (personalized) background to establish cause-and-effect relationships. The patient's neuropsychological profile should become "clear" not as a result of other investigations, but rather on the basis of a syndrome analysis, a qualitative inquiry aimed at leading directly to the structure of the disturbance by disentangling the factor(s) responsible for the breakdown of functional systems. We suggest that the initial phases of preoperative neuropsychological assessment would be better conducted with the neuropsychologist blind to other sources of evidence, to avoid biases from predetermined assumptions. Memory and its neurobiological foundations can serve to illustrate Luria's concept of brain function. More specifically, the material-specific theory of anterograde memory was developed within, and came to dominate, the neuropsychology of epilepsy, claiming that whenever epilepsy onset originated from the dominant (usually left) temporal lobe, verbal learning and memory would be adversely affected. Instead, in cases of right nondominant temporal lobe seizure onset, learning and memory for nonverbal material (such as designs or faces) would be affected, although the evidence supporting this is weaker than that regarding the left temporal lobe and verbal memory. Furthermore, other cognitive abilities were presumed to remain relatively intact, since seizure onset and focal epileptiform abnormalities were thought to exclusively concern the temporolimbic territories supporting the encoding of new information into memory. The material-specificity memory model served as the main indicator of whether the contralateral, nonepileptic temporal lobe was functionally adequate to sustain memory postoperatively.
Consequently, illusory assumptions of a compartmentalization of cognition (that is, verbal versus nonverbal memory, and mnemonic versus nonmnemonic functions) led scholars to posit, for the former, a clear-cut left-to-right temporal lobe functional distinction and, for the latter, a frontal-to-extrafrontal dichotomy. Such quadrant-based views, although initially an aid to understanding brain function, progressively limited clinical thought, often leading to oversimplified and fragmentary views of higher cortical functions. The implications for presurgical neuropsychological assessment (targeting localization and lateralization of cognitive deficits) were translated into a set of rigid clinical automatisms dictating a "left to right" and "anterior to posterior" diagnosis of cognitive dysfunction, thus restricting finer and patient-tailored qualitative views. The aforementioned assumptions constitute the main reason for the pressing need for more systemic views, such as the syndrome analysis stemming from Lurian theory. It means that the anatomical basis of neuropsychological deficits is no longer identified within specific brain sites of the dominant or nondominant hemisphere responsible for the symptom, but rather through a qualitative neuropsychological analysis of the syndrome or symptom-complex. Syndrome analysis is a process of analytic comparison of neuropsychological evidence accessed through various tests and the determination of general signs among them, ideally defining a unified syndrome. To Luria, a syndrome is conceived as a structure emerging from constellations of causally related, multi-level symptoms (primary and secondary); thus, understanding the different nature of the latter is of crucial importance in determining the source of a neurological breakdown and also in advancing localization hypotheses. Historically, neuropsychological methods of investigation have been linked to three different scientific traditions (North American, Russian [former Soviet Union], and British) that have influenced clinical neuropsychological praxis. While each of them places the emphasis on different constructs, they are hardly independent from each other. The North American approach makes use of selected tests because of their assumed relation to some element of a scheme of psychological abilities. It was originally developed to assist the psychological assessment of individual differences, a topic of great interest to American psychologists. A strong point of such an approach, which adopts test batteries, is its comprehensive coverage of functions, allowing the use of scores emerging from different test results. A weak point, by contrast, is the extreme length of the assessment procedures, their impracticality, and the fact that they are derived from models of normal mental function, and are thus probably not well suited to clinical settings. On the contrary, the Russian approach to neuropsychology has mainly adopted a single case study methodology stemming from behavioral neurology and consistent with the Lurian theory of brain functions organized in terms of functional systems distributed across particular brain regions. It is based on models of abnormal function and aims at the assessment of functional systems (searching for the exact factor(s) whose breakdown leads to the particular disorder), and is therefore more suitable for clinical settings.
Moreover, test selection is case-sensitive, contrasting with the massive use of neuropsychological batteries often administered in an uncritical fashion. Tests were rather informal and overall unstandardized, making little if any use of standardized procedures or normative data (making it difficult to quantify subtle changes, but more sensitive to qualitative aspects of behavior), while successful diagnosis relied heavily on the level of expertise of the clinician. British neuropsychology stands between these two approaches, investigating individual cases with selected standardized measures. The main advantage of this approach is its special focus on individual patients and the parallel use of statistical analysis, thus profiling the disability of each individual patient. However, the investigative process may be fragmentary and unsystematic, often placing heavy reliance on test procedures that happen to be available and are, therefore, possibly insensitive and/or inadequate. It is important to note that all approaches rely to some extent on the clinical expertise and acumen of the neuropsychologist. During the past two decades, the British approach has been the dominant international style, mostly because of the growing influence of cognitive neuropsychology. Diagnostic approaches in neuropsychology are likely to vary in accordance with their theoretical framework of reference. In our opinion, there is a general state of confusion regarding the very meaning and implications of neuropsychological theories and the methodological foundations of neuropsychology. Many believe that making use of normative data, validated methods of investigation, or evidence-based knowledge may suffice, or even remedy the lack of theory. The current state of affairs in clinical neuropsychology, dominated by statistically based, data-guided, or even atheoretical views, may contribute to this confusion. Instead, Luria proposed a conceptually driven clinical praxis in neuropsychology. He was inclined to strongly reject an approach in which "auxiliary aids become the central method and in which their role as servant to clinical thought is reversed so that clinical reasoning follows instrumental data as a slave follows its master". Evidently, Luria's perspective was strongly influenced by eminent physiological theories (such as the Pavlovian law of strength), his Marxist views, and the implementation of historical materialism in the study of the brain to determine cause-and-effect relations. The Lurian approach has been widely criticized for the lack of a direct evaluation of the tests and of controlled scoring, with the latter mainly based upon the level of expertise of the clinician rather than on normative data. Standardized measures have proven more valid and reliable in cases of focal, well-defined brain insults, but have limited capacity when assessing patients suffering from ill-defined brain impairments (as in many epilepsy patients) and severe or diffuse neurobehavioral disorders. Moreover, two different patients who have been assigned the same degree of severity on standard neurocognitive measures may nevertheless manifest qualitatively different neuropsychological deficits. In effect, well-known cognitive constructs may become blurred due to the weak correlation between paradigms with multiple cognitive factors and performance in different cognitive tasks.
By contrast, Luria places the emphasis on the process (not on test achievement), allowing for error analysis and the identification of shared breakdowns (factors) across tasks, to enable clinicians to identify the impaired network. Therefore, it proves more apt for patients with diffuse neuropsychological impairments resulting from network disconnection, as in epilepsy, given the disconnecting, interfering effects of seizures on functional networks. Various attempts have been undertaken to standardize Luria's tests, with the work of Glozman being the most representative, while many neuropsychologists have started to consider rather flexible assessment protocols, choosing both quantitative and qualitative tests from different batteries and targeting (according to Luria) a synthetic evaluation of cognitive functions that makes it possible to dissect them into several functional domains. The dichotomy between quantitative and qualitative neuropsychology is an illusory one, since the qualitative approach is nothing more than an approach based on productive, positive symptomatology, which is susceptible to quantification. A necessary compromise between the psychometric tradition and the qualitative approach is expected from modern neuropsychological procedures offering both symptom elicitation and quantification. According to Chelune, evidence-based clinical neuropsychological practice (EBCNP) refers to methods for enhancing interactions between research and practice (clinical outcomes research), that is, "… the scientific method applied at the level of the individual hypothesis formation, literature review, study design and data collection, analysis, and conclusion …". Thus, it is a mistake to confound EBCNP with the adoption of a particular neuropsychological theory, whatever it may be. Evidence-based clinical neuropsychological practice refers to a method rather than a theoretical framework. When it comes to epilepsy, evidence-based neuropsychological knowledge (largely based on group studies) indicates, for instance, a lack or even absence of differentiation in terms of cognitive performance between focal epilepsy syndromes (such as frontal lobe epilepsy [FLE] versus TLE), despite some significant differences found in individual subtests, leaving a knowledge gap in the preoperative neuropsychological diagnosis of the functional deficit zone. We believe that a theory-guided approach (such as that proposed by Luria for syndrome analysis) could bridge this gap. Consequently, as we have done elsewhere, we strongly propose that contextualized, theory-based interpretation should also be adopted by neuropsychologists working in the domain of ES. In recent times, the functional systems theory of cerebral organization has been in part refined and further elaborated by neo-Lurians who continued this school of thought in neuropsychology, as well as by others who inadvertently arrived at the same conclusions regarding brain function. Goldberg et al. proposed a Lurian-based theoretical extension to account for hemispheric specialization in terms of novelty/familiarity distinctions, as opposed to material-specificity theory (that is, spatial versus linguistic or global versus local processing). Goldberg's gradiental theory constitutes a rejection of the modular view, proposing a distributed-emergent principle to account for cortical functional organization. He proposes a spectral anatomical distribution of heteromodal association areas, called gradients, housing functionally similar cognitive processes in anatomical proximity within the association cortices.
Another theory of cortical representation elaborating on Luria's functional systems theory was formulated by Fuster. In his view, cognitive functions are built from reiterant units called cognits, with cognition understood as information exchange within and between cognits. The crucial element here is that different cognits (neural networks) have an identifiable cortical distribution, but cognitive functions do not, since the latter are mediated by shared and/or similar circuits. There are also studies replicating early theoretical notions such as those put forward by Elkhonon Goldberg (one of Luria's students), who hypothesized differences in hemispheric specialization arising as a function of the practical acquisition and use of descriptive systems (such as language and other symbolic systems). Accordingly, early bilingualism induces plastic changes not only within language networks, but also in those mediating executive functions (for review, see ). Similar long-term neuroplastic changes have been reported in cases of chronic practice of focused tasks, such as musical performance and mental calculation on an abacus. It has also been put forward that the physical environment seems to determine one's initial preference, and its later development, for adopting either ventral-stream ("what" system) or dorsal-stream ("where" system) processing styles when conducting visual inspection. Aversi-Ferreira and colleagues reviewed Luria's studies on the neuropsychology of the temporal lobes and compared these with more recent data. The authors showed that Luria's theory constitutes the basis for neuropsychological studies today, while new imaging data on the temporal lobe in relation to epilepsy and hippocampus analysis are consistent with Luria's views. Current neuroimaging research points to the failure of strict modularity assumptions in more complex integrative tasks, frequently employing analyses of functional connectivity that present strong analogies with, and actually formalize, Lurian concepts. A genome-wide focus on neuropsychological phenotypes essentially offers a modern translation of Luria's cultural neuropsychology. The present paper, through the lens of the Lurian approach, points to the necessity for the neuropsychology of ES to bypass quadrant-based views of brain function in favor of systemic ones, such as syndrome analysis. Luria's idiographic neuropsychological approach, through thorough and insightful observations and theory-driven methods, may be of considerable aid in the context of the preoperative workup of ES candidates, particularly when lateralization and localization of seizures are required. We encourage those working in the area of the neuropsychology of epilepsy to become familiar with theories of brain function and to implement this knowledge to support their clinical decisions. In conclusion, a neuropsychological approach to epilepsy consistent with the Lurian view of higher cortical functions organized into functional systems may deepen the understanding of neurocognitive impairments in patients with epilepsy and surpass the limits imposed by quadrant-based approaches to the brain. Luria's theoretical constructs heralded modern-era neuropsychology, moving from lesion studies to a more network-based view of the brain, paralleling the shift in epileptology from the concept of the epileptic focus to that of the epileptic network.
The major advantages of the Lurian approach are its well-structured and well-defined integrative cognitive components and its ability to provide valuable information concerning their interactions. Integration between quantitative and qualitative assessment methods still remains an open issue in clinical neuropsychology, the solution of which would further enhance the potential of Luria's neuropsychological examination paradigms. Manifold evidence concerning hemispheric specialization, neocortical functional organization, functional reorganization and neuroplasticity, visual perception processing styles, bilingualism and neuroplasticity, the long-lasting neuroplastic changes induced by focused activities, cognitive phenotypes, and the neuropsychological diagnostics of the temporal lobes legitimizes current applications of Lurian theory, which penetrate the whole neuropathology spectrum and account for the whole panorama of neurocognition.
Associations between dental care approachability and dental attendance among women pregnant with an Indigenous child: a cross-sectional study
3eb73996-179d-4f28-b845-c1172137de60
8446472
Dental[mh]
Indigenous Australians are those who identify as Aboriginal and/or Torres Strait Islander. Indigenous Australians are the first residents of Australia and have unique traditions, cultures, and languages. However, Indigenous Australians have poorer oral health and experience more oral health conditions compared with non-Indigenous Australians. In the National Survey of Adult Oral Health, Indigenous adults had significantly higher levels of untreated caries and missing teeth, and a lower prevalence of filled teeth, compared with non-Indigenous Australians. Pregnant women are more affected by oral conditions due to hormonal and immunologic changes during pregnancy. Oral conditions during pregnancy may have adverse effects on both maternal and child health outcomes. Approximately 30–47% of pregnant women experience gingivitis during pregnancy, which leads to pain, uncontrollable bleeding, and difficulties in eating. Periodontal disease, which stems from gingivitis, may increase the risk of adverse maternal outcomes, such as systemic inflammation and preeclampsia. Maternal experience of dental caries during pregnancy is a contributing factor for early childhood caries (ECC) among children. ECC affects children's eating, speech and self-confidence. Experience of dental disease in childhood increases the risk of experiencing dental disease in later life. To maintain good oral health, annual dental check-ups are essential. A higher proportion of non-Indigenous Australians attend a dentist once or more a year (60.3%) compared with Indigenous Australians (15–38%). The low utilisation of dental care among Indigenous Australians may arise from a range of barriers to accessing timely, culturally appropriate and affordable dental care. Specifically, factors affecting the dental care uptake of Indigenous Australians include the cultural appropriateness of the service, remoteness of residency, cost, and experience of discrimination in previous receipt of health services. This study is based on the theory developed by Levesque and colleagues on accessing health services (see Additional file: Figure S1). We were especially interested in one of its domains, namely the effect of service approachability on the utilisation of dental care and oral health outcomes. Service approachability corresponds to one's ability to perceive the need for care. Levesque and colleagues noted that the approachability of a health service should enable people who need the service to identify that the service exists, can be reached, and will have an impact on their health. On the demand side, service approachability is related to one's ability to perceive the need for a service, which is shaped by one's health literacy, health beliefs, and expectations of and trust in the service. Individual health literacy is related to one's ability to access, understand and apply health information. Here, health literacy refers to service-related health literacy, including knowledge of system navigation, which is essential because it is the first step in interacting with the health care environment. In the context of oral health, a belief in good oral health is important to ensure dental services are utilised; such beliefs can lead to behaviour changes, for example, leading one to seek health care in the first instance. Meanwhile, parental oral health beliefs also have impacts on offspring and can predict the uptake of dental care as children grow older.
Finally, trust and expectations of the health service play indispensable roles in accessing health care, especially in the Indigenous Australian context. Due to long-lasting legacies of colonial practices and laws, including cultural discrimination, lack of trust is one of the primary causes of poor uptake of health services among Indigenous Australians. Other researchers have, in recent years, applied the model developed by Levesque when working with marginalised populations, such as refugees and Indigenous people. However, all prior research used the model to structure reviews, not to examine the inherent associations of each of the domains with a given service utilisation and its health outcome. The aim of this study was therefore innovative: to apply the Levesque model to examine the relationship between dental service approachability on the demand side, dental care attendance, and oral health outcomes. The hypothesis was that participants with a perceived need for dental care would have a higher uptake of dental care, resulting in better oral health outcomes. Study design Setting and recruitment Ethics and consent Development of service-oriented model of accessing dental care Variables Statistical analysis This is a cross-sectional study; data were collected during 2011–2012 as part of the baseline data collection of an early childhood caries intervention among Indigenous children in South Australia. Participants were recruited through the antenatal clinics of hospitals and Aboriginal Community Controlled Health Organisations in South Australia, in both metropolitan and regional locations. During data collection, researchers and staff members in health settings would approach potential participants and provide information about the study before obtaining written, informed consent. Convenience sampling was adopted, and the criteria were that participants (1) were pregnant residents of South Australia, and (2) were expecting an Aboriginal Australian baby or babies. The questionnaire included items used in the Australian national dental survey and had been pilot tested and discussed by members of Indigenous communities and Aboriginal Maternal Infant Care workers. The questionnaire covered 23 domains with a wide range of oral health information, including dental health, dental behaviours, dental cost, dental perceptions, and oral health beliefs. Items used in the study concerned the oral health outcome, the service utilisation outcome, and factors related to dental care approachability. Recruitment commenced February 1, 2011 and ended on May 30, 2012. Participants who did not answer all questions were excluded from the study. Ethical approval was received from the University of Adelaide Human Research Ethics Committee, the Aboriginal Health Council of South Australia, the Government of South Australia, the Human Research Ethics Committee of Child, Youth and Women's Health Service, and the Human Research Ethics Committees of participating Adelaide hospitals. The study was guided by an Indigenous reference group, World Health Organisation guidelines on ethical conduct in health research on Indigenous people, and local Indigenous South Australian principles. The study additionally used the Ethical Conduct in Aboriginal and Torres Strait Islander Health Research guidelines to obtain consent. Parents provided signed informed consent for participants under the age of 16 years. Participants received a $50 voucher as reimbursement for their time after completing the questionnaires.
Levesque and colleagues developed a model that summarizes the key determinants of accessing a health service from a multi-level perspective (see supplementary Figure S1). Five dimensions may be used to evaluate the accessibility of a given health service: (1) approachability; (2) acceptability; (3) availability and accommodation; (4) affordability; and (5) appropriateness. The five dimensions reflect linear stages of a patient's journey, from the initial perception of requiring health care to the final accomplishment of receiving the required treatment. These five dimensions correspond to five consumer abilities: (1) ability to perceive; (2) ability to seek; (3) ability to reach; (4) ability to pay; and (5) ability to engage. Factors that impact service approachability (ability to perceive) are health literacy, health beliefs, and expectations of and trust in the service. To better fit the oral health context, we modified the model developed by Levesque (Fig. ). Each factor was replaced by an oral health-related, dental service-oriented determinant. These included oral health service-related health literacy, which encompassed literacy about dental system navigation, oral health beliefs about visiting a dentist, trust and expectations of a dental service, and perceived need for dental care. According to the modified model, the stages are linear, from the perception of needing care to the accomplishment of the dental patient journey. Three factors impact the ability to perceive: dental service health literacy, oral health beliefs, and trust and expectations of the dental provider. With the addition of perceived need for dental care, four dimensions were thus measured in this study (see Additional file: Table S1 and Figure S2). Dental service-related health literacy was measured by the patient's ability to navigate the dental health system, using the questions "If you needed to visit the dentist tomorrow, would you know what to do?" and "Do you think there would be a dentist able to see you tomorrow?" (response options 'yes' or 'no'). Dental health belief was measured by the question "How important do you rate the following in relation to teeth?", with 'visiting the dentist' being the domain of interest. Response options included: 'extremely important', 'fairly important', 'doesn't matter much', 'not very important' and 'not at all important'. To facilitate analysis, responses to this question were dichotomised into 'extremely/fairly important' and 'doesn't matter much/not very/not at all important' (a minimal sketch of such recoding is given after this paragraph). Trust and expectation toward dental care was measured by the statement "I believe going to the dentist would help my teeth", with responses re-dichotomised from five ordered response options into 'strongly agree' and 'not strongly agree/somewhat agree or doesn't matter much'. The oral health outcome was measured by self-reported gum disease during pregnancy. The dental care utilisation outcome was measured by the time of the last dental visit (≤ 12 months or > 12 months). Participants' perception of need was measured by asking: "Do you think you need to see a dentist?" (response options 'yes' or 'no'). Socio-demographic variables included age, employment status, education level and geographic remoteness of residential location.
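The dichotomisations above are simple recodes. The R sketch below illustrates one way to implement them; the column names and example values are our assumptions, since the study's actual coding scripts were not published.

```r
# Minimal sketch of the dichotomisations described above (base R).
# Column names and example values are assumed, not taken from the study data.
d <- data.frame(
  visit_importance   = c("extremely important", "doesn't matter much"),
  dentist_helps      = c("strongly agree", "somewhat agree"),
  months_since_visit = c(6, 30)
)

d$belief_bin <- ifelse(
  d$visit_importance %in% c("extremely important", "fairly important"),
  "extremely/fairly important",
  "doesn't matter much/not very/not at all important"
)
d$trust_bin      <- ifelse(d$dentist_helps == "strongly agree",
                           "strongly agree", "not strongly agree")
d$last_visit_bin <- ifelse(d$months_since_visit <= 12,
                           "<= 12 months", "> 12 months")
d
```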
The definition of the remoteness of the residential location followed the Accessibility/Remoteness Index of Australia (ARIA+), with locations subsequently categorised as 'remote' or 'non-remote'. Age was recorded in years and was re-categorised as '34 years or less' and 'over 34 years' to facilitate multivariable analysis. Education was categorised as 'no schooling', 'primary/secondary education', and 'tertiary education'. Employment status was categorised as 'employed' or 'receiving Centrelink payment/other'. Age was presented as means and standard deviations (SD). All other variables were categorical and thus presented as frequencies and percentages. Chi-square tests were used in bivariate analysis, while adjusted prevalence ratios and their corresponding 95% confidence intervals were estimated in multivariable analysis using generalised Poisson regression models, because the distribution of the outcome variables was under-dispersed (a minimal sketch of this modelling step is given after this paragraph). Factors related to service approachability (perceived need for dental care, oral health belief, dental service health literacy, and trust and expectation toward the service) were tested in bivariate analysis, with variables showing statistically significant differences (P < 0.05) entered into multivariable models. Confounders were adjusted for, and included remoteness of residency, education level and employment status. Age was an additional confounder in the model involving dental attendance (Table ), because studies have found that pregnant women aged above 35 years were more likely to access dental care, and we assumed that such women would be more experienced in health care seeking. Additional analyses were performed to examine the association between dental attendance and self-reported gum disease. Annual dental visit was entered into the regression model as an exposure for self-reported gum disease, adjusted for remoteness, education and employment status. A 2-sided α level of 0.05 was used to define statistical significance in all analyses. Data were analysed using R version 3.6.1. A total of 554 eligible participants were invited to take part in the study, with 427 (77%) providing consent and completing the questionnaire. The average age of participants was 25.3 ± 5.8 years (Table ). Most participants reported having received primary/secondary education (70.3%), and just over a quarter (28.1%) had received tertiary education. Approximately 15% of participants were in current employment. The majority of participants lived in non-remote locations (86.9%). As shown in Table , 42.7% of participants reported having experienced gum disease. Almost all participants (96.9%) reported having seen a dentist in their lifetime. A total of 85.8% of participants perceived a need for dental care; of these, more than one-third (35.7%) had visited a dentist in the previous 12 months. Most participants (88.3%) perceived visiting a dentist to be very important; of these, 36.7% had attended for dental care in the last 12 months (Table ). Approximately one quarter of participants (22.0%) reported not knowing what to do if they needed to visit the dentist the next day. Just over 60% (60.2%) of participants reported that they did not think a dentist would be able to see them the next day. Most participants (86.2%) strongly agreed that going to the dentist would help their teeth.
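The modelling step described above can be sketched in R as follows. The authors fitted generalised Poisson models; since base R has no generalised Poisson family, this sketch substitutes an ordinary Poisson GLM with robust (sandwich) standard errors, a common way to estimate adjusted prevalence ratios for binary outcomes. The variable names and the simulated data are our assumptions, not the study dataset.

```r
# Sketch of estimating adjusted prevalence ratios (APRs) for a binary outcome
# via a Poisson GLM with robust standard errors (an approximation to the
# generalised Poisson models reported by the authors).
library(sandwich)  # robust variance estimators
library(lmtest)    # coeftest()

# Simulated stand-in data (427 participants, binary variables)
set.seed(42)
n <- 427
d <- data.frame(
  gum_disease    = rbinom(n, 1, 0.43),
  perceived_need = rbinom(n, 1, 0.86),
  remote         = rbinom(n, 1, 0.13),
  tertiary_edu   = rbinom(n, 1, 0.28),
  employed       = rbinom(n, 1, 0.15)
)

fit <- glm(gum_disease ~ perceived_need + remote + tertiary_edu + employed,
           family = poisson(link = "log"), data = d)

# Robust (HC0) standard errors, then exponentiate to obtain APRs with 95% CIs
ct  <- coeftest(fit, vcov = vcovHC(fit, type = "HC0"))
apr <- exp(cbind(APR = coef(fit),
                 lwr = coef(fit) - 1.96 * ct[, "Std. Error"],
                 upr = coef(fit) + 1.96 * ct[, "Std. Error"]))
round(apr, 2)
```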
Table shows the unadjusted and adjusted estimates from the multivariable analysis, with visiting a dentist less than 12 months ago as the outcome and the service approachability factors as exposures. With the exception of perceived need for dental care, all factors related to service approachability were associated with dental service utilisation in the unadjusted analysis. After adjusting for remoteness of residency, education level, employment status and age, only one factor remained statistically significant: not knowing what to do if a visit to the dentist were needed the next day (APR = 0.86, 95%CI 0.74–0.99). Table shows the analysis of service approachability factors with self-reported gum disease as the outcome variable. After adjusting for remoteness, employment status and education level, participants who perceived a need for dental care had a 24% higher prevalence of self-reported gum disease (APR = 1.24, 95%CI 1.06–1.45). No statistically significant associations were observed between dental attendance in the last 12 months and self-reported gum disease (Table ). Our research sought to examine the relationship between dental service approachability, dental care attendance and self-reported gum disease among women pregnant with an Aboriginal child in South Australia, using a modified version of the Levesque model. The findings showed that service-related factors were associated with dental attendance, which was consistent with the modified model. However, little association was observed between service-related factors and self-reported gum disease, and no association was observed between dental attendance and self-reported gum disease. The results highlight the limitations of using the modified model in a quantitative study such as the one implemented. Participants' ability to navigate the dental care system was the key demand-side service approachability factor in utilising dental services. Previous research has also reported that Indigenous persons with higher skills in navigating dental services show higher compliance with long-term dental treatment. In this context, a person's language capacity and knowledge of the location and contact details of dental clinics played an important role in the accomplishment of the dental care journey. However, due to the complexity of the Australian health system, many Indigenous and other socially or culturally marginalised groups struggle to adequately navigate the health system. For instance, some public dental services are only available to children, young adults, or government health care/concession card holders. In many states, Aboriginal people may need to contact their local Aboriginal community-controlled health service first to access dental care. For some Indigenous Australians, mainstream dental services (private or public) may be the only options for dental care, because dental services may not be provided by their local Aboriginal community-controlled health service. Barriers to successfully navigating mainstream dental services include language and cultural barriers. Empirical research has demonstrated that awareness of dental service availability may be limited for some Indigenous people, and also for midwives. Making dental service systems more navigable is crucial, given the negative impacts of poor dental care utilisation on oral health outcomes. For Indigenous Australians to better navigate dental care systems, information in accessible formats is required.
According to Robards, navigation systems that integrate technologies such as social media may help Indigenous Australians better understand, connect with, and engage with dental care. Such interventions should be based in the Indigenous community setting. During the COVID-19 crisis, Summer noted that the application of social media channels shared through trustworthy local community networks enabled fast and effective health information sharing. Although dental care service provision may not always be available in the Aboriginal Community Controlled Health Organisation setting, such organisations had an indispensable role in the dissemination of health information and a leading role in enhancing communication among Indigenous communities. Based on these findings, future navigation programs that embrace social media and related technology might be more effective and cost-effective for women pregnant with an Indigenous child. Such services should be easy to contact, to make health system navigation more approachable and understandable. Navigation support is just one example of improving system navigation. The health navigator program, targeting both Indigenous and non-Indigenous Australians, has increasingly been used among patients with chronic disease who have difficulties in accessing health services, and has improved the process of care. There is evidence that Indigenous Liaison Officers can improve the engagement of Aboriginal families with health professionals and may have a positive impact on diagnosis. There are some Aboriginal Liaison Programs for dental care; although no study has specifically examined their effect on the uptake of dental care, such projects have proven successful in referral to mainstream dental services. There has also been a Midwifery-Initiated Oral Health Dental Service program, in which midwives provided oral assessments and referrals to local, free public dental care for pregnant women. The referral letter included the contact details of a dentist and a checklist recording the date of visit, number of visits and treatment, to better navigate participants to the service and help them complete the recommended course of treatment. The program was effective and promising in improving the uptake of dental care and may be a beneficial pathway to implement among Indigenous populations. One of our study hypotheses was that participants who had a perceived need for dental care would have better oral health than their counterparts with no perceived need; however, this did not prove to be the case (APR = 1.24, 95% CI 1.06–1.45). This suggests that the motivations or reasons for participants' perceived need for dental care were mixed and complicated. For example, the last dental visit may have been for a check-up (a good oral health-related behaviour) or because of a problem. Thus, "uptake of dental care within one year" was found to be a weak indicator of oral health outcome. "Reason for the last visit" would have been a more reliable indicator of the phenomenon we were aiming to measure. Our study also makes it possible to compare Aboriginal with non-Aboriginal pregnant women. A higher demand for dental care among Aboriginal women during pregnancy can be observed in this study (85.8%) compared with non-Aboriginal pregnant women in the United States (50.1%). The rate of dental visits within the previous 12 months in this study (35.7%) was very close to that of a comparable study in New Zealand (37.7%).
It remains lower, however, than the rate among non-Aboriginal pregnant women (45.6%), and figures from high-income countries are higher still, with approximately 70–92% of pregnant women reported to have accessed dental care in the last 12 months. This was the first study to describe dental uptake and service approachability, and to test their association with self-reported gum disease, among women pregnant with an Indigenous child in Australia. Most studies focus on the provision of transport and the reduction of cost to improve the accessibility of health care for Aboriginal people. Little empirical research has focused on the phases before actual interaction with the health care service, including participant motivation and capability to contact the service. This study reiterates the importance of system navigation in accessing dental care, which might also suggest directions for improving the accessibility of primary health care for Indigenous people. Indications for future research include: (1) Dental health literacy concerning how to navigate dental systems is important to the access outcome of dental care; navigation support could be integrated with technologies, based on local community networks, and developed in collaboration with midwives. (2) The effect that the approachability of a given service has on health outcomes (dental attendance) requires further study; motivations for visiting a dentist differ, and this has an impact on oral health outcomes. Previous uptake of dental care was not a good indicator of oral health. There is a need for better analytical approaches, and for different measures of exposures and outcomes, to better illustrate the impact that utilisation of dental care has on oral health outcomes. The study limitations were that social desirability bias may have influenced participant responses and that no clinical data were collected to ascertain objective measures of dental health. This study was cross-sectional in design, meaning that no causal inferences could be made. Although dental care was recognised as being important among our sample of women pregnant with an Indigenous child in South Australia, dental utilisation was low. The ability to successfully navigate the dental care system was associated with regular dental attendance. Perceived need for dental care was associated with self-reported gum disease. No association was observed between the remaining service-approachability-related factors and self-reported gum disease. Additional file 1. Appendixes. Appendix A: Figure S1: A conceptual framework of access to health care [25]. Appendix B: Table S1: Questionnaire of factors impacting on dental service approachability. Appendix C: Figure S2: Variables corresponding to service-oriented model of accessing dental care.
Integrated proteome and metabolome analysis of the penultimate internodes revealing remobilization efficiency in contrasting barley genotypes under water stress
b4248496-876e-485c-89cc-a7023b4a45a3
11569251
Biochemistry[mh]
Water scarcity poses a formidable challenge to agricultural productivity on a global scale, with Iran, located in an arid and semi-arid region, being particularly vulnerable to the adverse effects of drought . During the spring season, the phenomenon of increased soil evaporation coincides with critical growth stages of cereal crops, notably during the seed-filling phase . This water stress significantly undermines the grain yield of barley ( Hordeum vulgare L.), recognized as the fourth most important crop worldwide . The consequences of drought extend to a marked reduction in essential yield components, including spikes per unit area, grains per spike, and seed weight . Under such conditions, cereals exhibit a remarkable ability to remobilize an additional 10–45% of dry matter from vegetative tissues to grains, a process that is critical for sustaining yields in water-limited environments – . The greatest amount of dry matter (55%) is produced by the lower internodes, with the penultimate internode and the peduncle contributing 27% and 18%, respectively . Furthermore, the efficiency of dry matter remobilization is greater in the upper internodes, specifically the penultimate, than in the peduncle and lower intermediate internodes, as indicated by the higher ratio of mobilized dry matter to maximum weight , . In response to water stress, cereals demonstrate a propensity to accumulate water-soluble carbohydrates (WSCs) within their stems, which plays a vital role in supporting grain filling rates when photosynthesis is hindered , . Research has shown that drought-tolerant genotypes tend to store greater amounts of carbohydrates in their stems compared to their more susceptible counterparts . The remobilization of assimilates from the penultimate internode is directly correlated with grain yield, highlighting the importance of optimizing this process for yield stability in stress-resilient genotypes . Proteomics and metabolomics serve as practical methods for developing drought-tolerant genotypes by identifying key proteins involved in stem reserve remobilization . These methodologies enable researchers to analyze plant responses to environmental stressors, thereby facilitating the breeding of barley genotypes with enhanced drought resistance , . Previous proteomic analyses have underscored the importance of proteins associated with photosynthesis and signalling pathways, which are integral to the remobilization of stem reserves to developing seeds, particularly in crops like rice and wheat subjected to drought conditions . Moreover, recent proteomic studies have revealed the pivotal role of specific enzymes that mediate the regulatory mechanisms between sink growth and the mobilization of non-structural carbohydrates (NSCs) in rice. This knowledge is instrumental in guiding the selection of traits that can be incorporated into promising barley genotypes . In response to water stress, plants activate a myriad of metabolic pathways, including sugar metabolism, the tricarboxylic acid cycle, glycolysis, and the oxidative pentose phosphate pathway. These pathways are essential for maintaining cellular functions and overall plant health during periods of water deficit . Furthermore, enzymes involved in the detoxification of reactive oxygen species (ROS), such as those in the ascorbate–glutathione cycle, play a vital role in protecting plant tissues from oxidative damage . 
The current investigation centers on the pivotal proteins and metabolites that govern the remobilization of barley stem reserves under water stress conditions. An analysis was conducted across three contrasting barley genotypes—‘Yousef’, ‘Morocco’, and ‘PBYT17’—which are characterized as drought-tolerant, drought-susceptible, and semi-drought-tolerant, respectively. This study elucidates the variances in remobilization characteristics among these phenotypically diverse genotypes. Our integrated shotgun proteomics and metabolomics analysis has revealed a complex network of protein interactions that bolster antioxidant capacity during water stress, thereby playing a crucial role in the remobilization of stem reserves. A comprehensive examination of these findings underscores the critical functions that metabolic pathways serve in enhancing the resilience of barley in the face of water scarcity, reinforcing the imperative for continued research and development in this vital area of agriculture. The physiological responses induced by water stress in contrasting barley genotypes Changes in remobilization characteristics induced under water stress conditions To determine the effect of water stress at 21 and 28 DAA on the barley genotypes, we measured remobilization characteristics. In response to water stress, the Yousef genotype exhibited a significant increase in the maximum specific weight of penultimate internodes, while the Morocco and PBYT17 genotypes showed no significant differences (Fig. a). Similarly, the Yousef and PBYT17 genotypes demonstrated a considerable rise in the remobilization rate (mg plant−1) of penultimate internodes under water stress, whereas the Morocco genotype did not show any significant changes (Fig. b). As shown in Fig. c, the Yousef genotype showed a significant increase in the remobilization efficiency (%) of penultimate internodes under water stress, while the PBYT17 genotype did not show any significant difference. In Fig. , Yousef demonstrates remarkable proficiency in remobilizing material to the grain, exhibiting adaptability under both control and water stress conditions. Conversely, the Morocco genotype exhibited a notably low remobilization efficiency and maximum specific weight for the penultimate internode under water stress. Figure a shows the differences in the specific weight of penultimate internodes for the Yousef, Morocco, and PBYT17 genotypes over 100 days, during which the genotypes displayed varying patterns. Under both control and water stress conditions, Yousef showed a higher specific weight in penultimate internodes. The changes in the specific weight of penultimate internodes occurred at 21 DAA; the specific weight then reached its highest point at 28 DAA and decreased subsequently over time (see Fig. a). Under both control and water stress conditions, Yousef and PBYT17 showed a significantly higher specific weight in penultimate internodes at 21 DAA, whereas the specific weight of penultimate internodes in the Morocco genotype did not change significantly at 21 DAA (Fig. b). Figure a depicts the relative water content (RWC) and grain yield of the Yousef (tolerant), Morocco (susceptible), and PBYT17 (semi-tolerant) genotypes under both control and water stress conditions. Under water stress, RWC significantly decreased (P < 0.01) in the Morocco, PBYT17, and Yousef genotypes by 17%, 20%, and 10%, respectively, compared to their control counterparts (Fig. a).
During periods of stress, the PBYT17 and Morocco genotypes experienced decreases in grain yield of 48% and 45%, respectively, as shown in Fig. b. Yousef maintained its grain yield under water stress conditions, suggesting an inherent efficiency in remobilizing resources that mitigates yield loss. In contrast, the Morocco genotype showed a significant yield reduction relative to the control condition. Identification of differentially abundant proteins (DAPs) Multivariate data analysis of DAPs Correlations between proteins and remobilization efficiency traits To determine how protein expression patterns changed at 21 and 28 DAA under control and water stress conditions, we performed a correlation analysis between the abundance of proteins and the remobilization efficiency of penultimate internodes. At 21 DAA, our findings revealed a total of 90 positive correlations alongside 115 negative correlations under control conditions, emphasizing the complex interplay between protein expression and remobilization efficiency (Fig. a and Supplementary data S1). The shift observed under water stress, with positive correlations increasing to 98 and negative correlations decreasing to 110, underscores the adaptability of protein expression in response to water availability (Fig. b and Supplementary data S1). The proteins most strongly correlated with remobilization efficiency under water stress included trehalose-6-phosphate synthase (T6P synthase; F4ZC54), galactokinase (A0A8I6XZ94), two sucrose synthases (P31922 and F2DRP6), inositol-1-monophosphatase (A0A8I6XZT0), xylose isomerase (Q40082), an aldose 1-epimerase family protein (F2D0E0), starch synthase (A0A8I6WY77), ribokinase (F2CTM5), and glucan synthase-like 3 (C6GFB3), all involved in carbohydrate metabolism; two proteins involved in the fermentation pathway, alcohol dehydrogenase 1 (ADH1; Q94L27) and ADH4 (C5NM76); and a 6-phosphogluconate dehydrogenase, decarboxylating protein (A0A8I6Z047) involved in the OPPP. Notably, the levels of galactokinase, sucrose synthase, inositol-1-monophosphatase, and the 6-phosphogluconate dehydrogenase decarboxylating protein decreased in abundance in the susceptible genotype but were unchanged in the tolerant one at 21 DAA under water stress (Fig. b). In contrast, the two ADHs decreased in abundance in the susceptible genotype but increased in the tolerant one at 21 DAA under water stress (Fig. b). As we progressed to the 28 DAA time point, the results under control conditions were even more pronounced, with 121 positive and only 40 negative correlations identified (Fig. c and Supplementary data S1). Conversely, under water-stressed conditions at 28 DAA, we observed a shift, with 96 positive and 99 negative correlations (Fig. d and Supplementary data S1). Such findings underscore the necessity of further investigations into the molecular mechanisms governing these traits, as they hold significant implications for improving crop resilience and yield under adverse environmental conditions.
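For illustration, this correlation screening (implemented with the cor.test function in R, as described in the “Correlation analysis” section of the Methods) can be sketched as follows. This is a minimal example; the NSAF matrix and efficiency values here are hypothetical placeholders, not the study data:

set.seed(1)
n_samples <- 9  # e.g. 3 genotypes x 3 biological replicates
nsaf <- matrix(rlnorm(n_samples * 50), nrow = n_samples,
               dimnames = list(NULL, paste0("protein_", 1:50)))
remob_eff <- runif(n_samples, 20, 60)  # remobilization efficiency (%)
# Pearson correlation of each protein's abundance with remobilization efficiency
cor_table <- do.call(rbind, lapply(colnames(nsaf), function(p) {
  ct <- cor.test(nsaf[, p], remob_eff, method = "pearson")
  data.frame(protein = p, r = unname(ct$estimate), p_value = ct$p.value)
}))
# Count significant positive and negative correlations (cf. Fig. a-d)
sig <- subset(cor_table, p_value < 0.05)
table(sign(sig$r))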
Proteins correlated with remobilization efficiency under water stress at 28 DAA included thioredoxin-like 5 (F2DTW0), ferredoxin-thioredoxin reductase catalytic chain (A0A8I6Y6J4), rubredoxin (F2CR00), a peroxidase superfamily protein (A0A8I6YEN8), glutathione peroxidase (Q9SME6), a putative superoxide dismutase copper chaperone (F2E710), and NADH cytochrome b5 reductase (F2E5P4), all redox-related proteins; five ribosomal proteins, namely 30S ribosomal protein S19 (A0A191TDL1 and A0A8I6WXR4), a putative 60S ribosomal protein L14 (A0A218LNP1), ribosomal protein L19 (F2D9X8), 60S ribosomal protein L32 (S4Z0T4 and A0A8F4MA56), and 60S ribosomal protein L7a (F2DE13); and two proteins involved in RNA regulation, Remorin (A0A8I6YBT6) and the GAGA-binding transcriptional activator (F2E947). Notably, NADH cytochrome b5 reductase levels significantly increased in the Yousef genotype at 28 DAA but remained unchanged in the Morocco and PBYT17 genotypes, suggesting better ROS elimination and cellular homeostasis in the Yousef genotype (Supplementary data S1). Five ribosomal proteins increased in expression at 28 DAA in Morocco under water stress, whereas ribosomal proteins remained unchanged at 28 DAA in the Yousef genotype under water stress (Fig. b,d). The increased abundance of ribosomal proteins in the Morocco genotype points to a prioritization of protein biosynthesis, an energy-consuming process, since all proteins require energy in the form of sugars during their biosynthesis. Furthermore, Remorin remained unchanged in the Yousef genotype but decreased in the Morocco genotype at 28 DAA under water stress. Comparative proteomic analysis was used to identify changes in protein profiles in contrasting barley genotypes differing in water stress response and remobilization characteristics. Our results showed 309 proteins increased in abundance (Fig. a) and 450 proteins decreased in abundance (Fig. c) at 21 DAA. Eleven stress-responsive shared proteins were found in the Morocco and PBYT17 genotypes, and seven proteins were shared between the Morocco and Yousef genotypes (Fig. a). At 21 DAA, 8 and 16 stress-responsive shared proteins were observed between the Yousef and PBYT17 genotypes and between pairs involving the Morocco genotype, respectively (Fig. c). Only one common protein was identified among the three genotypes (Fig. c). At 21 DAA, the PBYT17 and Morocco genotypes showed the highest numbers of uniquely differentially abundant proteins, whereas the tolerant genotype displayed the lowest number. Functional classification was then performed to explore the molecular functions of the proteins significantly changed in response to water stress in each genotype. The major functional categories of proteins increased in abundance in the Yousef genotype were as follows: 14.2% in carbohydrate metabolism, 5% in lipid metabolism, 8% in amino acid metabolism, 14.2% each in stress and protein metabolism (synthesis and degradation), 5% each in transport, miscellaneous functions (misc), RNA, and DNA, and 2% each in secondary metabolism and cell wall degradation. In contrast, in the Morocco genotype, proteins increased in abundance were mainly implicated in carbohydrate metabolism (6%), protein metabolism (40.46%), amino acid metabolism (8%), redox (4%), misc (8%), signalling (10%), nucleotide metabolism (4%), RNA (6%), and DNA (6%).
In the PBYT17 genotype, the largest portion of the proteins increased in abundance was involved in carbohydrate metabolism (7.5%), mitochondrial electron transport/ATP synthesis (28%), lipid metabolism (2.8%), amino acid metabolism (3.7%), hormone metabolism (4.7%), stress (4.7%), mitochondrial 2-oxoglutarate/malate carrier protein (misc) (8.4%), protein metabolism (synthesis and degradation) (25%), and cell (5.6%) (Fig. b). The majority of proteins that decreased in abundance in the Yousef genotype at 21 DAA comprised 7.5% in carbohydrate metabolism, 6.7% in lipid metabolism, 8.4% in amino acid metabolism, 3.3% each in RNA and DNA metabolism, 20.3% in protein metabolism (synthesis and degradation), 5% each in the cell wall (degradation and modification), signalling, and cell, and 15.2% in misc. In the Morocco genotype, the proteins decreased in abundance were functionally categorized in carbohydrate metabolism (18.7%), protein metabolism (18%), lipid metabolism (3.7%), amino acid metabolism (2.2%), secondary metabolism (4.5%), redox (3%), stress (7.5%), signalling (2.2%), RNA (10.5%), cell (3.7%), and misc (7.5%). In the PBYT17 genotype, the largest portion of the proteins decreased in abundance was functionally distributed in carbohydrate metabolism (9.7%), cell wall (3.2%), lipid metabolism (3.2%), amino acid metabolism (4.3%), secondary metabolism (2%), stress (3.2%), redox (6.5%), nucleotide metabolism (2.1%), DNA (1%), RNA (1%), protein metabolism (synthesis and degradation) (10.8%), signalling (5.4%), and misc (4.3%) (Fig. d). At 28 DAA, 420 increased and 401 decreased proteins were observed in response to stress (Fig. a,c), of which 21 shared proteins were identified both between the Yousef and Morocco genotypes and between the Morocco and PBYT17 genotypes (Fig. a). Only two common proteins were shared among the three genotypes (Fig. c). Furthermore, 19 and 12 shared proteins were identified between the Yousef and PBYT17 genotypes and between the Yousef and Morocco genotypes, respectively (Fig. c). The highest numbers of DAPs were again observed in the PBYT17 and Morocco genotypes, while Yousef showed the lowest number of DAPs. The functional distribution of proteins increased in abundance in the Yousef genotype at 28 DAA was as follows: carbohydrate metabolism (6.8%), protein metabolism (20.45%), signalling (6.8%), cell wall (9%), lipid metabolism (13.6%), amino acid metabolism (6.8%), misc (6.8%), and RNA (6.8%). In the Morocco genotype, proteins increased in abundance were implicated in carbohydrate metabolism (8.89%), protein metabolism (24.47%), amino acid metabolism (4.49%), secondary metabolism (5.6%), and RNA (6.67%). In the PBYT17 genotype, proteins increased in abundance were categorized in carbohydrate metabolism (14.5%), lipid metabolism (6.45%), stress (4.82%), protein metabolism (11.2%), amino acid metabolism (8.06%), and RNA (6.45%) (Fig. b). The majority of proteins that decreased in abundance at 28 DAA in the Yousef genotype were involved in carbohydrate metabolism (18.27%), stress (4.3%), misc (7.5%), RNA (8.6%), protein metabolism (30.1%), signalling (3.2%), and lipid metabolism (3.2%). In the Morocco genotype, proteins decreased in abundance were implicated in carbohydrate metabolism (5.8%), lipid metabolism (5.8%), stress (8.8%), misc (14.7%), protein metabolism (20.5%), nucleotide metabolism (5.8%), and RNA (8.8%). In the PBYT17 genotype, proteins decreased in abundance were categorized in carbohydrate metabolism (8.13%), lipid metabolism (4.06%), stress (4.8%), redox (3.25%), misc (6.5%), protein metabolism (20.3%), transport (5.6%), signalling (6.5%), and RNA (10.5%) (Fig. d).
As a supervised method, partial least squares discriminant analysis (PLS-DA) was used to discriminate DAPs that could serve as biomarkers for sample classification at 21 and 28 DAA under water stress, based on the variable importance in projection (VIP) score (Figs. and ). We observed 335, 457, and 466 proteins with VIP greater than 1 in Yousef, PBYT17, and Morocco, respectively, at 21 DAA (Fig. a, c, e). These proteins were functionally implicated mainly in protein metabolism and carbohydrate metabolism. Interestingly, 26.69%, 36.30%, and 37.04% of the DAPs (in Yousef, PBYT17, and Morocco, respectively) were among the proteins with VIP > 1 at 21 DAA. At 28 DAA, 389, 442, and 348 proteins with VIP greater than 1 were found in Yousef, PBYT17, and Morocco, respectively (Fig. a, c, e). Functional group analysis indicated that these proteins were mostly classified in protein metabolism, carbohydrate metabolism, and particularly RNA metabolism. The results showed that 32.99%, 37.48%, and 29.51% of the DAPs were among the proteins with VIP > 1 at 28 DAA. Differentially abundant metabolites (DAMs) We investigated the differential accumulation patterns of the identified metabolites across the genotypes at 21 and 28 days after anthesis (DAA). Seventeen metabolites were detected using a triple quadrupole mass spectrometer connected to an ion chromatography system. They comprised three nucleotides and sugar nucleotides, five glycolytic metabolites and sugar phosphates, and eight organic acids. The metabolites most increased were TCA cycle intermediates in the Yousef genotype at 21 and 28 DAA compared to the two other genotypes (Supplementary data S1). The profiles of metabolites in the penultimate internodes of the three genotypes under water stress at 21 and 28 DAA are given in Supplementary data S1. In particular, the levels of UDP-glucose and glucose-6-phosphate (G6P) showed a significant rise in the Yousef genotype at 28 DAA, highlighting their role as key precursors for trehalose-6-phosphate (T6P) production. Additionally, glucuronic acid (GlcA) levels increased in the Yousef and PBYT17 genotypes at 21 and 28 DAA, indicating its vital role in various metabolic pathways, including the ascorbic acid pathway and the pentose phosphate pathway (PPP) (Fig. d). Moreover, the consistent elevation of G6P in both Yousef and PBYT17 at 21 and 28 DAA emphasizes the importance of this metabolite in the oxidative branch of the pentose phosphate pathway, which is crucial for antioxidant defense mechanisms. This pathway, which utilizes G6P as a foundational substrate, is instrumental in reactive oxygen species (ROS) scavenging, thereby providing essential protection to cells against oxidative stress. The remobilization of assimilates stored prior to flowering is of paramount importance for sustaining seed yield, particularly under abiotic stress conditions. Our investigation has revealed a significant enhancement in the efficiency of assimilate mobilization within the penultimate internode of the drought-tolerant genotype when subjected to stress. This observation underscores the superior capability of drought-tolerant plants to remobilize assimilates to developing seeds compared to their drought-susceptible counterparts. In our comprehensive proteomic analysis, facilitated by tandem mass spectrometry, along with meticulous metabolomic profiling of the penultimate internode, we have elucidated the mechanisms governing carbon reserve translocation and its regulatory pathways.
These mechanisms are likely instrumental in optimizing the source capacity of the plant, thereby enhancing its ability to allocate resources effectively during the grain-filling stage. The intricate interplay of these factors not only contributes to the understanding of plant physiology under water stress but also opens avenues for innovative agricultural practices aimed at mitigating the impacts of drought. The implications of these findings are discussed in greater detail in the subsequent sections. Proteins involved in carbohydrate metabolism associated with remobilization efficiency Proteins involved in redox metabolism associated with remobilization efficiency Proteins involved in protein synthesis induced in the susceptible genotype Proteins involved in RNA regulation (TFs) maintained in the tolerant genotype The efficiency and extent of assimilate remobilization are crucial factors influencing grain yield during the grain-filling stage . For the pathways of carbohydrate metabolism, the abundance of galactokinase, sucrose synthase, inositol-1-monophosphatase, and 6-phosphogluconate dehydrogenase decreased in the susceptible genotype but was unchanged in the tolerant one, which may support substantial carbon reserve remobilization at 21 DAA. In addition, the abundance of ADH1 and ADH4, two proteins essential for sugar metabolism through glycolysis and ethanol fermentation, was higher in the tolerant genotype than in the susceptible one. Sucrose synthase cleaves Suc, the main transported form of assimilates, into fructose and UDP-glucose; the production of these two forms, UDP-Glc and Fru, is thought to be the first step in the sucrose-to-starch conversion. It is well documented that the activity of sucrose synthase is linked to sink strength during seed development, mediating higher remobilization efficiency. Galactokinase catalyzes the ATP-dependent phosphorylation of galactose to galactose-1-phosphate, which is then utilized for ascorbic acid (AsA) biosynthesis. In addition, the inositol monophosphatase enzyme is required for the synthesis of myo-inositol, the breakdown of inositol (1,4,5)-trisphosphate, and the synthesis of L-galactose, a precursor of AsA. Ascorbic acid is an oxidant scavenger that plays major roles in ROS signalling, preventing ROS accumulation and protecting osmoprotectants (e.g., fructan) during grain filling under abiotic stress , . TCA and OPPP proteins remained unchanged in Yousef but decreased in Morocco, indicating the tolerant genotype’s capacity to efficiently remobilize stem reserves at 21 DAA under water stress. The OPPP is a significant NADPH producer, which is vital for inhibiting ROS through 6PGDHs . Activated during seed filling, the TCA cycle and OPPP contribute to glucose and malate production, which are necessary for osmotic regulation and ROS removal under water stress . Thus, the regulation of starch and sucrose metabolism, including the pentose and gluconate pathways, is vital for their production . At 28 days after anthesis (DAA), UDP-glucose and G6P levels increased in the Yousef genotype as essential precursors for trehalose-6-phosphate (T6P) production, which is consistent with the correlation between T6P synthase and remobilization efficiency and indicates the role of T6P as a signalling molecule linking carbon availability to seed development , . NADH cytochrome b5 reductase plays a critical role in the cellular mechanisms of drought tolerance, as is particularly evident in its increased abundance in the tolerant genotype at 28 DAA.
This elevation in NADH cytochrome b5 reductase levels signifies an enhanced capacity for reactive oxygen species (ROS) elimination and maintenance of cellular homeostasis, a process vital for plant resilience under water stress conditions . In contrast, the susceptible and semi-tolerant genotypes exhibited no significant changes in this enzyme’s abundance, highlighting a potential deficiency in their ability to manage oxidative stress. A comprehensive proteomic study of three wheat genotypes with differing drought tolerance levels further elucidates the significance of ROS scavenging mechanisms. The drought-tolerant genotypes, Excalibur and RAC875, demonstrated a remarkable ability to mitigate oxidative stress, as evidenced by elevated levels of superoxide dismutase (SOD) and catalase (CAT). These findings suggest that the enhanced scavenging capacity of these genotypes correlates with a reduced production of ROS, attributed to lower levels of proteins associated with photosynthesis and the Calvin cycle. Collectively, these observations reinforce the notion that effective ROS management is a hallmark of drought tolerance . In the Morocco genotype, five small and large ribosomal subunit proteins exhibited heightened expression levels, which correlate negatively with the efficiency of remobilization. Ribosome synthesis is a sugar-intensive process necessary for plant survival under water stress . In the context of water stress, our findings suggest that the Morocco genotype prioritizes the synthesis of ribosomal proteins, thereby utilizing available sugars for their biosynthesis. The correlation between ribosomes, protein synthesis, and growth, and the role of ribosomes and mRNA translation in growth regulation, have been explored recently . This strategic allocation of resources is essential for ensuring survival; however, it may concurrently compromise the genotype’s ability to efficiently remobilize nutrients from the stem. Proteins that regulate RNA play a pivotal role in various biological processes, particularly transcription factors (TFs) that interact with DNA to modulate gene expression. These proteins decreased in abundance in the susceptible genotype while being maintained in the tolerant one. Among these, the GAGA-binding transcriptional activator is noteworthy for its involvement in seed size and the response to sugar starvation . Functional analyses have demonstrated that cis-elements, such as GAGA elements, are integral in remobilizing signals associated with sugar deprivation. The presence of these elements is critical for the transcriptional activation mediated by BPC transcription factors, underscoring their importance in plant developmental processes. In addition to GAGA-binding activators, other transcription factors, such as ARFs, are essential in mediating auxin signalling pathways that influence seed filling via trehalose-6-phosphate (T6P). The ability of plant genotypes to adapt to water stress is exemplified by the tolerant genotype, which effectively safeguards its photosynthetic resources. This genotype maintains a delicate equilibrium between diminished photosynthetic rates and the remobilization of stored carbohydrates through mechanisms involving UDP-glucose (UDP-Glc) and glucose-6-phosphate (G6P). Notably, UDP-Glc has been proposed as a crucial intracellular mediator of reactive oxygen species (ROS) signalling, further illustrating the complexity of water stress responses .
The significance of Remorin, whose abundance was maintained in the drought-tolerant genotype, is also emphasized for grain setting, particularly in the context of the gsd1-dominant mutant phenotype, which suffers from impaired seed filling due to disrupted carbohydrate transfer mechanisms. Together, these findings highlight the critical functions of transcription factors in maintaining plant resilience under water stress conditions and their integral role in the regulation of seed-filling processes. Recent investigations into the integration of proteomics and metabolomics have significantly advanced our understanding of stem reserve remobilization mechanisms in barley, particularly under conditions of varying water availability. The data presented in Fig. illustrate that drought-tolerant genotypes possess an exceptional capability to sustain stable concentrations of critical proteins that play essential roles in carbohydrate metabolism. Among these proteins, sucrose synthase, inositol monophosphatase 3, and galactokinase are particularly noteworthy for their involvement in carbon remobilization, a process that is crucial for maintaining growth during episodes of water scarcity. In stark contrast, drought-susceptible genotypes exhibit a marked reduction in these proteins, highlighting a significant biochemical divergence in their response to water stress. Moreover, the study reveals an increase in the availability of sugar phosphates and sugar nucleotides such as UDP-glucose and G6P, which correlates with T6P synthesis and remobilization efficiency. This correlation underscores the role of signalling molecules in carbon availability, which is instrumental in regulating plant growth and seed size development. The results indicate that while drought-tolerant genotypes prioritize energy conservation to facilitate the remobilization of assimilates from the stem to the grain, drought-susceptible genotypes predominantly concentrate on protein biosynthesis to sustain growth. The increased remobilization efficiency and specific metabolic responses observed in the Yousef genotype under water stress highlight its potential as a candidate for breeding drought-resistant barley varieties. This distinction not only emphasizes the superior resource utilization efficiency of drought-tolerant genotypes but also illustrates their ability to maximize seed development even under challenging conditions. Furthermore, the capacity of drought-tolerant genotypes to protect proteins and metabolites from damage caused by reactive oxygen species (ROS) is a critical aspect of their adaptive strategies to drought. These findings not only pave the way for enhancing drought tolerance in barley but also enrich our comprehension of the complex molecular responses that underpin drought resilience in plants. Plant materials Drought stress treatment Protein extraction and shotgun proteomics LC–MS/MS analysis Bioinformatics analysis of proteomic data and statistical tests Functional annotation and gene ontology analysis of the DAPs and metabolites Targeted metabolomic profiling Correlation analysis The Yousef and PBYT17 genotypes of spring barley were obtained from the Seed and Plant Improvement Institute (SPII, Karaj, Iran). The Morocco genotype seeds were donated by Dr. Adnan Al-Yassin from the International Center for Agricultural Research in the Dry Areas (ICARDA). Furthermore, the use of plants in the present study complies with all necessary international, national, and institutional guidelines and legislation.
All plant material is owned by the authors, and no permission is required for their use. Under field trial conditions, an experiment was conducted on the three barley genotypes using a randomized complete block design. The seeds were planted in two well-drained plots, measuring 9 m × 1.7 m, with loamy soil. The plants were spaced 5 cm apart within a row, with 10 cm between rows. This factorial experiment was carried out with two treatments (control and water stress) and three replications. After implementing drip irrigation, the plants were cultivated under control conditions (maintained at soil field capacity (FC)) until the stage of anthesis. On 28 DAA, water stress was induced by depriving the plants of water until the soil water content reached 40% FC (equivalent to −1.5 MPa soil water potential). A digital moisture instrument was used to measure soil moisture levels in both the control and drought-stressed plots, and water was added accordingly to maintain the desired soil water content. At 21 and 28 DAA, three replicates of the penultimate internodes from the main stem were collected for proteome analysis. These samples were quickly frozen in liquid nitrogen and stored at −80 °C until protein and metabolite extraction. Grain yield was determined from the weight of 15 detached spikes after incubation at 70 °C for 72 h. The RWC and stem remobilization characteristics were assessed following the methods described previously . The specific weight of each internode was obtained by dividing its weight by its length at 7, 14, 21, and 28 DAA and at the physiological maturity stage. The remobilization rate was determined by subtracting the internode weight at physiological maturity (Wmat) from the maximum internode weight (Wmax) recorded during the measurements at 7, 14, 21, and 28 DAA. The remobilization efficiency of each internode was estimated as ((Wmax − Wmat)/Wmax) × 100 . To obtain the protein fraction, trichloroacetic acid (10% v/v) and β-mercaptoethanol (0.07% v/v) were used to extract the freeze-dried (100 mg) and powdered penultimate internode samples, following the method described elsewhere . The mixture was then subjected to centrifugation at 16,000 g for 30 min, and the resulting pellet was washed twice with 1.5 mL of acetone before being re-centrifuged at 16,000 g for 30 min. The pellet was lyophilized using vacuum centrifugation and then re-suspended in 400 μL of 50 mM Tris–HCl (pH 8.8) with 2% w/v SDS. Following this treatment, the solution was subjected to methanol-chloroform extraction, following previous methods . The resulting pellet was dried at room temperature and dissolved in 80 μL of 8 M urea and 100 mM Tris–HCl (pH 8.8). The protein concentration was measured using the 2D-Protein Quant Kit (Bio-Rad, Hercules, USA), and the samples were then subjected to SDS-PAGE (80 μg of sample per well of a Bio-Rad 10% Tris–HCl precast gel, run at 150 V for 1 h). Staining with colloidal Coomassie Blue, following a previously described method , was used to visualize the proteins. Each lane was divided into 16 sections of equal length. These sections were washed in 100 mM NH4HCO3 and then in 50% v/v acetonitrile/100 mM NH4HCO3, and dehydrated using 100% acetonitrile. The proteins present in the gel sections were reduced by exposure to 50 μL of 10 mM DTT in 50 mM NH4HCO3 at 37 °C for 1 h.
They were then alkylated by adding 50 μL of 50 mM iodoacetamide dissolved in 50 mM NH4HCO3 at room temperature in the dark for 1 h. The gel fragments then underwent a brief 10-min rinse in 200 μL of 50% acetonitrile/100 mM NH4HCO3. Subsequently, they were dehydrated by being completely submerged in 100% acetonitrile and left to air-dry. The proteins were then digested overnight at 37 °C using 20 μL of trypsin at a concentration of 12.5 ng/μL in 50 mM NH4HCO3. To extract the peptides, 30 μL of a mixture containing 50% formic acid and 2% acetonitrile was added to the gel pieces. The resulting supernatant was then vacuum-dried for further analysis. Each sample was subjected to nano LC–MS/MS using an Easy-nLC 1000 liquid chromatography system attached to an LTQ-XL ion trap mass spectrometer (Thermo Fisher, CA, USA). The 200 Å, 5 μm analytical columns used contained Magic C18AQ resin (Michrom Bioresources, CA, USA) packed into a fused silica capillary with an integrated electrospray tip. The extracted peptides were separated using a gradient of 0–50% Buffer B (95% v/v acetonitrile, 0.1% v/v formic acid) against 100–50% Buffer A (5% v/v acetonitrile, 0.1% formic acid) over 58 min at a flow rate of 0.5 μL min−1, followed by a 5-min interval of 50–95% Buffer B at the same flow rate. Mass spectra were collected within the range of 400–1500 amu. Xcalibur software v2.06 (Thermo Fisher, CA, USA) was used for peak recognition, with a dynamic exclusion window of 90 s and MS/MS analysis of the top six most intense precursor ions at 35% normalized collision energy, as described previously . The 16 fractions of each experiment were sequentially processed and combined into a single output file to identify proteins, using the UniProt Hordeum vulgare protein database ( www.uniprot.org ) (35,907 entries, version 2020) and the PlantPres database (3466 entries, version 2017) . To account for null values and to facilitate log transformation for statistical analysis, protein abundance was determined using normalized spectral abundance factors (NSAF), with an additional spectral count of 0.5 added to all counts. The total NSAF values were used to measure relative protein abundance (Supplementary data S2) , . Statistically, multifactorial ANOVA followed by Duncan's multiple range test (DMRT) at the 0.05 level was conducted for the physiological traits, implemented in SAS software (SAS Inc., USA). Pairwise comparisons of the DAPs were conducted using Student's t-tests on the NSAF data. Any protein with a p-value below 0.05 was considered differentially abundant. The NSAF values of proteins under water stress were compared to those under control conditions to determine fold changes. The resulting unique and common DAPs were visualized through UpSet plots, available at https://gehlenborglab.shinyapps.io/upsetr/ . The online annotation tool Mapman ( http://www.plabipd.de/portal/mercator-sequence-annotation ), with default parameters , was used for the functional annotation of DAPs. MetaboAnalyst software was employed for multivariate statistical analysis, using partial least squares discriminant analysis (PLS-DA) and orthogonal PLS-DA (OsPLS-DA).
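For illustration, three of the computations described in this Methods section can be sketched in R. All inputs below are hypothetical placeholders, and the mixOmics package is used here in place of MetaboAnalyst's interface:

# (1) Remobilization indices from internode weights (cf. formulas above)
w_max <- 180   # maximum internode weight among 7, 14, 21, and 28 DAA (mg)
w_mat <- 110   # internode weight at physiological maturity (mg)
remob_rate <- w_max - w_mat                          # 70 mg
remob_efficiency <- (w_max - w_mat) / w_max * 100    # ~38.9%

# (2) NSAF normalization: add 0.5 to all spectral counts, divide by
#     protein length, then normalize so each sample's values sum to 1
spc <- matrix(rpois(200 * 5, lambda = 3), nrow = 200,
              dimnames = list(paste0("prot_", 1:200), paste0("sample_", 1:5)))
protein_length <- sample(100:1000, 200)   # hypothetical lengths (residues)
saf  <- (spc + 0.5) / protein_length      # length-adjusted counts
nsaf <- sweep(saf, 2, colSums(saf), "/")  # per-sample normalization
t.test(log(nsaf["prot_1", 1:3]), log(nsaf["prot_1", 4:5]))  # pairwise t-test

# (3) PLS-DA with VIP scores (mixOmics)
library(mixOmics)
X <- t(log(nsaf))                         # samples x proteins
condition <- factor(c("control", "control", "control", "stress", "stress"))
model <- plsda(X, condition, ncomp = 2)   # supervised discrimination
vip_scores <- vip(model)                  # VIP per variable and component
sum(vip_scores[, 1] > 1)                  # proteins with VIP > 1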
Multifactorial ANOVA was conducted to determine the metabolites that exhibited significant changes (P < 0.01) between conditions (control and drought) or among genotypes (tolerant, semi-tolerant, and susceptible). The penultimate internodes came from the three genotypes and were considered biological replicates; in total, eight independent biological replicates were collected at two different time points (21 and 28 DAA). To prepare the samples, fresh material of the Yousef, Morocco, and PBYT17 genotypes was frozen in liquid nitrogen and ground into a fine powder using a mortar and pestle. Primary metabolites were extracted from 100 mg of finely powdered fresh material using 1 mL of ice-cold methanol and chloroform (1:1, v/v), as previously reported , . Targeted metabolite analysis was conducted using external standards and the extracted samples on an IC-MS system comprising a Dionex ICS5000 (Dionex, Idstein, Germany) coupled with a 6490 Triple Quad LC/MSMS (Agilent, USA), following the methods described in our prior publications , . Metabolites were detected in negative ion mode with multiple reaction monitoring (MRM). Data extraction was performed using MassHunter software (version B.03.01, Agilent Technologies, Germany), and metabolite quantification was carried out by creating a batch for each sample set with Quantitative Analysis (QQQ) software (Agilent, Germany). To normalize the data, 13C-pyruvate was added to each sample as an internal standard before analysis. Metabolomics data are available in MetaboLights under accession number MTBLS11510 ( http://www.ebi.ac.uk/metabolights/ ). The statistical software MetaboAnalyst 6.0 ( https://www.metaboanalyst.ca/ ) was used for the analysis of the targeted metabolite data and to generate the profiles of the 17 metabolites in the three genotypes under water stress at 21 and 28 DAA. As in a previous study by Ghaffari et al., Pearson correlations (cor.test function in R) were used to analyze the relationship between the abundance of proteins and the efficiency of remobilization in the penultimate internodes of paired samples.
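A minimal sketch of the per-metabolite multifactorial ANOVA described at the start of this paragraph, with a hypothetical data frame standing in for the measured metabolite levels:

met <- data.frame(
  genotype  = rep(c("Yousef", "Morocco", "PBYT17"), each = 6),
  treatment = rep(c("control", "stress"), times = 9),
  citrate   = rlnorm(18)        # one metabolite used as an example
)
fit <- aov(citrate ~ genotype * treatment, data = met)
summary(fit)                    # retain effects with P < 0.01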
Spatio–temporal dynamics of bacterial community composition in a Western European watershed, the Meuse River watershed
06efec57-bc8e-4a16-a687-ee13389591be
11916896
Microbiology[mh]
Bacteria are integral to river ecosystems, where they contribute to vital biogeochemical processes such as organic matter decomposition and nitrification. Their importance is illustrated by the microbial loop, i.e. the assimilation of dissolved organic matter into biomass by bacteria, which are then ingested by protists, themselves preyed upon by zooplankton (Azam et al. ). This pathway of carbon and nutrient cycling through microbial components is coupled to the classic food chain formed by the phytoplankton–zooplankton–fish hierarchy. As in other environments, the analysis of bacterial community composition (BCC) in rivers has benefited from the rapid evolution of biomolecular techniques that started with low-resolution fingerprinting, followed by next-generation sequencing and, more recently, metagenomics. It has been shown in many studies that river microbial assemblages are dominated by a limited number of phyla: Actinomycetota, Pseudomonadota, Bacteroidota , and Cyanobacteriota (previously named Actinobacteria, Proteobacteria, Bacteroidetes , and Cyanobacteria , respectively) (Staley et al. , de Oliveira and Margis , Savio et al. , Wang et al. , , Hu et al. , Hassell et al. , Henson et al. , Blais et al. ). In particular, several genera have been frequently associated with freshwater environments, such as the hgcI clade ( Actinomycetota ) (Kang et al. , Newton et al. ), Flavobacterium ( Bacteroidota ) (Hagberg et al. , Kirchman ), Limnohabitans ( Pseudomonadota ) (Kasalicky et al. , Hu et al. ), and Fluviicola ( Bacteroidota ) (Guo et al. , Ji et al. ). One way to differentiate subgroups in aquatic bacterial communities is to analyze the BCC of particle-attached communities versus free-living ones. Indeed, the water column is a heterogeneous environment, where mineral or organic particles (e.g. flocs of decaying phytoplankton) provide various habitats and/or carbon sources for bacteria and are therefore considered hotspots of microbial abundance and activity compared to the free-living compartment (Crump et al. , Luef et al. ). Accordingly, many studies reported that river bacterial diversity is higher in the fraction recovered on 3- or 5-µm-sized filters (“particle-associated” bacteria) than in the flow through (“free-living” bacteria) (Crump et al. , Velimirov et al. , Savio et al. , Payne et al. , Henson et al. , Liu et al. ). Along the river course, variations in BCC have been reported that are reminiscent of what has been observed for benthic invertebrates and framed as the River Continuum Concept (Vannote et al. ). This concept describes how the physico–chemical characteristics of a river change along its course, leading to a predictable succession of biological communities. In several studies, such a succession has been observed for bacterial communities (Staley et al. , Savio et al. ). Headwaters (HW) hold a diverse community of scarcely active soil- and groundwater-affiliated taxa (Crump et al. , Savio et al. ) or, on the contrary, fast-growing r-strategists (e.g. some Bacteroidota ) (Read et al. ). Then, as the river progresses, the latter are progressively replaced by “typical” k-strategists (Read et al. ), which are small, nonmotile, slow-growing substrate specialists (Savio et al. , Niño-García ) belonging, among others, to the hgcI clade ( Actinomycetota ) and the Polynucleobacter and Limnohabitans genera (both Pseudomonadota ) (Livermore et al. , Pernthaler ).
Generally speaking, the structure of biological communities is described in the literature as the result of the interplay of two antagonistic mechanisms: “mass effect” and “species sorting” (Mouquet and Loreau , , Cadotte , Shanafelt et al. , Thompson and Gonzalez , Leibold et al. ). For riverine bacterial communities, the “mass effect” process can be portrayed as the input of allochthonous bacteria originating from the surrounding riparian zone that, when prevailing, leads to higher alpha diversity, lower beta diversity, and the dominance of certain species such as those typical of soils (Wang et al. ). This phenomenon holds particular significance in HW ecosystems. Those species are not the most locally adapted ones, but they are the most abundant ones at a regional scale. Conversely, “species sorting” is the selection of the fittest species by the local (a)biotic parameters, leading to a lower alpha diversity and a higher beta diversity across the rivers of a watershed (Suzuki and Economo ). In various studies, a gradual shift from mass effect to species sorting has been described along the river course. Indeed, as bacteria flow downstream, they face increased competition for resources (Crump et al. , Savio et al. , Niño-García et al. ), favoring the proliferation of the most competitive species. Conversely, other studies reported stable alpha diversity along the river course (Staley et al. , Wang et al. ) or an increase downriver (Henson et al. ), with no clear shift in terms of beta diversity. This variety of results between different studies suggests that the balance between local and regional processes differs from one river ecosystem to another. In addition, the specific parameters influencing BCC appear to differ considerably between river watersheds, thus hindering the identification of universally consistent factors. The driving parameters identified include temperature (Ma et al. , Reza et al. , Cruaud et al. , Wang et al. ), dissolved oxygen (DO; Feng et al. , Spietz et al. ), pH (Niño-García et al. , Doherty et al. , Mateus-Barros et al. ), salinity (Ma et al. ), total suspended matter (TSM) (Sommaruga and Casamayor ), concentration and/or quality of organic matter (Judd et al. , Staley et al. ), and nitrogen and/or phosphorus concentrations (Ma et al. , Hu et al. , Mateus-Barros et al. ). The impact of watershed characteristics has been highlighted as well, such as the distance from the river source (Paudel Adhikari et al. ), river discharge (Doherty et al. , Cruaud et al. , Caillon et al. ), landform (Liu et al. ), and land use or land cover (Van Rossum et al. , Hosen et al. ). Some studies have also confirmed the influence of season, which integrates several of the abovementioned parameters, on BCC (Crump et al. , Doherty et al. ). Lastly, the effect of biotic factors such as phytoplankton blooms (Winter et al. ) and composition (Šimek et al. ) or the grazing rate by protozoa (Salcher et al. ) has been reported as well. Going one step further, several studies have underscored the potential of bacteria as indicators of water quality due to their high sensitivity to variations in water physico–chemical parameters (Zhang et al. , Martinez-Santos et al. ). Their effectiveness as proxies for ecological status has been demonstrated across various aquatic environments. For example, in coastal ecosystems, Aylagas et al.
( ) developed a bacterial biotic index that showed a significant correlation with anthropogenic stressors such as polychlorinated biphenyls (PCBs), cadmium, and organic matter. Several families, among which Comamonadaceae and Flavobacteriaceae , were identified as indicators of poor ecological status. In the Songhua River, Yang et al. ( ) found that bacterial indicators of remediation could be identified based on their negative correlation with nitrate levels, including members of the Comamonadaceae family, Limnohabitans, Flavobacterium , and Rhodoferax . In the Danube River, Fontaine et al. ( ) utilized the negative correlation between bacterial taxa and Chl-a concentration—a proxy for eutrophication—to identify four genera as reliable indicators of good water quality: Fluviicola, Acinetobacter, Flavobacterium, and Rhodoluna . The Meuse River, which is the focus of this study, is 926 km long, ranking as the 11th longest river of Western Europe, and crosses three countries (Belgium, France, and the Netherlands) (Fig. ). Its watershed covers an area of 34 548 km2, also extends into parts of Germany, and is populated by roughly 7 million inhabitants [2009 census in Descy ( )]. Its mean annual discharge at Jambes (located midstream) is 159 m3/s (hydrometrie.wallonie.be). Its water serves different purposes such as agriculture, industry, drinking water supply, hydroelectricity production, and recreational activities (Descy ). Since the 1980s, several surveys have been undertaken on this river, centered on phytoplankton production (Descy ), bacterioplankton biomass and production (Servais ), planktonic food webs (Joaquim-Justo et al. , Servais et al. ), dissolved carbon dioxide, methane, and nitrous oxide concentrations (Borges ), or the effect of floods on TSM (Hamers et al. ). To our knowledge, this study provides the first comprehensive analysis of bacterial diversity in the Meuse River watershed, with a distinction between the large (LF) and small (SF) fractions. Three distinct sampling campaigns were undertaken: two spatial campaigns covering the entire watershed during spring and summer, when microbial activity is expected to peak, and one temporal campaign spanning a full year at a midstream site. The main objective of this study was to investigate the evolution of the BCC within the Meuse watershed using a spatio–seasonal approach, where the two fractions in the water column are considered separately (small versus large fraction). Specifically, this study aimed to: (i) assess whether spatial patterns of alpha diversity aligned with temporal ones, (ii) determine the extent to which environmental parameters influenced beta diversity, (iii) evaluate whether the BCC of the Meuse watershed was dominated by typical freshwater taxa, and (iv) determine whether dominant taxa could serve as bioindicators of river quality based on their correlation with environmental factors. Study sites and sampling strategy Sample collection and analysis of (a)biotic parameters DNA extraction, PCR, and sequencing Bioinformatic pipeline and downstream analyses Statistical analyses Alpha diversity was assessed based on the Shannon index, calculated using the Vegan package in R (Oksanen et al. ). Depending on whether the data were normally distributed, ANOVA or Kruskal–Wallis tests were applied to compare alpha diversity values between groups (SF spring versus LF spring, SF spring versus LF summer,…).
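For illustration, the alpha-diversity comparison just described, together with the beta-diversity workflow detailed in the next paragraph (zero replacement, clr transformation, PERMANOVA), might be sketched in R as follows. All inputs are hypothetical, and the clr step is written out explicitly here, whereas the study used the microbiome package:

library(vegan)
library(zCompositions)

# Hypothetical ASV table: rows = samples, columns = ASVs
asv <- matrix(rpois(8 * 300, lambda = 2), nrow = 8,
              dimnames = list(paste0("sample_", 1:8), NULL))
group <- factor(rep(c("SF_spring", "LF_spring"), each = 4))

# Alpha diversity: Shannon index per sample, then non-parametric comparison
shannon <- diversity(asv, index = "shannon")
kruskal.test(shannon ~ group)

# Beta diversity: replace zero counts with near-zero estimates, clr-transform,
# then PERMANOVA on Euclidean (Aitchison) distances
asv_nz <- asv[, colSums(asv) > 0]                       # drop empty ASVs
no_zero <- cmultRepl(asv_nz, label = 0, method = "CZM")
clr_mat <- t(apply(no_zero, 1, function(x) log(x) - mean(log(x))))
env <- data.frame(temperature = runif(8, 5, 25), DO = runif(8, 6, 12))
adonis2(dist(clr_mat) ~ temperature + DO, data = env)   # PERMANOVA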
To determine whether alpha diversity values were significantly linearly correlated with the distance from the river mouth, Pearson correlation coefficients were calculated. Spearman rank correlations were also carried out to determine potential correlations between the Shannon indexes, or the most abundant genera (top 20), and the physico–chemical parameters. The top 20 genera were determined separately for three distinct groups: spatial MR, spatial HW, and the temporal campaign. In both spatial groups, the top 20 genera were calculated by aggregating data from the SF and LF, as well as from both the spring and summer seasons. Similarly, for the temporal campaign, data from both fractions were aggregated to determine the top 20 genera. For beta diversity calculations, no rarefaction was performed. Instead, data were processed according to Gloor et al. ( ). First, the function cmultRepl from the zCompositions package (v1.3.4) was used. This function transforms ASVs with zero counts (which could cause errors after log-ratio transformation) into near-zero estimates (or probabilities of occurrence), thereby treating them as undersampled rather than absent. Afterwards, the microbiome package was used to perform a centered log-ratio (clr) transformation (Aitchison ). A PERMANOVA test (adonis2 function in R) was performed to identify which variables explained beta diversity among the parameters measured (i.e. the most abundant genera and the physico–chemical parameters). Distance-based redundancy analysis (RDA) was then executed using the Vegan package to represent the differences in beta diversity between the different groups of samples (spatial, temporal, SF, LF,…). The physico–chemical parameters measured during both the spatial and temporal campaigns were represented on the RDA plot as vectors, with their length positively correlated with the R2 values of the PERMANOVA test. During the spatial campaigns, which took place during the spring and summer of 2019, 42 sampling sites were analyzed (Fig. ). Twenty-four sites were sampled along the Meuse main river axis (MR), from the river source to its mouth, with a distance of roughly 30 km between sampling sites. Due to practical limitations, no sample was taken between the river source and 69 km downstream. Eighteen sites were sampled in HW, which were located within an area characterized by a single land use and at most 5 km from a stream source. The QGIS 3.16.7 software was used to visualize maps of the Meuse watershed and its land use in order to choose the sampling stations. The area where the Meuse meets the Rhine and forms a delta was excluded from the watershed representation (Fig. ) due to the complex water mixing in that section, which made it difficult to analyze the evolution of the Meuse BCC. Consequently, the study focused on a stretch of the river from its source to 926 km downstream, which corresponds to the entry point into the delta, with this point referred to as the “river mouth.” The temporal campaign was conducted at a site located midstream (Jambes, 440 km from the river mouth), which was sampled every second week for 1 year, from February 2019 to March 2020. GPS coordinates of all study sites can be found in , with sampling dates and values of the studied parameters. In small streams, surface water was carefully sampled within the first 30 cm of depth with a 10-l bucket. Elsewhere, where the river depth was greater than 1 m, surface water was collected using a bucket attached to a rope from the middle of a bridge.
Before collecting water, buckets were rinsed several times with water from the same sampling site. Afterwards, the water was transferred into 10-l bottles, which were rinsed the same way. The experimental protocols are detailed in the . On site, temperature and DO were measured. In addition, for bacterial production measurements, 10 ml of water were poured into 50-ml plastic flasks, which were stored in boxes filled with river water in order to maintain a temperature close to that of the river before incubation of the samples with the radioactive substrate (tritiated thymidine), performed at the laboratory. Other parameters were measured in the laboratory the same day: TSM, chemical oxygen demand (COD), and chlorophyll a (Chl-a). Phosphate and ammonium concentrations were measured for the samples of the spatial campaigns (HW and MR samples), but not for those of the temporal campaign, due to logistical limitations. River discharge data were obtained from public institutions monitoring rivers in France (hydro.eaufrance.fr), Belgium (voies-hydrauliques.wallonie.be), and the Netherlands (rijkswaterstaat.nl). Discharge could not be determined for HW streams. Water samples were first filtered on 5-µm pore-sized polycarbonate filters (Durapore, Merck Millipore, Ireland) to collect the “particle-associated”, or large fraction (LF), bacteria. The flow through was then filtered on 0.2-µm pore-sized filters to collect the “free-living”, or small fraction (SF), bacteria. Filtration was performed until the filter was clogged, which typically occurred after ~1 l had been filtered on a 5-µm filter and 300 ml on a 0.2-µm filter. DNA was extracted from the material retained on the membranes using a phenol–chloroform–isoamyl alcohol-based extraction protocol (detailed in the ). A two-step polymerase chain reaction (PCR) procedure was performed. PCR1 consisted of the amplification of the 16S rRNA gene and was executed in our laboratory, followed by gel electrophoresis to assess the quality of the amplicons, which were then stored at −20°C. The amplification protocol and the primers used [515F (GTGYCAGCMGCCGCGGTAA) and 806Rb (GGACTACNVGGGTWTCTAAT)] (Apprill et al. , Parada et al. ) were those recommended by the Earth Microbiome Project to amplify the V4 region of the 16S rRNA gene (Caporaso et al. , ). 2.5 µl of DNA (5 ng/µl) were put into a 0.2-ml PCR tube, with 5 µl of primer F (1 µM) and 5 µl of primer R (1 µM). Then, 12.5 µl of PCR mix were added (KAPA HiFi HotStart ReadyMix PCR Kit, Kapa Biosystems, Roche Sequencing, Switzerland). PCR1 was run as follows: 3 min at 95°C; 25 cycles of 30 s at 95°C, 30 s at 55°C, and 30 s at 72°C; and a final step of 5 min at 72°C. Amplicons were stored at −20°C. PCR2 consisted of ligating the indexed adaptors to the amplicons. It was performed at the “Genotoul bioinformatics platform Toulouse Occitanie” ( https://bioinfo.genotoul.fr ), which also carried out the Illumina MiSeq paired-end sequencing (2 × 250 bp). The sequences obtained were submitted to the NCBI Nucleotide Sequence Database (accession number: PRJNA1126447). SRA accession numbers are provided in . The demultiplexed raw file of sequence data from the 205 samples was processed using the DADA2 pipeline v1.16 (Callahan et al. ) in R v3.1 (RStudio Team ). The process followed by this pipeline has already been detailed in a previous study (Fontaine et al. ). First, primers were removed; then dereplication, denoising, and concatenation of the paired sequences were performed.
Additionally, forward and reverse reads were trimmed at 220 and 210 bp, respectively, in order to discard the low-quality parts of the sequences. Lastly, chimeras were removed. 11 992 816 reads remained out of 17 555 706. The lowest number of reads per sample was 1106, and the highest was 132 274. All samples combined, a total of 65 169 amplicon sequence variants (ASVs) was obtained. Those ASVs were compared to the Silva database (version 138.1) with the assignTaxonomy function in order to obtain their taxonomic identification. The bootstrapping threshold was set to 100. Sequences identified as belonging to the genus Pseudarcicella were then cross-referenced with the NCBI database using the BLAST tool ( https://blast.ncbi.nlm.nih.gov ) and subsequently reassigned to the genus Aquirufa . This change in identification is explained in the section "Discussion." Afterwards, eukaryotes, chloroplasts, mitochondria, and archaea were discarded. 9 528 216 reads (79.4%), corresponding to 31 578 bacterial ASVs (i.e. 48.5%), remained after this step. To perform alpha diversity comparisons, a random rarefaction of the ASV abundances was conducted using the "rrarefy" function (Oksanen et al. ). Prior to rarefaction, any sample containing <10 000 reads was excluded from the analysis to preserve sufficient diversity information. This exclusion resulted in the loss of 14 samples out of 205 (7% of the samples), including four samples of the spatial campaign on the MR, three of the temporal campaign on the MR, and seven of the spatial campaign on the HW. Ten of the 14 samples corresponded to LF samples. The rarefaction process was then carried out using the lowest number of reads among the 190 remaining samples, which was 10 019. The list of all ASVs and their abundance in those 190 samples, with the associated taxonomy, can be found in .

Identification of bacterial taxa correlated with physico–chemical parameters of water quality

In order to identify taxa among the top 20 most abundant ones that were correlated with physico–chemical parameters indicative of water quality, i.e. DO, ammonium, and phosphate concentrations, we further analyzed the Spearman correlation matrix of Fig. . Ten genera were identified as positively correlated with DO. The most significant correlations were observed for Flavobacterium and Rhodoferax (two discriminating taxa of beta diversity, Table ) and Methylotenera . In addition, a strong negative correlation was observed between DO and Cyanobium PCC 6307 (Fig. ). Another Spearman correlation matrix was calculated exclusively on the main fluvial axis spatial study ( ), during which the most abundant taxa exhibited great variations (Fig. ) and for which nutrient concentrations were available. Here again, the three abovementioned taxa ( Flavobacterium , Rhodoferax , and Methylotenera ) showed a strong positive correlation with DO, and the negative correlation between Cyanobium PCC 6307 and DO was confirmed. In addition, Flavobacterium and Rhodoferax were negatively correlated with phosphate concentration ( ). Furthermore, the relative abundance of Rhodoferax was positively correlated with COD, a proxy of the amount of organic matter in the water (Fig. and ).
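As a hedged illustration of the alpha-diversity workflow (rarefaction in the Methods, rank correlations in this section), the following R sketch assumes a samples × taxa count matrix `counts`, already aggregated at the genus level, and a matching data frame `env` of physico–chemical measurements; all object and column names are illustrative, not the authors'.

```r
library(vegan)

# Drop low-coverage samples, then rarefy to the lowest remaining depth (Methods)
counts <- counts[rowSums(counts) >= 10000, ]
counts_rar <- rrarefy(counts, sample = min(rowSums(counts)))
shannon <- diversity(counts_rar, index = "shannon")  # alpha diversity

# Spearman rank correlations, as in the matrices discussed in this section
rel <- counts_rar / rowSums(counts_rar)  # relative abundances
cor.test(rel[, "Flavobacterium"], env$DO, method = "spearman")
cor.test(shannon, env$phosphate, method = "spearman")
```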
Other genera were correlated with DO and nutrients in the spatio–seasonal campaigns of the Meuse River axis. On one side, Limnohabitans , Aquirufa , Comamonadaceae ASV5 , and Sphingorhabdus were positively correlated with DO and negatively correlated with phosphate ( ). On the other side, SAR11 Clade III ASV13 and Microcystis PCC 7914 were positively correlated with phosphate concentration.

Patterns of alpha diversity in the HW and along the MR

In the HW, no difference in alpha diversity was observed between spring and summer, nor between SF and LF bacterial communities (Kruskal–Wallis test, P -value = .25) (Fig. ). Median Shannon index values were 5.3 for SF spring, 5.7 for LF spring, 4.2 for SF summer, and 5.8 for LF summer. Variation in alpha diversity was relatively consistent across the four groups, except for SF summer, which showed greater variation. The lower number of samples for the summer season was due both to low sequencing coverage (which led those samples to be discarded, see the section "Materials and methods") and to the impossibility of sampling some streams because they were dry. Along the main stretch of the river, alpha diversity of the LF was greater than that of the SF, except close to the mouth of the river during summer, where the opposite trend was observed (Fig. ). Of note, the river source (the first sampling point on the MR axis, also included in the HW) was characterized by a much higher alpha diversity (Shannon index from 4.9 to 5.7) than the following sampling site located around 69 km downstream. During spring, alpha diversity of the LF and SF fractions increased significantly along the river course (respectively R 2 = 0.35, P -value = .0036; R 2 = 0.45, P -value = 5e −4 ) (Fig. ). This may be related to the river discharge, which was much higher during the spring campaign than during the summer one and increased sharply downstream ( ). The correlation between the Shannon index (for both fractions) and the distance from the river mouth or the river discharge in spring was further confirmed by calculations of Spearman correlation coefficients ( ρ = −0.49, P -value = .0042 for SF; ρ = 0.49, P -value = .0064 for LF) (Fig. ). Moreover, for both fractions, a positive correlation was also observed between the Shannon index and phosphate ( ρ = 0.55, P -value = .0003 for SF; ρ = 0.69, P -value = .0001 for LF) and ammonium ( ρ = 0.63, P -value = .046 for SF; ρ = 0.49, P -value = .034 for LF), and a negative one with Chl-a ( ρ = −0.38, P -value = .009 for SF; ρ = −0.38, P -value = .015 for LF). During summer, a significant decrease in alpha diversity occurred in the LF along the main axis ( R 2 = 0.37, P -value = .0036), with a major drop between km 267 and km 202 from the river mouth (Fig. ). On the contrary, no significant variation in alpha diversity could be observed for the SF along the main axis ( R 2 = 1.5e −5 , P -value = .99) (Fig. ). Calculations of Spearman correlation coefficients revealed that the alpha diversity of the SF was positively correlated with ammonium ( ρ = 0.55, P -value = .0034) and phosphate ( ρ = 0.43, P -value = .037), and negatively with DO concentration ( ρ = −0.57, P -value = .012) (Fig. ). No correlation of alpha diversity with river discharge was observed.

Evolution of alpha diversity over 1 year at one sampling site

Generally speaking, the Shannon index of bacterial communities at the station sampled every second week for 1 year (Jambes) was again higher for the LF than for the SF (Fig. ). Both fractions were characterized by notable variations in alpha diversity between successive sampling dates.
Nevertheless, a decreasing pattern in the alpha diversity of the LF could be highlighted during summer, followed by an increase at the end of the summer, in autumn, and in winter (Fig. ), to values above those observed in the two spatial campaigns on the MR axis (Fig. ). No clear pattern could be highlighted for the SF. Interestingly, the high values of alpha diversity of the LF fraction in autumn and winter matched those of river discharge at the same seasons ( ). The link between both variables was confirmed by the positive Spearman correlation between the Shannon index of the LF and river discharge ( ρ = 0.6, P -value = .002) (Fig. ). In addition, the Shannon index of the LF was positively correlated with the concentration of TSM ( ρ = 0.81, P -value = .00001), COD ( ρ = 0.57, P -value = .0487), and DO ( ρ = 0.34, P -value = .032); conversely, it was negatively correlated with temperature ( ρ = −0.62, P -value = .0026) and Chl-a ( ρ = −0.33, P -value = .0118) (Fig. ). Concerning the SF, the only significant correlation was that of the Shannon index with COD ( ρ = 0.73, P -value = .008).

Evolution of the 20 most abundant genera across the watershed and along the year

We then identified separately the 20 most abundant genera in the HW (Fig. ), along the Meuse main axis (MR) during spring and summer (Fig. ), and over 1 year at Jambes (Fig. ). All in all, this represented 33 different genera when the different studies were aggregated (Fig. ). In the HW, the top 20 most abundant genera accounted for 30%–40% of all ASVs (Fig. ), whereas they represented a more variable percentage of all ASVs (20%–80%) in both the MR and the temporal study (Fig. and ). Nine genera were unique to the top 20 of the HW (i.e. not found in the top 20 of the MR or of the temporal study). Those were, in decreasing order of abundance, Novosphingobium , Aurantimicrobium , Yersinia , Cellvibrio , Dechloromonas , Pseudarcobacter , TM7a, Pseudorhodobacter , and Rhodoluna . Novosphingobium , Dechloromonas , Pseudarcobacter , and Pseudorhodobacter were negatively correlated with temperature and positively with TSM, whereas Cellvibrio was negatively correlated with temperature but not with TSM (Fig. ). Nine top 20 genera were shared by the different studies (spatial HW, spatial MR, and temporal studies), representing almost half of the top 20 of each study. Among them, two genera, Flavobacterium and Limnohabitans , were especially abundant in all studies. The other ones were Comamonadaceae ASV5, Fluviicola , Methylotenera , Polynucleobacter , Aquirufa , Rhodoferax , and Simplicispira . Rhodoferax was more abundant in the HW, in both fractions and at both seasons, with a predominance in spring. A peak of this genus could also be observed during late autumn and winter in the LF (and to a lesser extent in the SF) of the temporal study. Finally, it was easily detected in both fractions of the MR until km 203 (from the river mouth) in spring. Rhodoferax was negatively correlated with temperature and positively with TSM (Fig. ). Flavobacterium was the second most abundant genus in the HW in spring and the first one in summer (Fig. ). Moreover, it was very abundant in the MR in spring (Fig. ), and during autumn and winter in the temporal study (Fig. ). In summer, its relative abundance was greatly reduced in the MR and at Jambes. Consistently, it was negatively correlated with temperature (Fig. ). This taxon showed no preference for either of the fractions. Limnohabitans was more abundant in the SF than in the LF in the HW during both seasons.
Similarly, it was very abundant in the SF of the MR at both seasons, as well as in the LF in summer. It remained stable throughout the temporal study for both fractions. Furthermore, its relative abundance decreased along the MR whatever the season or fraction (Fig. ). This change in relative abundance with distance was confirmed by a Spearman correlation ( ). Concerning Aquirufa , it was little detected in the HW. In the MR and at Jambes, it was more abundant in the SF than in the LF, and consistently detected in spring, autumn, and winter, while its presence in summer was sporadic (Fig. and ). Fluviicola and Polynucleobacter were present at all stations along the MR and throughout the year at Jambes, with no clear variation in relative abundance according to fraction or season for the former, the latter being more abundant in the SF. Comamonadaceae ASV5 was consistently detected in spring in the HW, MR, and temporal study, and in lower abundance in summer and autumn. Generally speaking, it was more abundant in the SF than in the LF. Methylotenera was detected in spring in the spatial campaigns (HW and MR), and in low abundance in autumn, winter, and spring at Jambes. It was largely absent from the water masses in summer and exhibited a strong negative correlation with temperature (Fig. ). Finally, Simplicispira was recovered in greater abundance in the SF than in the LF in all studies. It was less abundant in summer than in the other seasons. In addition, seven top 20 genera were shared by the MR and the temporal study but not detected among the 20 dominant taxa in the HW, which means that the water masses along the MR and at Jambes shared a majority (16) of their top 20 genera. Those seven genera were Armatimonas , Candidatus Planktophila, hgcI clade , NS11-12 ASV15, SAR11 Clade III ASV13, Sediminibacterium , and Sporichthyaceae ASV6. All were positively correlated with temperature and negatively with TSM (Fig. ). In agreement with those results, Candidatus Planktophila, hgcI clade , and Sporichthyaceae ASV6 shared similar spatio–temporal patterns, i.e. a greater relative abundance in summer. In addition, they were more abundant in the SF than in the LF. Those trends were also verified in the temporal study, in which those three taxa were almost absent in winter. Armatimonas was more abundant in the LF fraction in all sampling campaigns, especially in summer and autumn. SAR11 Clade III ASV13 was largely represented in the SF of the MR (where a steady increase in abundance was observed downstream, from km 465 to km 27 from the river mouth) and of the temporal study in summer. Finally, two taxa were strongly represented in the top 20 of the MR in summer but not in the other groups: Cyanobium PCC-6307 and Microcystis PCC-7914 (Fig. ). Both taxa were positively correlated with temperature (Fig. ). A positive correlation was also observed between Cyanobium PCC-6307 and TSM. Cyanobium PCC-6307 was the dominant genus in the upper part of the MR (from km 752 to km 552 from the river mouth) in both fractions (10%–37%), and Microcystis PCC-7914 dominated downstream in the LF (from km 203 to km 27 from the river mouth). At the last four stations, this single genus represented 50%–65% of all ASVs in the LF.

Driving parameters of beta diversity patterns

Table presents the ranges of abiotic and biotic parameters measured for the three campaigns: HW during spring and summer, MR during spring and summer, and the temporal study at Jambes.
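Before turning to the results, the beta-diversity workflow described in the statistical analyses (zero replacement, clr transformation, PERMANOVA, RDA) can be sketched in R as follows; `asv` (a samples × ASVs count matrix), `meta`, and its column names are illustrative assumptions, and the clr step is written out manually rather than through the microbiome package used by the authors.

```r
library(zCompositions)  # cmultRepl: zero replacement before log-ratios
library(vegan)          # adonis2 (PERMANOVA) and rda

# Replace zero counts with near-zero estimates, then clr-transform per sample
asv_nz  <- cmultRepl(asv, method = "CZM", output = "p-counts")
asv_clr <- t(apply(log(asv_nz), 1, function(x) x - mean(x)))

# PERMANOVA: which measured variables explain community dissimilarity
adonis2(dist(asv_clr) ~ season + temperature + distance_mouth, data = meta)

# RDA constrained by the same variables; parameter vectors can be drawn on the plot
ord <- rda(asv_clr ~ season + temperature + distance_mouth, data = meta)
plot(ord)
```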
A PERMANOVA test was performed to identify which (a)biotic parameters explained the dissimilarity between bacterial communities, all samples pooled. The test was significant for all physico–chemical parameters ( P -value < .05), and the best explanatory ones ( R 2 > 0.05) were season, temperature, and distance from the river mouth (Table ). Moreover, among the top 20 most abundant taxa calculated separately for the three datasets (HW, main fluvial axis, and temporal study) and aggregated, Flavobacterium emerged as the most influential taxon in discriminating the communities, followed by Rhodoferax and Sediminibacterium . On the redundancy analysis plot (Fig. ), a progressive distinction of samples according to distance and season was visible for the spatial surveys, with the HW samples being clearly separated from the others. Regarding the temporal survey, most samples of the autumn and winter seasons formed a cluster separated from the spatial study, revealing a different BCC during those seasons, while the summer and spring samples were grouped with the spatial samples of the same seasons. The temperature vector pointed toward the downstream summer samples (and concomitantly toward the summer samples of the temporal campaign), whereas the DO and TSM vectors pointed toward the autumn and winter samples of the temporal campaign. The latter vector also pointed toward some HW samples. The bacterial production and Chl-a vectors pointed toward the midstream summer samples, and the COD vector pointed toward the upstream samples of the spatial campaign. Despite being significant according to the PERMANOVA test (Table ), the factor "fraction size" did not differentiate the samples as strongly as season, temperature, and distance ( ).

Differences in BCC between spatial and temporal campaigns are mostly explained by season, distance, and temperature

Concerning the impact of season on beta diversity, it was expected to be significant. Indeed, the distinction between winter and summer samples, with spring and autumn samples intermediate, aligns with other studies (Crump and Hobbie , , Doherty et al. , Payne et al. ). The second most influential physico–chemical parameter driving beta diversity was temperature, which has been shown to differentiate BCC in several studies of fluvial axes (Cruaud et al. , Payne et al. ). The third driving parameter, distance from the river mouth, has been demonstrated to significantly influence BCC in various studies on temperate rivers undertaken during spring (Crump and Hobbie , Jordaan and Bezuidenhout , Read et al. , Savio et al. , Zhao et al. ). The same conclusion was drawn in a study on the Koshi River, which flows through regions with cold to tropical climates (Paudel Adhikari et al. ). However, a recent investigation on the Nile River (Eraqi et al. ) revealed that distance did not influence beta diversity, neither during summer nor winter, a notable deviation from previous findings. Finally, in our study, the impact of fraction size on beta diversity was weaker than that of the driving parameters mentioned earlier, even if it was still significant.
This result contrasts with several studies of riverine bacteria, which have shown a clear separation of samples according to fraction size (Savio et al. , Henson et al. ).

The LF holds greater diversity than the SF

In this survey of the Meuse watershed, both spatially (at two seasons) and temporally (at one sampling station midstream, Jambes), alpha diversity was significantly greater in the LF than in the SF. Such a trend was observed in the HW (although not significant there), in the waters of the main fluvial axis (with the exception of the summer samples close to the mouth, dominated by Cyanobacteriota ), and at Jambes throughout the year. This difference in bacterial diversity according to fraction size is in line with previous studies that addressed this topic (Crump et al. , Mohit et al. , Rieck et al. , Payne et al. , , Gweon et al. ). It is generally explained by the nutrient-rich and varied microenvironments associated with particles, which tend to harbor more diverse microbial communities than the free-living communities (Wang et al. ). On the other hand, the dominance of two Cyanobacteriota genera in the LF of some samples is likely due to their ability to form microcolonies, with an average size of 40 µm for Cyanobium (Jezberová and Komárková ) and >100 µm for Microcystis (Xiao et al. ). Despite significant differences in alpha diversity between the SF and the LF, numerous taxa were shared between the two fractions (i.e. Flavobacterium , Limnohabitans , Aquirufa , Sporichthyaceae ASV6, and Comamonadaceae ASV5). Indeed, many bacterial taxa can alternate between free-living and particle-associated lifestyles (Grossart ). However, notable differences in BCC between the SF and LF can also be highlighted in this study. Indeed, some taxa were far more present in the SF (i.e. hgcI clade, Polynucleobacter , and SAR11 clade III ASV13), whereas others were far more present in the LF (i.e. Armatimonas and Microcystis PCC-7914 ). Similar results were reported by Jackson et al. ( ), who observed a prevalence of the Cyanobium clade in the larger fraction of water masses in the Mississippi watershed in summer, while the SAR11 clade was predominantly found among bacteria of smaller fraction sizes. In addition, consistent with our findings, Savio et al. ( ) observed a dominance of the SAR11 clade and hgcI clade in the SF of the Danube River in summer. The ecology of several of these taxa is discussed further below.

Bacterial alpha diversity changes from HW to the mouth of the Meuse River

Unlike the waters of the main axis, the HW did not show significant changes in alpha diversity between seasons or fractions, revealing a stable diversity of the water masses. Our results contrast with another seasonal study on HW, in which a higher diversity was observed during spring compared to summer, explained by the higher influence of allochthonous inputs during spring (Laperriere et al. ). In addition, the Shannon index of the HW (around 5.5) was higher than that of the waters of the main axis (mostly between 4 and 5). Of note, the sharp decrease in alpha diversity that was observed between the HW (including the Meuse source, km 926 from the river mouth) and the second sampling point along the Meuse axis, located 69 km downstream, indicates that this stretch of the river deserves further exploration in the future, with sampling at intermediate locations. Nevertheless, the greater alpha diversity of HW compared to locations further downstream has been observed in the Danube as well (Savio et al. ). It was explained by the mass effect being a bigger driver of diversity upstream than species sorting.
Moreover, groundwater has been reported to hold a greater bacterial diversity than river water (Retter et al. , Ji et al. ). This difference is explained by the more neutral pH (Fierer et al. ) and more stable temperature of groundwater (Pinto and Nano ). Finally, Retter et al. ( ) highlighted that the greater productivity (based on cellular ATP and cell counts) in rivers compared to groundwater results in a lower diversity, which is consistent with our findings. In the spring campaign, the increase in alpha diversity along the main axis could be explained by higher precipitation than during the summer campaign. As a consequence of this precipitation, a steep, progressive increase in discharge was observed along the main axis. The positive correlation between alpha diversity and discharge could be explained by the dispersion effect overriding the species sorting effect during rainfall events. This hypothesis was put forward in the temporal study of a Canadian river subject to seasonal ice cover by Cruaud et al. ( ) and was also supported by the work of Caillon et al. ( ) on the effect of flood events on the BCC of streams. Conversely, the decrease in diversity along the main axis observed in summer (for the LF) was consistent with another study carried out at this season in other rivers of the Northern hemisphere (Ruiz-González et al. ). In our study, the summer campaign was characterized by a much lower flow than the spring one, and it is likely that species sorting overrode the mass effect.

Bacterial alpha diversity varies substantially over 2-week intervals at the same sampling station

Significant fluctuations in alpha diversity were observed within 2-week periods throughout the year. Similar observations were made in an annual study conducted at a single sampling site on the Mississippi River by Payne et al. ( ). The authors explained those short-term fluctuations, especially noticeable in the summer, by sudden and unpredictable disturbances happening briefly (such as variations in local currents). Seasonal variations were noticed in our study as well: the alpha diversity of the LF increased with river discharge (especially in winter). This rise in alpha diversity was most likely linked to a rise in the concentration of suspended particulate matter carried in the water during high-water events, providing additional microhabitats for the bacteria (Crump et al. , Ortega-Retuerta et al. ). Indeed, a high correlation coefficient was recorded between the alpha diversity of the LF and TSM in our study. The decrease in alpha diversity during summer was expected, as species sorting is known to be positively correlated with temperature (Wang et al. ).

The dominant genera unique to HW are not typical freshwater taxa

As mentioned earlier, Novosphingobium , Aurantimicrobium , Yersinia , Cellvibrio , Pseudarcobacter , TM7a, Pseudorhodobacter , Dechloromonas , and Rhodoluna were detected in the top 20 most abundant genera of the HW but not of the MR axis. Novosphingobium is a ubiquitous, metabolically versatile taxon that has been found in a large variety of habitats, where it decomposes organic compounds (including pollutants): the rhizosphere, contaminated bulk soils, seawater, and freshwater (Lee et al. , Sheu et al. , Kumar et al. ). The type strains of Aurantimicrobium have been isolated from various habitats such as freshwater (Nakai et al. ), a river receiving swine wastewater (Sun et al. ), and fish gut microbiota (Chen et al. ). Yersinia has been detected in various environments, such as human feces, animal feces and intestines, freshwater, and food (Sulakvelidze , Fukushima et al. ).
Cellvibrio is a genus associated with sediment, soil, and rhizosphere environments (Blackall et al. , Mergaert et al. , Zhang et al. , Lau and Furusawa ), with exceptional capabilities to degrade plant biomass (Xie et al. , Lau and Furusawa ). While it has been observed in localized HW in the Southeastern USA (Teachey et al. ) and in natural springs in Taiwan (Chen et al. ), its absence from studies performed on a broader scale, like that of Laperrière et al. ( ) in Northeastern USA streams, suggests that its distribution may be site-specific and heavily influenced by local environmental factors. The presence of Dechloromonas is often associated with anoxic, organic-rich environments such as wastewater treatment plants (WWTPs) (Hu et al. , Saunders et al. ). Pseudarcobacter has been detected in a variety of aquatic environments such as seawater and marine invertebrates, but also sewage and WWTPs (Basiry et al. ). Similarly, Pseudorhodobacter has been recovered from marine sediment, seawater, and marine invertebrates, but also wastewater (Bian et al. ) and sludge (Calderon-Franco et al. ). TM7a has been found in soils, the human gut, and riverine environments (Jin et al. ). Lastly, Rhodoluna is an aquatic genus that has been reported across the Danube River (Fontaine et al. ) but was only identified here as part of the top 20 genera of the HW, not of the MR. In conclusion, most genera exclusive to the HW of the Meuse watershed are associated with soil and/or aquatic environments predominantly rich in organic matter. This observation aligns with the greater values of COD recorded in the HW samples compared to those of the MR. In addition, the association of several of those taxa with wastewater/sludge suggests potential contamination of the HW sampling sites by wild animal feces, cattle, or possibly human wastewater. However, those results should be interpreted with caution, as many of the abovementioned bacterial genera include multiple species with different ecological niches.

Several dominant taxa detected along the MR axis and in the temporal study were identified as potential bioindicators of water quality

Three broad-ranging parameters were selected to assess water quality within the Meuse watershed: DO, ammonium, and phosphate concentrations. In the case of the Meuse River, Chl-a could not serve as a proxy of eutrophication, and thus as an indicator of river quality, due to its reduction by the activity of filter-feeding invasive species (discussed in detail below). Two dominant taxa in the temporal and spatial studies (MR and HW) were identified as potential indicators of water quality in the Meuse watershed, as has been the case in studies on other watersheds: Flavobacterium and Aquirufa . Regarding Flavobacterium , the prevalence of this primarily aerobic chemoorganotrophic genus can be attributed to its capacity for degrading a range of biopolymers like cellulose, chitin, and pectin (Kirchman ). In the Meuse River, Flavobacterium can be considered an indicator of good water quality, due to its positive correlation with DO and negative correlation with phosphate. The same status was inferred by Fontaine et al. ( ) in the Danube River. Its almost complete disappearance from the top 20 genera in the main axis of the Meuse River in summer is in line with results reported for the Mississippi River (Payne et al. ). One possible explanation can be found in the negative correlation of Flavobacterium with temperature in the Meuse fluvial axis. As for Aquirufa , it has recently been isolated from freshwater environments closely linked to terrestrial ecosystems.
It has the ability to degrade pectin, a polymer found in the cell walls of terrestrial plants (Pitt et al. , , Sheu et al. ). Moreover, its rhodopsin system allows it to perform photoheterotrophy (Pitt et al. ), enabling survival in nutrient-poor environments (Chiriac et al. ). It is suspected to be a prevalent freshwater taxon, as the 16S rRNA genes of isolates match those of uncultured clones found in various studies on rivers (Crump and Hobbie ), lakes (Burkert et al. ), and freshwater sediments (Tamaki et al. ). Aquirufa is closely related to Pseudarcicella , a genus initially isolated from leech skin (Kämpfer et al. ) and commonly identified in various riverine environments (Sun et al. , Yang et al. , Cruaud et al. ). It should be noted that Aquirufa can be mistaken for Pseudarcicella during routine identification with certain databases (Hahn M, personal communication), highlighting the importance of careful examination of ASV sequences. Aquirufa was considered an indicator of good water quality in the Meuse due to its positive correlation with DO and negative correlation with phosphate. This aligns with previous findings, which reported a strong negative correlation between Aquirufa abundance and total algae levels in a lake reoligotrophication assessment (Farkas et al. ). In addition to those taxa, it is noteworthy to highlight the presence of the genus Rhodoferax among the top 20 most abundant genera of the Meuse watershed, especially in the HW and the temporal campaign. Rhodoferax spp. are purple nonsulfur, mostly facultatively anaerobic bacteria (Kaden et al. ). This genus was defined as a "typical freshwater taxon" (Okafor ) that reduces iron. It represented ∼5% to ∼10% of all ASVs at the river source, which aligns with multiple previous studies reporting Rhodoferax in aquifers (Zhuang et al. , Abiriga et al. , Kasanke et al. ). Species sorting would then make it progressively decrease over the following kilometers, which was observed from the second sampling point along the river axis, especially in summer. Therefore, it can be considered a tracer of groundwater. Rhodoferax relative abundance was negatively correlated with temperature and phosphate concentration, two parameters increasing downstream. Its greater presence during late autumn and winter is consistent with a recent study on Chinese urban rivers (Wang et al. ) and with the identification of this genus in cold environments such as Arctic lakes (Van Trappen et al. ), beneath an Arctic glacier (Cheng and Foght ), and in the permafrost (Steven et al. ). Furthermore, two taxa were observed at specific locations and seasons with an indicator status of poor water quality that is backed up by the literature: Cyanobium PCC-6307 and Microcystis PCC-7914 . Those Cyanobacteriota genera were detected in great abundance in the LF of downstream locations along the fluvial axis in summer, concomitantly with the highest values of Chl-a within the MR (max 30 µg/l). However, no statistical correlation between those Cyanobacteriota relative abundances and Chl-a concentrations could be established. Phytoplankton blooms (∼120 µg of Chl-a/l) used to occur in the upstream section of the Meuse (until km 400 from the river mouth), where the discharge is still moderate (Descy et al. ). Indeed, phytoplankton production depends on the balance between growth rate and dilution rate. Downstream, the phytoplankton biomass would decrease due to dilution by tributaries, protozoan grazing, and cell mortality (Descy and Gosselain ).
However, this pattern is no longer valid, as blooms have drastically diminished in the Meuse River over the last 15 years due to the invasion of filter-feeding molluscs such as Dreissena polymorpha (the zebra mussel) and other Dreissena species (Marescaux et al. ), and Corbicula spp. (Pigneur et al. ). The greater abundance of Cyanobium midstream was negatively correlated with DO and positively with temperature. In accordance with this, Cyanobium has been described as frequently found in warm waters (Stanier et al. ). Moreover, the presence of Cyanobium PCC-6307 has been reported in a variety of aquatic environments, such as lakes and reservoirs, rivers (Eraqi et al. , Blais et al. ), coastal areas (Adyasari et al. ), seas (Kolda et al. ), and even WWTP effluents (Millar et al. ). Microcystis PCC-7914 has been reported in a smaller range of habitats, i.e. lakes (Wu et al. , Li et al. ) and rivers (Millar et al. ). Its peak close to the river mouth can be explained by its positive correlation with phosphate concentration (which increases downstream), a relationship already highlighted in previous studies (Davis et al. , Harke and Gobler ). Both Cyanobium PCC-6307 and Microcystis PCC-7914 are known to potentially release cyanotoxins (Millar et al. ); their presence therefore poses a health risk for the fauna, and potentially humans, and should be the object of further investigation. Other taxa were identified as indicators in this study (i.e. Limnohabitans , Methylotenera , NS11-12 marine group, and SAR11 clade III), but their indicator status contradicted findings from previous studies. Detecting Limnohabitans among the dominant genera was unsurprising. Indeed, Limnohabitans has been characterized as a genetically diverse taxon with a wide ecological distribution (Jezbera et al. ). Its significant contribution to freshwater bacterioplankton communities stems from its rapid substrate uptake and growth, utilization of algal-derived substrates, and susceptibility to high mortality rates from bacterivory (Kasalický et al. ). The extensive study on the Danube River by Fontaine et al. ( ) defined it as an indicator of eutrophic conditions, while here the opposite status was suggested, due to its positive correlation with DO and negative correlation with phosphate. The difference might be due to the criterion used to identify bacterial indicators: Fontaine et al. ( ) looked for correlations of taxa abundance with Chl-a concentration (a proxy for eutrophication), whereas we proceeded in the same way using nutrient and DO concentrations. Methylotenera and the NS11-12 marine group were other indicators of good quality in the Meuse watershed (positively correlated with DO and negatively with phosphate). However, previous studies have reported the presence of Methylotenera in large numbers in rivers affected by agricultural activities (Huang et al. ), and the NS11-12 marine group has been associated with metal contamination (Pb and Cu) in a coastal area (Coclet et al. ) and with dissolved organic carbon originating from algal blooms or from external inputs in lakes and rivers (Farkas et al. ). The fourth taxon, SAR11 clade III, is typically associated with marine habitats but has also been detected in freshwater environments (Tsementzi et al. ). Its summer peak in the Meuse is in agreement with the results of several studies in oceans (Carlson et al. , Eiler et al. ) and in lakes (Salcher et al. , Heinrich et al. ). It might be linked to the presence of proteorhodopsin in these bacteria (Atamna-Ismaeel et al. ).
Moreover, it presented a positive correlation with phosphate concentration, reflecting its potential as a bioindicator of poor freshwater quality. This correlation is either opposite to (Salcher et al. ) or consistent with (Heinrich et al. ) what was observed in lacustrine environments. Finally, regarding the other abundant genera across the Meuse watershed, we could not identify any as bioindicators. Some taxa ( Armatimonas , Candidatus Planktophila, hgcI clade , and Sporichthyaceae ASV6) were not identified as bioindicators in the literature either, whereas others had been classified as indicators of good river quality, i.e. Fluviicola (Ji et al. ) and Sediminibacterium (Song et al. ), or of poor river quality, i.e. Polynucleobacter (Pandey et al. ) and Simplicispira (Vignale et al. ).

This work was the first to address the BCC of the Meuse River watershed. Furthermore, its originality was to combine a spatio–seasonal survey with a high-frequency annual survey. The taxa identified in the HW and the main Meuse River, at different time scales, were consistent with those found in other freshwater environments. Similarly, the main environmental parameters explaining the dissimilarity of BCC between sampling locations have been reported in other surveys of lotic bacterial communities. Yet, the riverine BCC in the Meuse watershed and its spatio–temporal variations were unique, further illustrating the absence of a single pattern of bacterial diversity in rivers worldwide. A notable distinction in our study was the relatively minor influence of fraction size on BCC variations compared to the more significant roles of season, temperature, and distance from the river mouth. This contrasts with other studies, which have placed greater emphasis on fraction size. Moreover, some bacterial taxa were significantly correlated with physico–chemical parameters, highlighting their potential as indicators of good water quality in the Meuse River, i.e. Flavobacterium , Limnohabitans , Aquirufa , Methylotenera , Rhodoferax , and the NS11-12 marine group. Conversely, indicators of poor river quality could be identified as well, i.e. Cyanobium PCC-6307, Microcystis PCC-7914 (particularly abundant in the summer campaign on the Meuse), and SAR11 clade III. It is important to mention, however, that the identification of those "bioindicator" genera was constrained by the limited number of physico–chemical parameters measured in this study. Moreover, ammonium and phosphate were only measured in the spatial study on the MR axis, due to technical limitations. To increase the discriminating power of such analyses, measurements of those parameters should be included in any future study, along with other parameters such as dissolved organic carbon and nitrate. Additional spatial studies on this watershed during autumn and winter would be valuable to confirm the pronounced differences in BCC observed during the temporal campaign at those seasons. Furthermore, a multiyear analysis would provide a clearer understanding of the spatio–seasonal patterns in the Meuse watershed and potentially reveal the impact of climate change on riverine BCC. In that respect, performing analyses based on sequencing of 16S rRNA transcripts would provide an additional standpoint on the Meuse BCC by allowing identification of the active fraction of the bacterial community inhabiting the water column. Lastly, metagenomic analyses would allow characterization of the key functions performed by the river microbiota that we have characterized in this study.
fiaf022_Supplemental_Files
Code for analyses is available at https://github.com/valbarberoux/R-script—Meuse-BCC
Elsholtzia ciliata (Thunb.) Hyland belongs to the genus Elsholtzia , family Lamiaceae. In the clinical application of traditional Chinese medicine, the aerial parts of Mosla chinensis Maxim (MCM) and Mosla chinensis Maxim cv. Jiangxiangru (JXR) are used as E. ciliata . MCM is mostly wild, and JXR is the cultivated product of MCM, which was formerly often confused with Elsholtzia splendens Nakai ex F. Maek. . However, Ganpei Zhu holds that JXR shows clear morphological differences from MCM. The plant height of JXR can reach 25–66 cm. The stem has gray-white curly pubescence. The leaf blade is broadly lanceolate to lanceolate, and the leaf margin is distinctly serrate. The bracts are obovate to ovate, and the calyx lobes are triangular-lanceolate. There is a hairy ring at the base of the corolla tube. The nutlets are yellowish brown and nearly round, with a lightly sculptured, reticulate surface, flattened on the inner face. MCM plants are shorter, with an inversely pilose stem. The leaf blade is linear to linear-lanceolate, with an inconspicuously serrate margin. The bracts are ovate-orbicular, and the calyx lobes are subulate. There is no hairy ring at the base of the corolla tube. The nutlets are nearly spherical, brown, with a deeply sculptured, uneven reticulate surface . Therefore, JXR should be listed as an independent variety . E. ciliata is a herbaceous plant distributed in Russia (Siberia), Mongolia, Korea, Japan, India, the Indochina peninsula, and China; it has also been introduced and cultivated in Europe and North America. In China, it is produced almost throughout the country, except Xinjiang and Qinghai. It has low requirements for its growth environment and a short growth cycle, flowering from July to October and being harvested in summer and autumn . Traditional Chinese medicine theory holds that E. ciliata is pungent in flavour and warm in nature, with the effects of inducing diaphoresis to relieve the exterior, resolving dampness to regulate the stomach, and inducing diuresis to reduce edema. The following is a review of its chemical composition and pharmacological activities.

A total of 352 compounds have been identified from E. ciliata . Among them, flavonoids and terpenoids are the main components, accounting for the pronounced antimicrobial, anti-inflammatory, and antioxidant effects of E. ciliata . Terpenoids such as 3-carene and some aromatic compounds such as carvacrol exhibit antimicrobial activity. Some polysaccharides can inhibit the proliferation of tumor cells and show positive effects in immunoregulation. Compounds 1–48 are flavonoids, 49–77 are phenylpropanoids, 78–193 are terpenoids, 194–202 are alkaloids, and 203–352 are other compounds. Compounds 1–352 are listed in and their structures are shown in .

In the traditional application of Chinese medicine, E. ciliata is mainly used for the treatment of summer colds, aversion to cold with fever, headache without sweating, abdominal pain, vomiting and diarrhea, edema, and poor urination. Modern pharmacological studies show that E. ciliata has antioxidant, anti-inflammatory, antimicrobial, insecticidal, antiviral, hypolipidemic, hypoglycemic, analgesic, antiarrhythmic, antitumor, anti-acetylcholinesterase, and immunoregulatory activities.
3.9. Others

Different polar ethanol extracts of JXR inhibited α -glucosidase activity to different degrees; JXR therefore has a certain hypoglycemic activity. At a polar ethanol extract concentration of 4.0 mg/mL, the inhibition rate of the petroleum ether extract was 93.8% (IC 50 0.339 mg/mL) and that of the ethyl acetate extract was 92.8% (IC 50 0.454 mg/mL). The essential oils prepared by steam distillation, petroleum ether cold extraction, and petroleum ether reflux extraction also showed significant inhibition of α -glucosidase at a concentration of 0.25 mg/mL, with inhibition rates above 90% . The results of the formalin-induced licking test showed that E. ciliata crude ethanol extract has an analgesic effect in the early phase of the reaction (0–5 min) . The Langendorff-perfused isolated rabbit heart model was also used. When E. ciliata essential oil was added to the perfusate at increasing concentrations in the range of 0.01–0.1 μL/mL, the QRS interval increased, the QT interval shortened, the action potential upstroke amplitude decreased, and the activation time was prolonged, in a concentration-dependent manner. This may be because sodium channel block can increase the threshold of action potential generation, prolong the effective refractory period, and inhibit phase-0 depolarization of late depolarizations, while the reduction of action potential duration can reduce the occurrence of early depolarizations. This experiment provides a theoretical basis for the use of E. ciliata in the treatment of arrhythmia . 7- O -(6- O -acetyl)- β -D-glucopyranosyl-(1→2)[(4- O -acetyl)- α -L-rhamnopyranosyl-(1→6)]- β -D-glucopyranoside from the methanol extract of E. ciliata was hydrolyzed to obtain acacetin. The IC 50 of acacetin against acetylcholinesterase was 50.33 ± 0.87 μg/mL, a significant inhibitory effect that may hold promise for the treatment of Alzheimer's disease .

3.1. Antioxidant Activity

Oxidative stress refers to a state of imbalance between oxidation and antioxidant defenses in vivo. It is a negative effect caused by free radicals in the body and is considered an important factor in aging and disease. It was reported that the essential oil of E. ciliata could increase catalase (CAT) activity in the brains of mice by 26.94%, which may be related to the decomposition of hydrogen peroxide by CAT, reducing oxidative stress . The ethanol extract of E. ciliata contains the phenolic compound osmundacetone. In the DPPH experiment, the IC 50 value of osmundacetone was 7.88 ± 0.02 µM, indicating a certain antioxidant capacity. The inhibitory effect of osmundacetone on glutamate-induced oxidative stress in HT22 cells was studied by the reactive oxygen species (ROS) method. The results showed that osmundacetone significantly reduced the accumulation of ROS and could be used as a potential antioxidant . In a study of the effect of E. ciliata methanol extract on J774A.1 murine macrophages, the evaluation of antioxidant activity showed that all the tested compounds had significant effects on ROS release under oxidative stress at the highest concentration (10 µM), especially luteolin-7- O - β -D-glucopyranoside, luteolin, and 5,6,4'-trihydroxy-7,3'-dimethoxyflavone . Several groups have studied extracts of E. ciliata of different polarities.
According to the free radical scavenging experiments of Huynh Xuan Phong et al., E. ciliata extract showed certain scavenging ability against 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radicals, with IC 50 values of 495.80 ± 17.16 and 73.59 ± 3.18 mg/mL, respectively . In the DPPH experiment, the EC 50 values of the dichloromethane extract, crude ethanol extract, and n-hexane extract were 0.041, 0.15, and 0.46 µg/µg, respectively, showing strong antioxidant activity. Such antioxidant capacity may be related to the non-polar flavonoids and phenols contained in E. ciliata : the dichloromethane extract had a total phenol content of 96.68 ± 0.0010 µg GAE/mg extract and a total flavonoid content of 71.5 ± 0.0089 µg QE/mg extract, and therefore the strongest antioxidant capacity . Jing-en Li et al. partitioned a JXR ethanol extract with petroleum ether, ethyl acetate, and water-saturated n-butanol, and studied the antioxidant activities of the three fractions and the aqueous phase. The results indicated that the ethyl acetate fraction showed good antioxidant activity in the ferric reducing antioxidant power (FRAP), DPPH, and β -carotene assays, which may be related to its higher flavonoid content . The antioxidant ability of the polysaccharide MP-I contained in the JXR water extract was concentration-dependent. At a concentration of 16 mg/mL, the Fe 2+ chelation rate of MP-I was 87.80%; at 20 mg/mL, the scavenging rate of DPPH radicals was 81.32% and that of hydroxyl radicals was 81.94% . The DPPH IC 50 values of MCM essential oil and methanol extract were 1230.4 ± 12.5 and 1482.5 ± 10.9 μg/mL, respectively; the reducing power EC 50 values were 105.1 ± 0.9 and 313.5 ± 2.5 μg/mL, respectively; and the β -carotene bleaching assay EC 50 values were 588.2 ± 4.2 and 789.4 ± 1.3 μg/mL, respectively. The total phenolic content of the essential oil was about 1.7 times that of the methanol extract, further confirming the stronger antioxidant capacity of the essential oil . Different parts of E. ciliata have different antioxidant capacities. Lauryna Pudziuvelyte et al. used DPPH, ABTS, FRAP, and cupric ion reducing antioxidant capacity (CUPRAC) assays to evaluate the antioxidant activity of different parts of E. ciliata . The DPPH and ABTS results showed that the ethanol extracts of E. ciliata flowers, leaves, and the whole plant had the highest total phenolics content (TPC) and total flavonoids content (TFC) and the strongest antioxidant activity. The FRAP and CUPRAC results showed that the ethanol extract of E. ciliata flowers had the highest antioxidant activity. Among the different plant parts, the stem extract had the lowest content of quercetin glycosides, phenolic acids, TPC, and TFC, and the lowest antioxidant activity . The ethyl acetate fraction of E. ciliata was purified on macroporous resin with 80% ethanol to obtain fraction E. In the DPPH experiment, the EC 50 value of fraction E was 0.09 mg/mL, showing the strongest antioxidant and free radical scavenging ability; this value was lower (i.e. more potent) than those of the positive controls butylated hydroxytoluene (0.45 mg/mL), butylated hydroxyanisole (0.21 mg/mL), and vitamin C (0.41 mg/mL). Hence, E. ciliata has the potential to prevent cardiovascular diseases, cancer, and other diseases caused by excess free radicals .

3.2. Anti-Inflammatory Activity
The compounds pedalin, luteolin-7- O - β -D-glucopyranoside, 5-hydroxy-6,7-dimethoxyflavone, and α -linolenic acid in the essential oil of E. ciliata were investigated in a lipopolysaccharide (LPS)-induced inflammatory reaction. They inhibited ROS release, but the mechanism deserves further study . LPS-induced inflammation was evaluated by the levels of inflammatory mediators, i.e., tumor necrosis factor- α (TNF- α ), interleukin (IL)-6, and prostaglandin E2 (PGE2). E. ciliata ethanol extract significantly inhibited the secretion of inflammatory mediators: TNF- α and IL-6 were effectively inhibited by the stem and flower parts, and the PGE2 pathway was inhibited by the leaf part . The effect of E. ciliata on inflammation was further verified in LPS-induced pyretic rats and in LPS-stimulated RAW264.7 mononuclear macrophages: E. ciliata essential oil and water decoction reduced the contents of PGE2, TNF- α , and other inflammatory factors to different degrees, and reduced the content of nitric oxide (NO) in serum . Excessive NO can induce the production of pro-inflammatory factors, such as TNF- α and IL-1 β , and aggravate the inflammatory response . JXR alleviates dextran sulfate sodium-induced colitis in mice by affecting the release of NO, PGE2, and other inflammatory mediators and cytokines . Carvacrol in MCM can inhibit the expression of the pro-inflammatory cytokines interferon- γ (IFN- γ ), IL-6, and IL-17 and up-regulate the expression of the anti-inflammatory factors TGF- β , IL-4, and IL-10, thus reducing the level of inflammatory factors, reducing damage to cells, and achieving anti-inflammatory effects . In the formalin-induced licking response test, E. ciliata crude ethanol extract and dichloromethane extract at 100 mg/kg shortened the licking time in the late phase, and the n-hexane extract at 100 mg/kg shortened it in the early phase, which may be related to the anti-inflammatory effect . The water extract of E. ciliata has anti-allergic inflammatory activity, which may be related to the inhibition of calcium influx and of p38 mitogen-activated protein kinase and nuclear factor- κ B expression in a human mast cell line .

3.3. Antimicrobial Activity

Different polar extracts of E. ciliata demonstrated significant differences in inhibitory ability against microorganisms. The results showed that the dichloromethane fraction had the strongest inhibitory activity against Candida albicans , with a minimum inhibitory concentration (MIC) of 62.5 µg/mL, while the n-hexane fraction had the strongest inhibitory effect on Escherichia coli , with a MIC of 250 µg/mL . The ethyl acetate extract of JXR had a strong inhibitory effect on Rhizopus oryzae , with an inhibition zone diameter of 13.7 ± 2.7 mm, a MIC of 5 mg/mL, and a minimum bactericidal concentration (MBC) of 5 mg/mL . The MICs of the JXR petroleum ether, n-butanol, and ethanol extracts against Escherichia coli , Staphylococcus aureus , and Bacillus subtilis were 31.25 μg/mL, and the MIC of the ethyl acetate extract was 15.60 μg/mL . The carbon dioxide extract of E. ciliata demonstrated a certain inhibitory effect on Staphylococcus aureus , Salmonella paratyphi , and other microorganisms. At an extract concentration of 0.10 g/mL, the inhibitory effect on Staphylococcus aureus was the most obvious, with an inhibition zone diameter of 19.7 ± 0.1 mm . According to existing reports, E. ciliata is rich in essential oil, which contains abundant antibacterial constituents and can inhibit a variety of microorganisms, and it therefore merits further research.
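As an aside on the IC 50 , EC 50 , and LC 50 figures quoted throughout this review: such values are typically estimated by fitting a log-logistic dose–response model to assay data. A minimal R sketch using the drc package is given below; the data are invented for illustration and are not from any of the cited studies.

```r
library(drc)

# Illustrative DPPH-type data: % inhibition at increasing concentrations
dose_resp <- data.frame(
  conc = c(0.05, 0.1, 0.25, 0.5, 1, 2, 4),   # mg/mL (made-up values)
  inhibition = c(5, 12, 28, 47, 68, 85, 93)  # % radical scavenging
)

# Four-parameter log-logistic fit
fit <- drm(inhibition ~ conc, data = dose_resp, fct = LL.4())

# Effective dose for a 50% response (the IC50), with its standard error
ED(fit, 50)
```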
The main antibacterial components of the essential oil of E. ciliata are thymol, carvacrol, and p-cymene, which have inhibitory effects on Staphylococcus aureus , methicillin-resistant Staphylococcus aureus , and Escherichia coli ; the MICs were 0.39, 3.12, and 1.56 mg/mL, and the inhibition zone diameters were 21.9 ± 0.1230, 18.2 ± 0.0560, and 16.7 ± 0.0115 mm, respectively . The essential oils from E. ciliata flowers, stems, and leaves had inhibitory effects on Escherichia coli , Staphylococcus aureus , Salmonella typhi , Klebsiella pneumoniae , and Pseudomonas aeruginosa . Both had the strongest inhibitory effect on Staphylococcus aureus , with inhibition zone diameters of 12.2 ± 0.4 and 11.2 ± 0.1 mm, respectively . Other findings suggest that JXR essential oil may inhibit the formation of Staphylococcus aureus biofilm, thereby achieving a bacteriostatic effect on its growth. The MIC of JXR essential oil against Staphylococcus aureus was 0.250 mg/mL. At a concentration of 4 × MIC, the inhibition rate of the essential oil on Staphylococcus aureus biofilm formation reached 91.3%, and the biofilm clearance rate was 78.5%. The MICs of carvacrol, thymol, and carvacrol acetate against Staphylococcus aureus were 0.122, 0.245, and 0.195 mg/mL, respectively; these are the effective antibacterial components of the essential oil. Carvacrol, carvacrol acetate, α -cardene, and 3-carene had strong inhibitory effects on the formation of Staphylococcus aureus biofilm, with inhibition rates above 80% at 1/4 MIC (0.0305, 1.4580, 0.1267, and 2.5975 mg/mL, respectively) . In another study, Li Cao et al. studied the inhibitory effect of MCM essential oil on 17 microorganisms, among which it significantly inhibited Chaetomium globosum , Aspergillus fumigatus , and Candida rugosa ; the inhibition zone diameters were 16.3 ± 0.58, 15.0 ± 1.00, and 16.0 ± 0.00 mm, and the MICs were 31.3, 62.5, and 62.5 μg/mL, respectively . It also had an obvious inhibitory effect on Bacillus subtilis and Salmonella enteritidis , which might be related to the terpenes contained in the oil, although this remains to be verified . Thymol and carvacrol are the main antibacterial components of MCM. Caryophyllene oxide can be used in the treatment of dermatomycosis, especially in the short-term treatment of nail mycosis . The bactericidal mechanism of the essential oil may be that active components such as carvacrol damage cell membranes and alter their permeability . The extract of MCM had a significant inhibitory effect on the spore germination of Aspergillus flavus and could significantly change the morphology of its mycelia, foot cells, and conidiophores, with a MIC of 0.15 mg/mL . The germination rate of Penicillium digitatum treated with carvacrol decreased significantly; the mechanism may be that carvacrol changes the surface morphology of the mycelia, whose cavitation rate increased with increasing carvacrol concentration. The permeability of the cell membrane increases, causing an electrolyte imbalance; as a result, the sugar content and nutrients in the cells are reduced, achieving inhibition. The MIC and MBC of carvacrol against Penicillium digitatum were 0.125 and 0.25 mg/mL, respectively .

3.4. Insecticidal Activity

Some studies have shown that E. ciliata has an insecticidal effect. The repellency rate of E. ciliata essential oil against Blattella germanica was 64.50%, with no significant difference from the positive control diethyltoluamide (DEET) ( p > 0.05).
The RD 50 of E. ciliata essential oil was 218.634 µg/cm 2 , better than that of DEET (650.403 µg/cm 2 ) . The contact toxicity IC 50 of E. ciliata essential oil against Liposcelis bostrychophila was 145.5 μg/cm 2 , and the fumigation toxicity IC 50 was 475.2 mg/L. (R)-Carvone, dehydroelsholtzia ketone, and elsholtzia ketone are the active components of E. ciliata essential oil against Liposcelis bostrychophila ; their contact toxicity IC 50 values were 57.0, 151.5, and 194.1 μg/cm 2 , and their fumigation toxicity IC 50 values were 417.4, 658.2, and 547.3 mg/L, respectively . Carvone and limonene are the two main components of E. ciliata essential oil. The activity of E. ciliata essential oil, carvone, and limonene against Tribolium castaneum larvae and adults was evaluated by contact toxicity and fumigation assays. The contact toxicity test showed that the LD 50 values of E. ciliata essential oil, carvone, and limonene were 7.79, 5.08, and 38.57 mg/adult against Tribolium castaneum adults, and 24.87, 33.03, and 49.68 mg/larva against larvae, respectively. The fumigation toxicity test showed LC 50 values of 11.61, 4.34, and 5.52 mg/L air for adults, and 8.73, 28.71, and 20.64 mg/L air for larvae, respectively . Thymol, carvacrol, and β -thymol contained in JXR essential oil had significant fumigation toxicity against Mythimna separata , Myzus persicae , Sitophilus zeamais , Musca domestica , and Tetranychus cinnabarinus , among which β -thymol had the strongest activity; the IC 50 values for the five pests were 10.56 (9.26–12.73), 14.13 (11.84–16.59), 88.22 (78.53–99.18), 10.05 (8.63–11.46), and 7.53 (6.53–8.79) μL/L air, respectively . Determined by the immersion method, the LC 50 values of MCM essential oil against fourth-instar Aedes albopictus larvae and pupae were 78.820 and 122.656 μg/mL, respectively. The repellent activity of MCM essential oil was evaluated by the effective protection time after application to human skin: at a dose of 1.5 mg/cm 2 , the complete protection time against Aedes albopictus was 2.330 ± 0.167 h . From this point of view, E. ciliata essential oil has potential for development as a natural insect repellent and provides a basis for the development and utilization of pesticide formulations. Leishmania mexicana can cause cutaneous leishmaniasis. E. ciliata essential oil had anti-leishmanial activity, with an IC 50 of 8.49 ± 0.32 nL/mL and a survival rate of treated Leishmania mexicana mexicana of 0.38 ± 0.00%. The selectivity indices were 5.58 and 1.56 for the mammalian cell lines WI38 and J774, respectively. This provides a reference for the treatment of cutaneous leishmaniasis . E. ciliata water extract has an obvious effect against Trichomonas vaginalis , destroying the structure of the parasite and thereby killing it. In vitro experiments showed that the lowest effective concentration of E. ciliata water extract was 62.5 mg/mL, with a lowest effective time of 12 h; at a concentration of 250 mg/mL, all Trichomonas vaginalis were killed within 4 h. This experiment provides a new idea for the clinical treatment of vaginal trichomoniasis .

3.5. Antiviral Activity

T helper 17 (Th17) cells play an important role in maintaining adaptive immune balance, and an excess of Th17 cells can cause inflammation. Carvacrol exerts an anti-influenza effect by reducing the proportion of Th17 cells, which is significantly increased by influenza A virus infection.
It can therefore be used as a potential antiviral drug and also to control inflammation caused by influenza A virus infection. Mice with viral pneumonia modeled with A/PR/8/34 (H1N1) virus were treated with low, medium, and high doses of MCM total flavonoids. The lung indices of the three dose groups were 12.81 ± 3.80, 11.65 ± 2.58, and 11.45 ± 2.40 mg/g; compared with the infection group (16.05 ± 3.87 mg/g), the inhibition rates were 20.18%, 27.41%, and 28.66%, respectively. E. ciliata ethanol extract inhibits the proliferation of avian infectious bronchitis virus, which may be related to the increased expression of three antiviral genes, suppressor of cytokine signaling 3 (SOCS3), 2′-5′-oligoadenylate synthetase-like (OASL), and signal transducer and activator of transcription 1 (STAT1), in extract-treated H1299 cells; this inhibitory effect shows a degree of concentration dependence. In addition, the extract showed no cytotoxicity at concentrations below 0.3 g/mL. The above experiments suggest new possibilities for treating virus-induced inflammation. A/WSN/33/2009 (H1N1) virus was used to infect Madin-Darby canine kidney cells to explore the in vitro antiviral activity of phenolic acids from MCM. The survival rates of cells treated with the compound 3-(3,4-dihydroxyphenyl) acrylic acid 1-(3,4-dihydroxyphenyl)-2-methoxycarbonylethyl ester and with methyl lithospermate were higher than 80%, and the viral inhibition rates at 100 μmol/L were 89.28% and 98.61%, respectively. In another study, the lung indices of mice infected with A/PR8 influenza virus and treated with low, medium, and high doses of MCM water extract were 1.21 ± 0.22%, 1.12 ± 0.17%, and 0.94 ± 0.21%, respectively; compared with the virus-infected group (1.80 ± 0.29%), the inhibition rates were 32.78%, 37.78%, and 47.78%. The extract at all three doses increased serum IL-2 and IFN-γ in mice, directly or indirectly promoting the antiviral capacity of the body. Fluoranthene is a compound with antiviral activity extracted from E. ciliata . It has a certain inhibitory effect on two enveloped viruses, Sindbis virus and murine cytomegalovirus, with lowest effective concentrations of 0.01 and 1.0 μg/mL, respectively; however, its biological effects are complex, and its clinical safety and efficacy require further research. The hypolipidemic activity of E. ciliata ethanol extract was evaluated by determining its effects on serum triglyceride and total cholesterol levels in mice in vivo and on the proliferation of 3T3-L1 preadipocytes in vitro. Serum triglyceride and total cholesterol levels decreased in extract-treated mice, and the differentiation of, and lipid accumulation in, 3T3-L1 preadipocytes were also effectively inhibited. The expression of adipogenesis-related genes such as peroxisome proliferator-activated receptor γ (PPARγ), fatty acid synthase (FAS), and adipocyte fatty acid-binding protein 2 (aP2) was also significantly reduced. In addition, serum leptin in the extract-treated group was lower than in obese mice, possibly because of the reduced fat content. On this evidence, E. ciliata may lower blood lipids by inhibiting the expression of genes involved in adipocyte formation, although the specific mechanism needs further study.
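Both sets of lung-index inhibition rates quoted above follow the same simple arithmetic, the relative reduction in lung index versus the infected control group. A minimal sketch verifying the reported figures (dose labels are placeholders for the three groups):

```python
# Inhibition rate = (lung index of infected control - lung index of treated
# group) / lung index of infected control, for both experiments quoted above
def inhibition_rates(control, treated):
    return [(control - t) / control for t in treated]

# MCM total flavonoids (infected control 16.05 mg/g; low/medium/high doses)
print([f"{r:.2%}" for r in inhibition_rates(16.05, [12.81, 11.65, 11.45])])
# -> ['20.19%', '27.41%', '28.66%']  (reported: 20.18%, 27.41%, 28.66%)

# MCM water extract (infected control 1.80%; low/medium/high doses)
print([f"{r:.2%}" for r in inhibition_rates(1.80, [1.21, 1.12, 0.94])])
# -> ['32.78%', '37.78%', '47.78%']  (reported: 32.78%, 37.78%, 47.78%)
```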
Pudziuvelyte et al. extracted essential oils from fresh, lyophilized, and dried E. ciliata herbs. In in vitro experiments, all three essential oils significantly inhibited the proliferation of human glioblastoma (U87), pancreatic cancer (PANC-1), and triple-negative breast cancer (MDA-MB231) cells, with EC50 values ranging from 0.017% to 0.021%; E. ciliata ethanol extract, by contrast, showed no cytotoxicity in this experiment. The antitumor activity of E. ciliata processed by the integrated origin-processing technology versus the traditional cutting technology was evaluated by measuring the effect of the decoction and essential oil on the average optical density of TNF-α in rat lung tissue. The average optical densities for the water decoction and essential oil of traditionally cut E. ciliata were 0.530 ± 0.071 and 0.412 ± 0.038, respectively, and those for the integrated origin-processing technology were 0.459 ± 0.051 and 0.459 ± 0.051, respectively, all increased to varying degrees compared with the blank group (0.299 ± 0.028). In vitro experiments with JXR pectin polysaccharide (MP-A40) showed that MP-A40 affected the proliferation of the human leukemia cell line K562; at a concentration of 500 μg/mL, the inhibition rate was 31.32%. Macrophages can regulate apoptosis by producing NO and other effector molecules. RAW 264.7 macrophages treated with JXR pectin polysaccharide (MP-A40) showed a marked, concentration-dependent increase in NO production; even at an MP-A40 concentration as low as 10 μg/mL, NO production was still 15 times that of the negative control. Mice treated with cyclophosphamide had elevated free-radical levels, increased free-radical damage to immune organs, and decreased thymus and spleen indices. The polysaccharide MP can scavenge free radicals and promote the proliferation of ConA-induced T cells and LPS-induced B cells, alleviating cyclophosphamide-induced immunosuppression to a certain extent. However, the immunomodulatory mechanism of the polysaccharide remains to be clarified. This paper summarizes the pharmacological activities of E. ciliata , among which the antioxidant, anti-inflammatory, antimicrobial, and insecticidal activities are the principal ones; it also shows antiviral, hypolipidemic, hypoglycemic, and antitumor activities. In addition, 352 chemical constituents identified in E. ciliata are summarized; by structural type, they can be divided into flavonoids, phenylpropanoids, terpenoids, alkaloids, and other compounds. According to existing in vivo and in vitro pharmacological results, E. ciliata dichloromethane extract, ethyl acetate extract, and essential oil all show good pharmacological activity, and carvacrol is the main antibacterial active ingredient of E. ciliata . At present, research on the pharmacological activity of E. ciliata focuses mainly on the essential oil, with some work on the alcohol extract, water extract, and polysaccharides, but relatively few studies address activities such as analgesia, immune regulation, and hypoglycemic and hypolipidemic effects. Whether E. ciliata has further pharmacological potential still needs to be tested, and the safety of clinical doses also deserves close attention. Some representative mechanisms of action of E. ciliata are briefly illustrated for reference. The possible action process is shown in the figure.
The mitogen-activated protein kinase (MAPK) signaling cascade consists of three protein kinases, MAP3K–MAP2K–MAPK, which transmit upstream signals to downstream responsive molecules through sequential phosphorylation. The MAPKs comprise four subfamilies: ERK, p38, JNK, and ERK5. MAPK activity is thought to be regulated by the dual phosphorylation sites in the amino acid sequence of the activation loop, which contains a characteristic threonine-x-tyrosine (T-x-Y) motif. MAP kinases are phosphorylated on these two amino acid residues, thereby activating the MAPK pathway, whereas MAP kinase phosphatases (MKPs) hydrolyze the phosphorylated products and inactivate it. The extract inhibited activation of the MAPK signaling pathway by blocking the phosphorylation of p38, JNK, and ERK. When stimulated, tissue cells release arachidonic acid (AA), and cyclooxygenase (COX) catalyzes the conversion of AA into a series of bioactive substances such as prostaglandins (PGs), causing inflammation. The extract can act on the COX-2 pathway by modulating the release of TNF-α, IL-6, and PGE2, key mediators released by macrophages during bacterial infection, thereby exerting an anti-inflammatory effect. Carvacrol can significantly inhibit the mRNA expression of Toll-like receptor 7 (TLR7), interleukin-1 receptor-associated kinase 4 (IRAK4), TNF receptor-associated factor 6 (TRAF6), interferon-β promoter stimulator 1 (IPS-1), and interferon regulatory factor 3 (IRF3) in mice, thereby affecting the TLR7/RLR immunomodulatory signaling pathways and exerting an anti-H1N1 influenza virus effect. As these studies deepen and the mechanisms of action become clearer, conditions are created for the drug to play a better role. Two representative mechanisms of action, MAPK and COX-2, are shown in and . E. ciliata is an abundant resource with modest requirements for its growth environment; it can be cultivated artificially and has a short growth cycle. Its rich essential oil content makes it a candidate flavoring and food additive. The development of new dosage forms of E. ciliata and its application in medicine, food, and other fields will undoubtedly offer broad prospects in the future.
rVSVΔG-ZEBOV-GP Vaccine Is Highly Immunogenic and Efficacious Across a Wide Dose Range in a Nonhuman Primate EBOV Challenge Model
37ad19b9-1859-4132-870c-c42528c4389e
11945660
Vaccination[mh]
Ebolaviruses are members of the Filoviridae family, a group of filamentous, enveloped ribonucleic acid (RNA) viruses, maintained in nature in an enzootic cycle most likely involving fruit bats and in humans through persistent infection [ , , ]. After introduction from the natural cycle, the virus is transmitted by person-to-person contact. The Ebola virus (EBOV), Orthoebolavirus zairense , formerly known as Zaire ebolavirus , has been responsible for the vast majority of human cases of Ebola virus disease (EVD), including the 2014–2016 outbreak in West Africa and multiple subsequent outbreaks, including those in the Democratic Republic of the Congo (2018–2022), Uganda (2022), and Guinea (2021). Initially, patients with EVD exhibit nonspecific symptoms, but severe disease, with profuse diarrhea, hemorrhagic symptoms, and multi-organ dysfunction, can develop quickly, often with high mortality. In a meta-analysis that included 20 outbreaks from 1976 to 2014 as recorded by the World Health Organization (WHO), the case fatality rate for EBOV was 76%. Two monoclonal antibody (mAb) treatments (Inmazeb and Ebanga) were found to reduce EVD-associated mortality and have been approved by the US Food and Drug Administration (FDA) since 2020 [ , , ]. At the time of the 2014–2016 outbreak, there was no approved vaccine or treatment available for EVD. rVSVΔG-ZEBOV-GP is a live attenuated recombinant viral vaccine in which the gene encoding the vesicular stomatitis virus (VSV) glycoprotein G is replaced with the glycoprotein (GP) gene of EBOV. Eight Phase 1 [ , , , , , , , ] and five Phase 2/3 [ , , , , ] clinical trials of rVSVΔG-ZEBOV-GP were conducted during the 2014–2016 outbreak, with more than 18,000 individuals receiving at least one dose of rVSVΔG-ZEBOV-GP. In some cases, long-term follow-up of participants, up to 5 years post-vaccination, extended beyond the outbreak. The vaccine was generally well tolerated in healthy participants [ , , , , , , , , , , , , , , ]. A pivotal Phase 3 study conducted in Guinea using a ring vaccination protocol demonstrated 100% efficacy in individuals ≥10 days after receiving a single vaccination of rVSVΔG-ZEBOV-GP at 2 × 10⁷ pfu. Efficacy and safety data from these trials formed key components of licensure submissions to regulatory agencies including the FDA and the European Medicines Agency (EMA). The rVSVΔG-ZEBOV-GP vaccine (ERVEBO ® ; also known as V920; Merck & Co., Inc., Rahway, NJ, USA) received conditional (November 2019) and full (January 2021) market authorization from the EMA for adults aged ≥18 years to protect against EVD caused by EBOV. The EMA approved an expanded indication for ERVEBO in September 2023 to include individuals aged ≥1 year. In December 2019, the US FDA authorized the vaccine for adults aged ≥18 years, and in August 2023, it expanded the indication to include individuals aged ≥12 months. Since approval of the rVSVΔG-ZEBOV-GP vaccine, an additional Ebola vaccine has been approved by the EMA, but it requires two doses for primary immunization. Preclinical studies conducted prior to the 2014–2016 EBOV outbreak consistently demonstrated the efficacy of rVSVΔG-based vaccines in protecting nonhuman primates (NHPs) against lethal Ebola infections [ , , , ]. These studies evaluated research-grade preparations of the vaccine at doses of approximately 1 × 10⁷ pfu, which were 100% effective in preventing Ebolavirus disease in NHPs when a single dose was delivered about 28 days prior to EBOV challenge.
Other preclinical studies demonstrated a critical role for antibodies in vaccine-induced protection and showed that the rVSVΔG-ZEBOV-GP vaccine was well tolerated in healthy NHPs as well as in simian–human immunodeficiency virus (SHIV)-infected NHPs. Collectively, these preclinical data contributed to the decision to advance the vaccine candidate into Phase 1 clinical trials in late 2014. In parallel with ongoing Phase 1 and Phase 2/3 clinical trials of rVSVΔG-ZEBOV-GP in 2014–2015, the US Department of Defense, in collaboration with Merck & Co., Inc. (Rahway, NJ, USA), conducted two dose-ranging NHP studies with clinical-grade preparations of rVSVΔG-ZEBOV-GP to evaluate the immunogenicity and efficacy of this vaccine at doses similar to those used in the clinical studies. Across these two NHP studies, animals were vaccinated with a wide range of rVSVΔG-ZEBOV-GP doses (1 × 10⁸ to 3 × 10² pfu) with the goal of inducing a range of immune responses, which were used in Study 1 to support dose selection for clinical studies and in Study 2 to inform correlates-of-protection analyses for potential immunobridging between NHP and human responses and efficacy. Ebolavirus envelope glycoprotein (EBOV-GP)-specific antibody responses were assessed using the same validated enzyme-linked immunosorbent assay (ELISA) and 60% plaque reduction neutralization test (PRNT60) methods used in clinical studies. This approach was intended to better align results across species for immunobridging evaluations and to estimate the efficacy of filovirus vaccines in humans in situations where studies to demonstrate protection against disease may not be feasible. Here, we report results from both studies, which, together with clinical safety and efficacy data and additional nonclinical toxicology data, supported the application for regulatory approval and product licensure and served as the foundation for understanding immune responses and protection in a relevant animal model. 2.1. Study Design 2.2. Animals 2.3. Responsiveness Scores 2.4. Antibody Assays 2.5. rVSVΔG-ZEBOV-GP Viremia and EBOV RNA 2.6. Hematology and Clinical Chemistry 2.7. Anatomical Pathology 2.8. Statistical Analysis An overview of the study design of the two immunogenicity and efficacy studies of rVSVΔG-ZEBOV-GP in NHPs, including the vaccination dose regimens, is shown in . Both studies were conducted by the United States Army Medical Research Institute of Infectious Diseases (USAMRIID). Development of the rVSVΔG-ZEBOV-GP vaccine has been thoroughly reviewed, and the product has been described previously. A clinical-grade rVSVΔG-ZEBOV-GP vaccine candidate manufactured by IDT Biologika (lot # 003 05 13 in Study 1 and lot # 001 10 14 in Study 2) was used here. EBOV (strain Kikwit) was isolated from a human case in the Democratic Republic of the Congo in 1995 (USAMRIID challenge stock "R4415") and is the most frequently used EBOV isolate for NHP studies. In Study 1, 27 adult cynomolgus macaques ( Macaca fascicularis ) of Asian origin (age range, 4–7 years; weight range, 3.7–10.6 kg; 13 males, 14 females) were randomized into one of four groups: eight monkeys per group in three experimental groups and three monkeys in a control group. Animals in the three experimental groups were vaccinated by intramuscular (IM) injection (in the deltoid muscle) with 1 × 10⁸, 2 × 10⁷, or 3 × 10⁶ plaque-forming units (pfu) of rVSVΔG-ZEBOV-GP.
In Study 2, 24 adult cynomolgus macaques ( Macaca fascicularis ) of Cambodian origin (age range, 4–11 years; weight range, 3.3–7.8 kg; 11 males, 13 females) were randomized into one of six groups: four or five monkeys per group in five experimental groups and two monkeys in a control group. Animals in the experimental groups were vaccinated by IM injection (in the deltoid muscle) with 3 × 10⁶, 3 × 10⁵, 3 × 10⁴, 3 × 10³, or 3 × 10² pfu of rVSVΔG-ZEBOV-GP. The control groups in both studies received diluent/saline only. In both studies, study personnel were blinded to the animal group assignments. Forty-two days after vaccination, all animals were challenged with EBOV at a target dose of 1000 pfu, administered intramuscularly in 0.5 mL (Study 2) or 1 mL (Study 1) to the right quadriceps of anesthetized animals. Previous studies have shown this dose of EBOV to be uniformly lethal within 5–8 days after challenge. The prepared challenge inoculum was titrated for dose verification using a validated assay. The animals were observed at least once daily, with more frequent observations on days when at least one animal exhibited clinical signs of disease. Physical examinations under anesthesia, including body weight, temperature, and physical examination observations (e.g., observations of the vaccination or challenge injection site, excreta, presence of petechial rash, and lymphadenopathy by palpation), were conducted 4 days prior to the vaccination dose, during the vaccination phase on Days −4, 0, 1–4 (in Study 2 only), 7, 14, 28, and 35 (Study 2) or 36 (Study 1), and during the challenge phase on Days 0, 3, 5, 7, 10, 14, 21, and either at the end of the in-life phase (Days 28–31) and/or at termination (Study 1) or on Day 28 (Study 2). In addition, blood was collected for analysis of hematology and clinical chemistry on Days −4, 0, 1–4 (for Study 2 only), and 7 of the vaccination phase and on Days 0, 3, 5, 7, 10, 14, 21, and 28 of the challenge phase (including coagulation parameters for Study 2). During the vaccination phase (Days 0–10), animals were assigned a responsiveness score from 0 to 3: 0 = active; 1 = decreased activity; 2 = mildly unresponsive (becomes active when approached); and 3 = moderate unresponsiveness (requires prodding), demonstrates weakness. During Days 42–56 (Days 0–14 of the challenge phase), animals were assigned a responsiveness score from 0 to 5: 0 = active; 1 = decreased activity; 2 = mildly unresponsive (becomes active when approached), occasional prostration; 3 = moderate unresponsiveness (may require prodding to respond), weakness; 4 = moderate to severe unresponsiveness, requires prodding, moderate prostration; and 5 = moribund, severe unresponsiveness, pronounced prostration. After Day 56, the number of observations was reduced to twice daily, with the second observation performed at least 6 h later. Anti-EBOV GP immunoglobulin G (IgG) titers were determined using the validated ZEBOV-GP IgG ELISA conducted at Q2 Solutions (San Juan Capistrano, CA, USA), as previously described. Briefly, serum samples were serially diluted and added to microtiter plates that had been coated overnight with purified recombinant ZEBOV-GP (Kikwit strain). After incubation, plates were washed prior to the addition of a horseradish peroxidase-conjugated goat anti-human IgG secondary antibody. Plates were washed again, and 3,3′,5,5′-tetramethylbenzidine (TMB) substrate was added. The enzymatic reaction was stopped with a sulfuric acid solution, and optical density (OD) was measured on an ELISA plate reader. Anti-GP IgG concentrations were calculated from a dilution series of a reference standard consisting of pooled human sera using a 4-parameter logistic (4PL) curve fit.
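To make the 4PL back-calculation step concrete, here is a minimal sketch assuming the standard four-parameter logistic model; the standard-curve values, starting parameters, and sample OD below are illustrative placeholders, not values from the validated assay:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """4PL curve: a = response at zero concentration, d = response at
    infinite concentration, c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical reference-standard dilution series (concentration in EU/mL vs. OD)
conc = np.array([13.6, 40.0, 120.0, 360.0, 1080.0, 3240.0])
od = np.array([0.08, 0.21, 0.55, 1.10, 1.65, 1.95])

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 2.0, 300.0, 1.0])
a, d, c, b = params

def back_calculate(od_sample):
    """Invert the fitted 4PL to recover a concentration from a sample OD."""
    return c * ((a - d) / (od_sample - d) - 1.0) ** (1.0 / b)

# A sample OD of 0.90 maps to an interpolated concentration; the result is
# multiplied by the sample's dilution factor to report the final EU/mL value
print(f"{back_calculate(0.90):.1f} EU/mL (x dilution factor)")
```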
GP-ELISA antibody titers were reported in GP-ELISA units per milliliter (EU/mL). The assay was validated for the analysis of both human and NHP samples based on in-depth parallelism assessments and was demonstrated to have a lower limit of quantitation (LLOQ) of 13.62 EU/mL at Q2 Solutions. Seroresponse for GP-ELISA was assessed in two ways: a ≥2-fold increase from baseline reaching 200 EU/mL or higher, or a ≥4-fold increase from baseline. A post-challenge ELISA was conducted in Study 2 under level 4 biocontainment at USAMRIID using a non-validated ELISA ( ). Prior to challenge, virus-neutralizing antibody titers on Days −4, 0 (in Study 2 only), 7, 14, 28, and 35 (Study 2) or 36 (Study 1) of the vaccination phase were also measured at biosafety level (BSL)-2 in a validated assay based on neutralization of the vaccine virus, as previously described. In brief, serially diluted serum samples were mixed with rVSVΔG-ZEBOV-GP and incubated for 20 h to allow neutralization prior to adding to Vero cells. A methylcellulose overlay was added an hour later, and infected cells were incubated for 2 days before plaques were visualized using a crystal violet stain and counted. Determination of the 60% neutralizing titer (PRNT60) is based on the percent reduction in viral plaques in the presence of serum compared with the virus control without serum and is calculated by linear regression. This vaccine virus-based PRNT was validated for the analysis of both human and NHP samples and was conducted at Q2 Solutions. The PRNT60 lower limit of detection is 20, and the LLOQ is 35. For collection of interim data for informational purposes, pre-challenge anti-EBOV GP IgG titers and pseudovirion neutralization assay (PsVNA) titers were first determined in both studies at USAMRIID using non-validated research-grade methods ( ). In Study 2, post-vaccination plasma levels of the VSV-ZEBOV vaccine virus were measured at Q2 Solutions using a quantitative RT-PCR (qRT-PCR) assay, as previously described. This assay is qualified for testing human plasma samples but is not species-specific; assay performance is therefore expected to be similar with NHP plasma samples. For the assay method used at the time of these NHP studies, the lower limit of detection was 62.5 copies/mL, and the LLOQ was 156.25 copies/mL. Wild-type EBOV RNA in plasma post-challenge was measured at USAMRIID for both studies using a validated qRT-PCR method, as previously described. Hematology parameters were determined from blood samples collected in tubes containing EDTA using the ADVIA 120 Hematology System (Siemens Healthineers, Erlangen, Germany). Serum chemistry analytes were measured using a Piccolo Xpress analyzer (Abaxis, Union City, CA, USA) (Study 1) or a Vitros ® 350 Chemistry System (Ortho Clinical Diagnostics, Raritan, NJ, USA) (Study 2). In Study 2, coagulation parameters were analyzed using the Sysmex CA-1500 instrument (Siemens Healthineers, Erlangen, Germany). Upon death or euthanasia, necropsies were conducted under BSL-4 containment. Tissues collected for histopathology were fixed in 10% neutral buffered formalin. After a minimum of 21 days of formalin fixation, the tissue samples were trimmed, routinely processed, embedded in paraffin, and sectioned at 5 µm for histology. The histology slides were deparaffinized and stained with hematoxylin and eosin.
In addition, immunohistochemistry for detection of EBOV antigen post-challenge was performed on collected tissues, including samples of the lung, liver, spleen, kidney, and inguinal lymph nodes, using an Envision-PO kit (Agilent, Santa Clara, CA, USA). For immunohistochemistry analysis, a mouse monoclonal anti-EBOV antibody (USAMRIID #702/703) was used at a dilution of 1:8000. After deparaffinization and peroxidase blocking, sections were covered with the primary antibody and incubated at room temperature for 30 min. The sections were rinsed, a secondary anti-mouse IgG antibody (peroxidase-labeled polymer) was applied for 30 min, the sections were rinsed again, a substrate–chromogen solution was applied for 5 min, and the sections were rinsed and then stained with hematoxylin. In Study 2, in situ hybridization (ISH) was performed on the eye tissue of all animals using the ViewRNA™ ISH Tissue Assay kit (ThermoFisher Scientific, Waltham, MA, USA). For this assay, 20 ZZ probes (18–25-base oligonucleotide pairs complementary to the target RNA) against EBOV NP were designed and synthesized. After deparaffinization and peroxidase blocking, eye sections were covered with the ISH probes, incubated at 40 °C in a hybridization oven for 2 h, and rinsed; the ISH signal was amplified by applying Pre-amplifier and Amplifier conjugated with alkaline phosphatase, and a fast red substrate–chromogen solution was then applied for 10 min at room temperature. Slides were counterstained with hematoxylin and evaluated by a veterinary pathologist. For both studies, the primary analyses estimated geometric mean titers (GMTs) of anti-EBOV GP IgG using the validated ZEBOV-rGP ELISA and of EBOV GP-specific neutralizing antibodies using the validated rVSVΔG-ZEBOV-GP PRNT60 across all treatment groups on all study days tested. GMT estimates, standard deviations (SDs), and 95% confidence intervals (CIs) were based on an analysis of variance (ANOVA) model including treatment group as a categorical covariate. The ANOVA model was fit separately for each study day tested. Other analyses included estimation of geometric mean ratios (GMRs) of ZEBOV-rGP ELISA and rVSVΔG-ZEBOV-GP PRNT60 between all treatment groups compared pairwise, with GMR estimates and 95% CIs based on the same ANOVA model fit separately for each study day. A test of linear trend for vaccine dose was performed by day, both including and excluding the placebo group, using the SAS (Version 9.4) CONTRAST statement, with p values based on the same ANOVA model. Since these were both estimation studies, no multiplicity adjustments were made. An integrated analysis of Study 1 and Study 2 was performed. The relationships between survival and dose or between survival and titer response were assessed using a Cox proportional hazards model and logistic regression, where survival was the outcome and dose or titer response was the predictor. Survival analyses were performed using SAS Version 9.4. A post hoc analysis was conducted to evaluate a predictive threshold of immune response above which animals are protected. An overall vaccine group was formed by combining all NHPs in the non-zero vaccine dose groups (N = 45), and a placebo control group was composed of all NHPs in the zero-dose groups (N = 5). Data from the same assay on the same relative study day were pooled across the two studies by dose group. A logistic regression model was used to model the risk of death from EBOV infection in NHPs as a continuous function of immune response.
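As a minimal illustration of the GMT estimation just described (log-transform, ANOVA with treatment group as a categorical covariate, back-transformation), the following Python sketch reproduces the idea; the original analyses were run in SAS, and the group labels and titer values here are synthetic placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic titers for one study day: two vaccine groups plus placebo
df = pd.DataFrame({
    "group": ["high"] * 4 + ["low"] * 4 + ["placebo"] * 2,
    "titer": [21000, 18500, 30000, 15500, 9500, 14000, 22000, 26000, 14, 14],
})
df["log_titer"] = np.log10(df["titer"])

# ANOVA/OLS model with treatment group as a categorical covariate
model = smf.ols("log_titer ~ C(group)", data=df).fit()

# Back-transform group means and model-based 95% CIs to the titer scale (GMTs)
tcrit = stats.t.ppf(0.975, df=model.df_resid)
for grp, sub in df.groupby("group"):
    m = sub["log_titer"].mean()
    se = np.sqrt(model.mse_resid / len(sub))  # pooled residual variance
    print(f"{grp}: GMT={10**m:,.0f}, "
          f"95% CI=({10**(m - tcrit*se):,.0f}, {10**(m + tcrit*se):,.0f})")
```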
3.1. Observations After Vaccination 3.2. Antibody Response After Vaccination 3.3. rVSVΔG-ZEBOV-GP Vaccine-Associated Viremia 3.4. Observations After Challenge 3.5. Statistical Analysis to Predict a Correlate of Protection Threshold 3.6. Antibody Responses After Challenge 3.7. EBOV Viremia After Challenge 3.8. Pathology After Challenge 3.9. Anatomic Pathology After Challenge In both studies, no signs of illness or Draize reaction were observed as a result of vaccination. In Study 1, most animals gained weight after vaccination, and most had a slight decrease in body temperature noted on Day 7, although body temperatures commonly fluctuated. There were no clear hematology or clinical chemistry changes. On Day 28, one animal in the 2 × 10⁷ pfu group (Study 1) had a low body temperature and a distended abdomen. The animal's condition rapidly deteriorated despite repeated treatment to relieve gas in the stomach, and it was ultimately euthanized on Day 28 due to gastric bloat, as determined by a veterinary pathologist. In Study 2, average body weight was stable, there was no observed pattern of febrile reaction, and there were no remarkable hematology or clinical chemistry changes. Vaccinated animals at all dose levels in both studies developed robust EBOV-GP-specific IgG binding ELISA titers as measured in the validated assay, with responses detectable in some animals starting at Day 7 post-vaccination, detectable in all vaccinated animals by Day 14, and plateauing at 28 to 35 days post-vaccination ( A,B). Day 28 geometric mean ELISA responses in the vaccinated animals ranged from 15,610 to 26,097 EU/mL. A test of linear trend showed a significant vaccine dose trend on Days 7, 14, 28, and 35 (p < 0.0001), driven primarily by low ELISA titers for placebo animals. When placebo animals were excluded, this trend was observed only on Day 7 (p < 0.0001). Univariate Cox proportional hazards models performed on dose and on the ELISA titer on Day 35 showed no effect of dose (p = 0.3112), but higher ELISA titers on Day 35 were associated with a lower hazard ratio for death (p = 0.0262). Similarly, univariate logistic regression analyses performed on dose and on the ELISA titer on Day 35 showed no effect of dose (p = 0.2908), but higher ELISA titers on Day 35 were associated with a lower odds ratio for death (p = 0.0127). In each of the 44 surviving NHPs (out of 45 in the vaccinated groups), the Day 28 GP-ELISA response was 4540 EU/mL or higher, representing a 300× or higher rise over prevaccination; among the six non-survivors (5/5 in the placebo group and 1/45 in the vaccinated group), the Day 28 GP-ELISA response was 14 EU/mL (a 1× rise) in the placebo group (n = 5) and 14,226 EU/mL (a >1000× rise) in the vaccinated group (n = 1). Importantly, at all doses tested in both studies, vaccinated animals developed robust EBOV-GP-specific neutralizing antibody titers as measured in the validated assay, with responses detectable in some animals starting at 7 days post-vaccination and in all vaccinated animals by Day 28 post-vaccination ( C,D). PRNT60 titers remained elevated, with responses plateauing at 28 to 35 days post-vaccination. Day 28 geometric mean PRNT60 titers in the vaccinated animals across both studies ranged from 1084 to 4566, and a test of linear trend for vaccine dose was driven primarily by low PRNT titers for placebo animals. When placebo animals were excluded, the trend was observed on Days 7 and 14 (p < 0.0076), suggesting that higher vaccine doses are associated with higher titers at early time points post-vaccination.
Univariate Cox proportional hazards models performed on dose and on the PRNT titer on Day 35 showed no effect of dose (p = 0.3112) or of the PRNT titer on Day 35 (p = 0.1063) on the hazard ratio for death. Similarly, univariate logistic regression analyses performed on dose and on the PRNT titer on Day 35 showed no effect of dose (p = 0.2908) or of the PRNT titer on Day 35 (p = 0.0624) on the odds ratio for death. In each of the 43 surviving NHPs with available PRNT60 data (44/45 survived in the vaccinated groups), the Day 28 PRNT response was 156 or higher, representing an 8× or higher rise over prevaccination; among the non-survivors, the Day 28 PRNT response was 20 (the assay lower limit of detection), representing a 1× rise, in the placebo group (n = 5) and 531 (a 27× rise) in the vaccinated group (n = 1). Non-validated research-grade assays were initially used to collect interim data on anti-EBOV GP IgG titers and on neutralizing antibody titers as measured by PsVNA. Results from the non-validated ELISA for both studies and results from the PsVNA assay are shown in . Vaccinated animals in Study 2 had peak rVSVΔG-ZEBOV-GP viremia on Days 2–3 post-vaccination ( ). Animals vaccinated with higher doses had higher levels of vaccine viremia, which were detectable earlier (Day 1) than in animals vaccinated with lower doses. These levels on Day 1 post-vaccination ranged between 1189 and 30,227, 380 and 12,305, and 69 and 1219 copies/mL in the 3 × 10⁶, 3 × 10⁵, and 3 × 10⁴ pfu groups, respectively. By Day 2 post-vaccination, vaccine-associated viremia was detected in all animals, with levels ranging between 1379 and 39,229, 3036 and 35,637, 2854 and 12,305, 35 and 3057, and 1 and 421 copies/mL in the 3 × 10⁶, 3 × 10⁵, 3 × 10⁴, 3 × 10³, and 3 × 10² pfu groups, respectively. Most vaccine-associated viremia was undetectable by Day 7. Statistical analysis was not conducted. Detectable levels of vaccine viremia were variable across the dose groups and among the animals within the dose groups. Forty-two days after vaccination, animals were challenged with EBOV Kikwit via IM injection (Day 0 post-challenge). In Study 1, 26 animals were challenged with an actual viral dose of 358 pfu, as verified by plaque titration of the challenge inoculum. All three animals (100%) in the control group succumbed to EBOV on Day 7 post-challenge, and one animal in the 3 × 10⁶ pfu vaccine group succumbed on Day 9 post-challenge ( ); all other animals (22/26) survived until the end of the in-life phase, Days 28–31 post-challenge ( A). The four animals that succumbed in Study 1 became clinically ill (defined as having a responsiveness score of at least '1') on Day 6 post-challenge, with reduced activity/responsiveness and developing rash, anorexia, and other symptoms consistent with EVD. Among the 22 animals that survived to the end of Study 1, four (two in the 1 × 10⁸ pfu group, one in the 2 × 10⁷ pfu group, and one in the 3 × 10⁶ pfu vaccine group) had a score of '1' on Day 7 post-challenge, but all returned to normal health (a score of '0') before Day 9 post-challenge; none of these animals presented with a rash. Clinical observations included lymphadenopathy (13/22 animals; typically noted in the axillary lymph node), challenge-site reaction (7/22), anorexia (7/22), and abnormal stool (6/22); however, most of these observations lasted only a day to a few days. Most vaccinated animals in the study gained weight during the challenge phase; a few animals had sporadic slight decreases in body weight.
A majority of the animals had a slight increase in body temperature between Days 3 and 7 post-challenge. Nine of the twenty-six animals exhibited fever (≥39.5 °C); four of those animals succumbed to EBOV, while five survived (one in the 1 × 10⁸ pfu group, one in the 2 × 10⁷ pfu group, and two in the 3 × 10⁶ pfu group). In Study 2, 24 animals were challenged with an actual viral dose of 645 pfu, as verified by plaque titration of the challenge inoculum. The two animals in the control group succumbed on Day 7 post-challenge; all other animals survived to at least 28 days post-challenge ( B). Both animals in the control group became ill on Day 6 post-challenge, exhibiting responsiveness scores of up to '2' by the end of Day 6, with no urine or stool output, hemorrhage, reduced biscuit consumption, and rash from Day 5 post-challenge. One control animal was observed scoring a '4' early on Day 7, meeting the criteria for euthanasia first thing that morning, while the other control animal developed a score of '3' by the end of Day 7, meeting the euthanasia criteria at day's end. Among the animals that survived until the end of the study, five of the twenty-two became ill from Days 6 to 8 post-challenge: one in the 3 × 10⁶ pfu group (responsiveness scores of '1-2-1' over 3 days), one in the 3 × 10⁵ pfu group (a score of '1' for 3 consecutive days), one in the 3 × 10³ pfu group (a score of '1' for 1 day), and two in the 3 × 10² pfu group (one scoring '1' for 4 days and the other '1' to '2' for 6 days), with signs of illness including reduced activity/depressed behavior, fevers, and/or petechial rashes consistent with EVD. These animals all recovered by Day 13 post-challenge. Sixteen of the twenty-two vaccinated animals (73%) showed some level of quadriceps muscle swelling at the site of challenge, but none of these animals had responsiveness scores above '0' and thus showed no signs of clinical illness. Most animals had sporadic slight decreases in body weight, with all animals in the lowest rVSVΔG-ZEBOV-GP dose group showing weight loss (<10%). Temperature elevations were observed during the post-challenge phase of the study. Six of the twenty-four animals exhibited fever; the two in the control group succumbed to EBOV, while four survived (one in each of the 3 × 10⁶, 3 × 10⁵, 3 × 10³, and 3 × 10² pfu groups). In a post hoc analysis integrating the two studies, the risk of death from EBOV infection was modeled as a continuous function of immune response. The Day 28 ELISA cutoff for a 50% probability of survival (C) was 978 EU/mL, and the Day 28 PRNT cutoff C was 132 titer units. The analysis was repeated with C and with the maximum of C and 4× the LLOQ to account for assay variability. The bootstrap mean and 95% CI were calculated by resampling the NHP data 1000× while maintaining the original rVSVΔG-ZEBOV-GP/placebo ratio ( ).
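The threshold and stratified-bootstrap computations just described can be sketched as follows; this is a minimal illustration under assumed synthetic data (the titers, outcomes, and the resulting cutoff are placeholders, not study values), not the study's SAS implementation:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Synthetic cohort: 5 placebo animals (low titers, all died) and 45 vaccinated
# animals, with two hypothetical deaths among the lowest vaccinated titers so
# the logistic fit is well defined (no complete separation)
log_titer = np.array([1.0, 1.1, 1.1, 1.2, 1.3] + list(np.linspace(2.0, 5.0, 45)))
died = np.zeros(50, dtype=int)
died[:5] = 1           # placebo deaths
died[[5, 7]] = 1       # two low-titer vaccinated deaths
is_placebo = np.arange(50) < 5

def cutoff_50(lt, y):
    """Fit logit P(death) = b0 + b1*log10(titer); return titer C at P = 0.5."""
    fit = sm.Logit(y, sm.add_constant(lt)).fit(disp=0)
    b0, b1 = fit.params
    return 10 ** (-b0 / b1)

print("Point estimate C:", round(cutoff_50(log_titer, died), 1))

# Stratified bootstrap: resample 1000x, preserving the vaccinated/placebo ratio
idx_p, idx_v = np.where(is_placebo)[0], np.where(~is_placebo)[0]
cuts = []
for _ in range(1000):
    take = np.concatenate([rng.choice(idx_p, idx_p.size, replace=True),
                           rng.choice(idx_v, idx_v.size, replace=True)])
    try:
        cuts.append(cutoff_50(log_titer[take], died[take]))
    except Exception:
        pass  # skip resamples where the fit fails (e.g., complete separation)
print("Bootstrap 95% CI:", np.percentile(cuts, [2.5, 97.5]).round(1))
```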
To investigate vaccine-induced development of immunological memory, the EBOV-specific recall response was evaluated using PsVNA in Study 2 post-challenge in a subset of eight animals (six vaccinated and two control animals), selected based on the range of clinical responses, the overall health of the animal, and other criteria ( ). EBOV-specific IgG titers, as assessed by the USAMRIID non-validated ELISA, were elevated after viral challenge on Day 42 for all vaccinated animals, reaching a peak mean titer of 113,707.5 by 2 weeks post-challenge. By contrast, EBOV-specific IgG titers were not detected in the control (non-vaccinated) group by 1 week post-challenge ( ). In addition, all six vaccinated animals that had neutralizing antibody responses measured using PsVNA responded to the EBOV challenge by producing higher levels of serum neutralizing antibodies (GMT range of 10,103 to 225,863 PsVNA50 by Day 28 post-challenge) than those detected prior to challenge ( ). The two unvaccinated control animals did not have neutralizing antibody levels above baseline values on Day 5 post-challenge, but a response was detected by Day 7 post-challenge (GMT range 3174 to 3485), most likely the neutralizing antibody response to native viral infection. Some vaccinated animals in both studies had detectable EBOV viremia post-challenge, but at levels significantly lower than those of non-vaccinated controls, and all survivors cleared the viremia. In Study 1, for the majority of vaccinated NHPs, EBOV viremia was detected but was below the LLOQ of the qRT-PCR assay ( A). By Day 5 post-challenge, all control animals had a quantifiable viral load. By Day 7 post-challenge, only four of seven vaccinated animals had a quantifiable viral load. On Day 10 post-challenge, only one animal in the 2 × 10⁷ pfu group still had a quantifiable level of circulating viral genome. The vaccinated animal in the 3 × 10⁶ group that succumbed to the EBOV challenge had quantifiable viremia on Day 7 post-challenge, peaking at levels similar to those reached by unvaccinated controls. In Study 2, only five of the twenty-two vaccinated animals had virus above the LLOQ of the assay, with viral loads much lower than the levels observed in the control animals ( B). Both control animals had a quantifiable viral load by Day 5 post-challenge. By Day 5 post-challenge, three vaccinated animals, all in the 3 × 10² pfu group, had a quantifiable viral load. On Days 7 and 10 post-challenge, two animals (one in each of the 3 × 10⁶ and 3 × 10⁴ pfu groups) and one animal in the 3 × 10² pfu group, respectively, had a quantifiable viral load. Hematology and clinical chemistry in both studies, and coagulation in Study 2 ( ), showed patterns of change consistent with EBOV infection in the animals that succumbed, as well as in a few of the other animals, depending on the level of infection. In Study 1, between Days 3 and 5 post-challenge, most animals experienced a slight decrease in lymphocyte and platelet counts, particularly notable in the animals that succumbed to EBOV. A general trend of increasing lymphocytes was observed in all three vaccine groups from Day 7 to Day 21 post-challenge. Also common among the vaccinated animals was a sharp reduction in circulating lymphocytes between Day 21 post-challenge and the end of the in-life phase, approximately 1 week later, with a return to normal levels. Increased platelet values were also noted in several animals in all vaccinated groups, generally on Days 10 and/or 14 post-challenge. In Study 2, lymphocyte counts decreased in the control group on Day 5 post-challenge and rebounded on Day 7 post-challenge. Lymphocyte counts were, on average, elevated in the 3 × 10⁶ and 3 × 10⁵ pfu groups across most days post-challenge relative to baseline and to the lower dose groups. Lymphocyte counts were elevated across all dose groups on Days 14 and 21 post-challenge and remained elevated until Day 28 post-challenge. Platelet values were increased, on average, in all vaccinated groups on Day 14 post-challenge.
Typical EBOV infection results in increases in circulating aspartate aminotransferase (AST) concentrations. In both studies, post-challenge AST levels were comparatively normal in the vaccine groups, with slightly elevated levels in some animals, whereas levels in the control groups were substantially elevated. The maximum level of circulating AST in the vaccine groups (290 U/L) was observed in the 3 × 10⁶ pfu group on Day 10 post-challenge; in comparison, the control group had AST levels approaching 1500 U/L on Day 7 post-challenge. In Study 2, all of the animals in the vaccinated groups had comparatively normal AST levels, whereas levels were elevated in the control animals. Ebolavirus infections are also often associated with dramatic increases in circulating alkaline phosphatase (ALP) due to the massive amount of viral replication in the liver and the resulting hepatocellular necrosis. Increased ALP values were particularly prominent in the animals that succumbed to EBOV in Study 1 (one animal in the 3 × 10⁶ pfu group and all three control animals) and in the two control animals in Study 2 (1.5-fold ALP elevations on Day 47, with >10-fold elevations in ALP by Day 7 post-challenge (Day 49)). The animals that succumbed to EBOV exhibited gross, histologic, and immunohistochemical changes consistent with IM EBOV infection in NHPs. The principal tissues affected were the spleen, liver, kidney, lungs, and peripheral lymph nodes. Although the pathology lesions were similar among the non-survivors in Study 1 (n = 4), the histologic lesions in the single vaccinated animal that died (in the 3 × 10⁶ pfu group) were slightly more widespread in distribution and severity (albeit subjectively) than the lesions in the control animals. This could be because it survived slightly longer than any of the control animals (succumbing on Day 9 vs. Day 7) or because the vaccination may have influenced the course of disease (i.e., delayed onset and/or exacerbation of the disease). For the animals that survived to termination, there were no gross necropsy findings or histologic lesions indicative of active EBOV infection, with the exception of two animals in Study 2 that had evidence of active residual infection, with viral antigen detected in the quadriceps muscle at the inoculation site (both animals) and in the eye (one animal). Immunohistochemistry analyses also showed immunoreactivity to Ebolavirus antigen in the muscle and eyes of these two animals in Study 2 and weak/mild immunoreactivity in the inguinal lymph nodes of two animals in Study 1 (one each in the 2 × 10⁷ pfu group and the 3 × 10⁶ pfu group, with very weak extracellular staining in a few germinal centers of the spleen in the latter animal). In situ hybridization analysis of the eyes in Study 2 detected viral RNA in the eyes of three animals: in the choroid and uveal layers in both non-survivors and in the vitreous chamber adjacent to the inner limiting membrane in one survivor (in the 3 × 10² pfu group). At the time these studies were conducted, the 2014–2016 Western Africa Ebola outbreak was at its height. These studies, which used the identical clinical-grade vaccine product, vaccination regimen, and vaccine doses as the human clinical trials, were used to justify the human dose, and the resulting data packages were submitted for regulatory agency review in support of product licensure.
Testing with the same validated assays as were used to evaluate human immune responses showed that vaccination of NHPs with rVSVΔG-ZEBOV-GP resulted in high levels of EBOV-specific IgG measured by ELISA and high neutralizing antibody titers, with no dose-dependent response in either study across dose groups ranging from 1 × 10⁸ to 3 × 10² pfu when evaluated after antibody responses were fully developed in all animals (after Day 7 by ELISA or after Day 14 by PRNT). Our findings demonstrated a correlation between EBOV-specific IgG titers and neutralizing antibody titers in vaccinated animals. In addition, there was more variability in the magnitude of responses among animals at lower vaccine doses, as also observed in human Phase 1 dose-ranging studies [ , , , , , , ]. The lack of a dose response at later timepoints in our studies was likely due to the replicating nature of the live attenuated vaccine, with all except one vaccinated animal fully protected against death at the dose received. Antibodies appear to have a critical role in rVSV-mediated protection against EBOV, as shown in the study by Marzi et al., in which depletion of CD8+ T cells had no impact on survival, whereas depletion of CD4+ T cells during vaccination with rVSVΔG-ZEBOV-GP in macaques resulted in no IgG response and no protection from subsequent EBOV challenge. Robust immunological responses have been demonstrated in clinical trials in healthy adults, although with dose-dependent responses observed in these clinical trials using similar vaccine doses [ , , , , ]. A study using an rVSV-based vaccine expressing the EBOV-Makona GP in the NHP model also titrated the vaccine and observed complete protection from challenge with 1000 LD50s of the Makona strain of Ebola virus at vaccine doses ranging from 1 × 10¹ to 1 × 10⁷ pfu, with no dose-dependent effects on IgG titers or neutralizing responses in surviving animals (although lower doses of 1 × 10⁰ and 1 × 10⁻¹ pfu did not provide complete protection). Interestingly, an NHP study with rVSVΔG-ZEBOV-GP observed higher soluble glycoprotein-binding antibody levels and higher GP-ELISA titers in animals that survived EBOV challenge. The results of our NHP studies indicate that, although there may be differences in the kinetics and/or consistency of the immune responses associated with lower doses of vaccine, the replicating vaccine is able to protect NHPs across a very wide dose range. In humans, however, dose may have a greater impact, as differences in immune responses are detected even at Day 28 post-vaccination in some studies. The difference in immune responses between humans and macaques warrants further investigation. Across these two NHP studies, 44/45 EBOV-challenged animals survived (97.8% survival) after immunization with the subsequently licensed clinical-grade rVSVΔG-ZEBOV-GP vaccine across a wide range of dose groups. In Study 1, all animals in the 1 × 10⁸ and 2 × 10⁷ pfu groups, and seven of eight animals in the 3 × 10⁶ pfu vaccine dose group, survived challenge to the end of the study. The one non-surviving vaccinated animal succumbed on Day 9 post-challenge, although it had a robust IgG titer prior to challenge, similar to that of vaccinated survivors. In addition, one animal in the 2 × 10⁷ pfu group was humanely euthanized during the vaccination phase due to gastric bloat (which occurs sporadically in captive NHPs in preclinical studies).
Based on the Study 1 results, the dose de-escalation design of Study 2 was implemented to evaluate the possibility of determining a breakthrough vaccine dose in NHPs, which might yield a suboptimally vaccinated animal group allowing the study of correlates of immune protection. However, in Study 2, all of the vaccinated animals survived, and even the lowest vaccine dose, 3 × 10² pfu, was fully protective against death (but not disease) in this challenge model. Previous NHP studies evaluating the research-grade rVSVΔG-ZEBOV-GP vaccine at a single dose level similar to that used in Study 1, approximately 1 × 10⁷ pfu, showed similar efficacy, with 100% effectiveness when delivered >14 days prior to challenge with EBOV [ , , , ]. The 2 × 10⁷ pfu dose used in Study 1 was selected for Phase 2/3 trials based in part on the NHP data in Study 1 as well as the Phase 1 trials conducted in parallel. Human clinical efficacy of the vaccine administered at a dose of 2 × 10⁷ pfu was demonstrated in the Phase 3 ring vaccination trial (Ebola Ça Suffit) conducted in Guinea in early 2015. Integrated analysis of both NHP studies, including controls, demonstrated that higher ELISA titers on Day 35 post-vaccination were associated with a lower odds ratio for death upon challenge on Day 42. With robust antibody responses at all doses and so few non-survivors, a threshold level of antibody response that predicts survival or time to death could not be determined from the initial analysis of these data, in which survival was the outcome and dose or titer response was the predictor in a Cox proportional hazards model and logistic regression. A subsequent post hoc analysis using logistic regression to model the risk of death from EBOV infection in NHPs as a continuous function of immune response indicated that the Day 28 cutoff for a 50% probability of survival was 978 EU/mL for ELISA and 132 titer units for PRNT. Antibody titers in NHPs are notably higher than those in humans receiving the same or a similar dose, yet efficacy is similarly high in both species, suggesting that antibody titers below these thresholds are likely protective in humans. Application of the threshold previously reported for human ELISA responses that best distinguishes between vaccine and placebo recipients (at least 200 EU/mL and at least a 2-fold increase over baseline) is strongly associated with protection from EBOV challenge in these NHP studies. This is consistent with a recent post hoc analysis of data from the rVSVΔG-ZEBOV-GP Phase 2/3 clinical trials. Future studies and analyses exploring other immune response measures (such as Fc effector function, antibody affinity, and epitope specificity) may provide additional insights into a potential immune correlate threshold of protection. Few, if any, signs of EBOV disease were observed in any of the vaccinated animals post-challenge, but a trend toward more viremia and signs of disease was noted at lower doses despite overall survival; post-challenge injection-site reactions (quadriceps muscle swelling, likely reflecting an immune response that contained EBOV at the site of challenge rather than disseminated EVD) occurred in seven of twenty-two animals in Study 1 and sixteen of twenty-two animals in Study 2. In Study 2, all of the vaccinated animals had similarly high anti-EBOV GP antibody titers post-challenge, and the six vaccinated animals tested for neutralizing antibodies all had higher antibody levels post-challenge.
This was consistent with robust anamnestic responses following challenge and demonstrated that the vaccine did not induce sterilizing immunity but was able to contain the virus and prevent disease and death. For the majority of vaccinated animals across both studies, EBOV viral RNA after challenge was below the LLOQ of the qRT-PCR assay. The vaccinated animal that succumbed to EBOV challenge in Study 1 had quantifiable viremia on Day 7 post-challenge, peaking at levels similar to those of unvaccinated controls on Day 5 post-challenge. Only eight of the forty-three vaccinated animals that survived had virus above the lower limit of quantitation, all at much lower levels than observed in the control animals. Many of the remaining animals had transient, low levels of detectable EBOV RNA below the lower limit of quantitation of the assay. There was no clear vaccine dose relationship to EBOV replication levels. Pathology parameters showed patterns of change consistent with EBOV infection in the non-survivors and in a few of the vaccinated animals with higher levels of infection. Among the non-survivors that succumbed to EBOV, the observed gross, histologic, and immunohistochemical changes were also consistent with IM EBOV infection in NHPs. A possible explanation for the slightly more widespread and severe histologic lesions in the one vaccinated non-survivor may be that this animal survived slightly longer than any of the control animals or that the vaccination may have influenced the course of the disease (e.g., delayed onset). Two vaccinated animals in Study 2 had evidence of viral antigen in the eye or in the muscle at the virus inoculation site, but plasma viremias had resolved in these survivors. Long-term persistence of EBOV after infection has been documented in immune-privileged organs of human survivors [ , , , , ]. In summary, vaccination with rVSVΔG-ZEBOV-GP induced high levels of EBOV-specific IgG and neutralizing antibody titers, with no dose response observed after Day 14 post-vaccination. Notably, the identical validated assays used in clinical trials of the now-licensed vaccine were employed in these evaluations. Vaccination with rVSVΔG-ZEBOV-GP conferred 97.8% protection against lethal EBOV challenge, with 44/45 challenged animals surviving across dose groups ranging from as low as 3 × 10² up to 1 × 10⁸ pfu in the two studies. All control animals succumbed on Day 7, while only one vaccinated animal succumbed, on Day 9 post-challenge. Although a specific antibody threshold predicting survival or time to death could not be identified from the initial analysis, the post hoc analysis suggested that lower antibody titers may still be protective (an ELISA titer of at least 200 EU/mL and at least a 2-fold increase over baseline is strongly associated with protection). These findings align with clinical trial data indicating correlates of protection in humans and reinforce the robust immune response and high level of protection conferred by rVSVΔG-ZEBOV-GP vaccination.
Medical empathy in family and community medicine residents and tutors: the professional's and the patient's view
8e0136dd-4ea4-4979-9619-b4ff17aea256
7063160
Family Medicine[mh]
In the context of patient care, empathy is a predominantly cognitive (not merely emotional) attribute that involves an understanding (not merely a feeling) of the patient's experiences, concerns, and perspectives, combined with the capacity to communicate this understanding, bearing in mind that the two components of empathy (cognition and emotion) are not completely independent. Bylund and Makoul stress the importance of communicating this understanding to the patient; otherwise, empathic physicians might not be recognized as such by their own patients. This mutual understanding is the basis of a relationship that benefits both parties, not only in terms of patient and professional satisfaction, but also in terms of better clinical competence of the physician, greater adherence to treatment, and better health outcomes. Hojat et al. developed a specific, valid, and reliable instrument for measuring the degree of empathy, the Jefferson Scale of Physician Empathy, in versions for medical students (JSE-S), health sciences students (JSE-PS), and health professionals (JSE-HP). Since its creation, multiple studies have corroborated its validity and reliability. It has been translated, culturally adapted, and validated in more than 55 languages/dialects and has been used in at least 74 different countries. In Spain, the JSE-HP has been validated in Spanish, both in health professionals and in students participating in Early Clinical Immersion programs. Several studies have described a downward trend in empathy scores as medical studies progress (especially from the start of the clinical training period) and during subsequent specialization. Emotional vulnerability, burnout, depression, work overload, and poor quality of life have been suggested as possible etiological factors. Hojat et al. add the lack of good mentors, time pressure, insufficient rest, factors arising from patients' demands, and low recognition. Although the deterioration of empathy among medical students and residents in training in the Anglo-Saxon setting has been amply demonstrated, several authors argue that this is not so clear in other countries and suggest that future research should take into account cultural and curricular differences and those arising from different healthcare systems. Two studies have been published in Spain with primary care and hospital residents analyzing the cultural factors that could influence the differences in the degree of empathy found between Spanish and Latin American residents. One possible limitation of the JSE-HP could be that it measures the empathy self-perceived by the professional rather than the professional's empathic behaviors. To analyze professionals' empathy from the patients' point of view, the Jefferson Scale of Patient's Perceptions of Physician Empathy (EPPEMJ, for its Spanish acronym) is available. Some studies have demonstrated a relationship between JSE-HP and EPPEMJ scores, which lends validity to both scales. All of them were conducted outside our setting.
The present study aims to analyse the degree of empathy of family medicine residents and tutors of a multiprofessional teaching unit in Madrid and its relationship with age, sex, years of residency, professional experience and place of origin. It also seeks to determine whether there is a relationship between physicians' self-perceived empathy and the empathy rated by their patients.
Design
Participants
Instrument and measurements
Statistical analysis
Cross-sectional observational study carried out in a multiprofessional primary care teaching unit in the province of Madrid, Spain, between October and December 2016. The 127 family medicine residents of the teaching unit and their 91 tutors were invited to participate; the JSE-HP was sent to them by e-mail together with a brief explanation of how to complete it. During the same period, 428 patients were recruited in person, opportunistically, at a health centre belonging to the same teaching unit; they completed the EPPEMJ along with general variables such as age, sex, assigned physician and time on that physician's patient list. The eleven family physicians (100%) who provided care for the patients studied completed the JSE-HP. All questionnaires were anonymous. The study was authorised by the corresponding local research committee. The inclusion criteria required being a resident of any year or a tutor of residents of the teaching unit. Patients had to be over 18 years of age and to have been seen by one of the family physicians of the health centre between October and December 2016. The exclusion criteria were lack of knowledge of Spanish or a cognitive disability that prevented answering the survey. To measure professionals' self-perceived empathy, the health professional version of the Jefferson Scale of Empathy (JSE-HP), translated, adapted and validated in Spanish, was used. The scale consists of 20 items scored on a Likert scale from 1 point (strongly disagree) to 7 points (strongly agree). Possible scores range from 20 to 140 points, with higher values associated with a greater degree of empathy. The scale analyses three dimensions of empathy: 1) perspective taking (cognitive empathy); 2) compassionate care (emotional empathy); and 3) standing in the patient's shoes. The scale's cognitive empathy reflects the professional's rational understanding of the patient's suffering, whereas emotional empathy assesses the emergence of feelings similar to those of the patient. The former is a cognitive skill that allows physicians to assume their patient's role and can be modified through learning; the latter is more innate. To determine physicians' empathy from their patients' point of view, the Jefferson Scale of Patients' Perceptions of their Physicians' Empathy (EPPEMJ) was used. It is also self-administered and consists of 5 items scored on a 7-point Likert-type scale, with scores ranging from 5 to 35 points. The reliability of the JSE-HP and EPPEMJ scales was analysed using Cronbach's alpha coefficient. Qualitative variables were presented with their frequency distribution and percentage; quantitative variables with their mean, standard deviation (SD) and 95% confidence interval.
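As a minimal sketch of the reliability analysis mentioned above, the following snippet computes Cronbach's alpha for a respondents-by-items matrix of Likert scores. The study itself used SPSS; the score matrix here is randomly generated and purely hypothetical.

```python
# Cronbach's alpha for a Likert questionnaire (e.g., the 20-item JSE-HP).
# Hypothetical data; the published analysis was run in IBM SPSS.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores (1-7)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items (20 for JSE-HP, 5 for EPPEMJ)
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(50, 20))      # 50 residents x 20 items, scored 1-7
# Random, uncorrelated answers give an alpha near 0; the study reported
# 0.84 (residents) and 0.87 (tutors) for the JSE-HP, and 0.82 for the EPPEMJ.
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```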
The association between JSE-HP scores and age was studied using Pearson's correlation after checking for normality. The relationship between JSE-HP scores and qualitative variables was determined using Student's t-test for independent samples. For comparisons of more than two groups, analysis of variance (ANOVA) was applied, with subsequent post hoc Tukey and Bonferroni analyses. The correlation between the JSE-HP score of each health centre physician and the mean EPPEMJ score of their patients was calculated using Spearman's correlation coefficient. IBM SPSS Statistics 21 was used for the statistical analysis, with an a priori significance level of alpha = 0.05 in all analyses. Of the 127 family and community medicine residents of the teaching unit, 50 (39.4%) answered the survey. The mean age of the participating residents was 29.14 years (SD 4.1 years), with a range of 25 to 42 years; 84% were women. By country of origin, 41 were Spanish (82%), 8 were of Latin American origin (16%) and 1 (2%) was from elsewhere. Eleven residents were in their first year (22%), 9 in their second year (18%), 11 in their third year (22%) and 19 in their fourth year (38%). Forty-four residents (88%) had no previous professional experience, and 49 of them (98%) were training in their first specialty. Of the 91 family medicine tutors of the teaching unit, 41 (45%) agreed to participate. Their mean age was 53.56 years (SD 6.20 years), range 40 to 65 years; 63.4% (26) were women. The mean number of years of work experience was 25.29 years (SD 7.23 years), with a range of 8 to 38 years. The 428 health centre patients had a mean age of 52.81 years (SD 17.01 years); 58.6% (251) were women. Sixty-three percent had been on their family physician's patient list for more than 5 years, whereas 17% had been on it for less than 2 years. The JSE-HP showed adequate psychometric properties. Its Cronbach's alpha was 0.84 in the resident sample and 0.87 in the tutor sample. Positive and significant correlations were found between each item and the overall scale score, with a median of 0.45 (p < 0.01). The same was true of the EPPEMJ, with a Cronbach's alpha of 0.82. The residents' overall mean JSE-HP score was 119.72 (95% CI: 116.43-123.01) points. There were no significant differences between men and women, with a mean difference in total scale scores of 0.71 (95% CI: −8.35 to 9.76) points. The characteristics of residents' JSE-HP scores by year of residency are shown in the corresponding table. A negative correlation (r = −0.29; p = 0.04) was found between age and dimension 2 of the JSE-HP. Compared with residents who had previous professional experience, those without it scored 11.61 (95% CI: 1.97-21.26) points higher on the total JSE-HP. Compared with Latin American residents, those of Spanish nationality scored 12.30 (95% CI: 4.06-20.54) points higher. Tutors scored higher than residents on the total JSE-HP: mean difference 3.63 (95% CI: −0.64 to 7.90) points. Tutors scored 2.53 (95% CI: 0.14 to 4.91) points higher than residents on dimension 1 of the JSE-HP (cognitive empathy).
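The physician-level analysis described above pairs each physician's self-rated JSE-HP total with the mean EPPEMJ rating given by that physician's own patients. A minimal sketch of that correlation, with invented values for the 11 physicians (the actual analysis was run in SPSS):

```python
# Spearman correlation between physicians' self-perceived empathy (JSE-HP)
# and the mean empathy their patients perceive (EPPEMJ). Values are invented.
from scipy.stats import spearmanr

jse_hp      = [132, 118, 125, 110, 128, 121, 135, 115, 124, 130, 119]   # 11 physicians
mean_eppemj = [33.1, 28.4, 30.0, 26.9, 31.5, 29.8, 33.8, 27.5, 30.2, 32.2, 29.1]

rho, p_value = spearmanr(jse_hp, mean_eppemj)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")   # the study reported r = 0.72, p = 0.01
```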
A negative correlation (r = −0.32; p = 0.04) was observed between years of experience as a tutor and dimension 2 of the JSE-HP (emotional empathy). The mean JSE-HP score of the 11 health centre physicians was 124.36 (95% CI: 115.89-132.84) points. The mean EPPEMJ score of the 428 patients was 30.5 (95% CI: 18.74-42.26) points, with a maximum score of 35 and a minimum of 5. A significant positive correlation (r = 0.72; p = 0.01) was found between each health centre physician's JSE-HP score (the professional's self-perception of their own empathy) and the mean EPPEMJ scores of their patients (patient-perceived empathy). Empathy is a basic pillar of the physician-patient relationship. The importance of physicians being empathic with the patients they care for, and of having the communication skills needed to make patients feel that their suffering is understood, lies, on the one hand, in the value for patients of feeling that their physician understands their suffering and, on the other, in the relationship, shown by several studies, between this attitude and behaviour and the outcomes of care. The central role of empathy in the medical encounter and its repercussions on outcomes have motivated its study, one of the main aspects being a valid 'measurement' of empathy, above all to capture how it is perceived by the patient. Thus, several studies have explored this and its relationship with encounter outcomes using different questionnaires. Knowing the degree of empathy of health professionals and students, and the factors that affect it, has also received attention, generally with the aim of implementing interventions to consolidate or enhance it. To this end, questionnaires have been used that allow behaviours considered empathic to be assessed in the clinical interaction itself by external observers, with simulated patients and with real patients, or self-completed questionnaires, among them the health professional version of the Jefferson Scale of Empathy (JSE-HP). The Jefferson Scale of Patients' Perceptions of their Physicians' Empathy (EPPEMJ) has also been developed. A particularly important aspect of this process is knowing to what extent patient-perceived empathy correlates with that reported by the physician. Ruiz Moral et al. approached this question and showed, in recorded interviews of medical students with simulated patients, that the students' empathy, objectified in behaviours rated by an external observer, correlated with the empathy perceived by the simulated patients. In addition, the students reported a greater sense of effectiveness in the clinical encounter. The main objective of our study was to assess the degree of correlation between patients' perception of the empathy shown by their physician and the physician's self-perception of that empathy. A strong relationship was demonstrated between professionals' self-perceived empathy and patients' view of their physicians' empathy: a lower sense of empathy on the professional's part corresponds to a poorer perception of empathy by their patients, and vice versa.
We wish to highlight the practical implications of these results. On the one hand, in the educational and clinical context, they underline the importance of being able to objectify changes in the empathy perceived by students and practising physicians over time, with experience, or after implementing the educational interventions needed to enhance it, knowing that these changes will be detected by patients. On the other hand, with regard to purely research aspects, they confirm the construct validity of the scales used, highlighting the value of assessing the professionals' point of view when it is not feasible to obtain their patients' opinion. In our study, residents with previous professional experience showed lower overall empathy. Likewise, we observed that emotional empathy declined over time in both residents and their tutors. We found an inverse relationship between residents' age and dimension 2 of the JSE-HP (emotional aspects of empathy): the older the residents, the lower their emotional empathy. Tutors with more years of experience also scored significantly lower on emotional empathy, which could be in line with the decline of empathy over time described by Neuman et al. in their systematic review. Although the cohorts are not directly comparable, in our study, as described in other studies, fourth-year residents showed lower overall empathy than the rest. This decline could be attributed to workload, the progressive increase in responsibility assumed by final-year residents, depressive symptoms, or anxiety over job uncertainty. More evidence is needed on whether physicians' empathy levels deteriorate over time in Spain, as described in other countries. If confirmed, it would be necessary to assess whether this decline occurs at random or whether there are identifiable, modifiable causes, and which training programmes are most effective in enhancing empathy. Although Delgado et al. found no differences in empathy scores across the four years of family medicine residency, the fact is that professionals who feel mistreated by the system are less likely to be empathic. Healthcare institutions face the challenge of fostering emotionally positive work environments, limiting excessive working hours and offering training opportunities and motivating activities to counteract the high rates of burnout among health professionals. In our study, residents of Spanish nationality showed greater overall empathy, in line with what other authors have described. This could be attributed to cultural factors, health systems, health training, social demand and access to healthcare resources in the different countries of origin. Future studies are desirable to explore the causes underlying the cultural differences observed in empathy levels between Spanish and Latin American residents and whether these differences are clinically relevant. The analysis of the psychometric characteristics of the JSE-HP translated and adapted into Spanish provides data confirming its validity and reliability in our setting.
The questionnaire's authors reported a Cronbach's alpha of 0.87 in their sample of residents, whereas in our study we observed slightly lower values, although within acceptable limits. The residents' overall JSE-HP empathy score is moderate according to the cut-off points proposed by Hojat and Gonnella for medical students. Other studies conducted at the national level report mean empathy scores of 121 points and 114 points in their residents. It would be interesting to know the results of the scale applied elsewhere in Spain, as well as to define empathy levels that could be classified as deficient, acceptable or excellent in our setting. Tutors scored higher than residents on the cognitive aspects of empathy. This is possibly related to a clear selection bias: family physicians who act as tutors tend to have above-average motivation and professionalism. Finally, we may add that it is desirable to study in greater depth the influence of physicians' empathy on patients' health outcomes in our setting.
Possible limitations of the study
The response rate of residents (39.4%) and tutors (45%) was lower than the usual response rate for questionnaires mailed to physicians, since the study was conducted at the same time as other projects that requested the collaboration of the same professionals electronically. Some subgroups of the sample are small, such as men, residents of Latin American origin or those with previous work experience, although this does not invalidate the comparisons made. In our study, although women (residents and tutors) scored higher, statistical significance was not reached, possibly because of the small sample size. Although the JSE-HP measures empathic attitudes and not actual behaviours, it may be assumed that the two go hand in hand so as to avoid psychological tension in the individual, a phenomenon known as 'cognitive dissonance'. It is true that, although empathic attitudes do not guarantee empathic behaviours, they do make them much more likely. An attempt was made to mitigate the social desirability phenomenon through the anonymity and confidentiality measures described. The studies by Hojat et al. demonstrated the low impact of this phenomenon on the results obtained. Another problem arises when generalising the results. The low response rate could be associated with a selection bias towards the most motivated residents and tutors, which would compromise the external validity of the study. Its cross-sectional nature makes it unable to demonstrate causal relationships.
What is known about the topic
- Several studies corroborate a deterioration of empathy levels over time in medical students and residents, although this point is less clear in our setting.
- There is evidence that physicians' self-perceived empathy has a positive impact on patients' health and on the satisfaction of both parties.
- Studies outside our setting have associated the self-perceived empathy of physicians and medical students with that reported by their patients (real or simulated).
What this study adds
- It confirms, in our setting, the relationship between physicians' self-perceived empathy and that reported by their patients.
- It corroborates the findings of other studies conducted in Spain regarding the higher empathy scores of Spanish residents compared with Latin American residents.
- It opens the door to future longitudinal studies analysing the deterioration of empathy over the course of residency or professional experience and the factors that condition it.
The authors declare that they have no conflicts of interest.
Redundancy in microbiota-mediated suppression of the soybean cyst nematode
9f9f2f23-489c-47f1-990f-51b56fc0e748
11247744
Microbiology[mh]
Continuous monoculture often causes poor crop growth due to changes in soil properties, such as acidification, autotoxin accumulation, increased soilborne pathogen inoculum, and bacterial community dysfunction. This practice has also been documented to induce soil suppressiveness against soilborne pathogens. Disease suppressiveness could be transferred to "disease-conducive soils" by incorporating a small amount (1–10% w/w) of "disease-suppressive soils" but eliminated by heat or biocidal treatment, suggesting that microorganism(s) cause disease suppression and carry the soil memory of disease suppressiveness. When faced with severe soilborne disease, plant roots "cry for help" to recruit microbial species antagonistic to the causal pathogen. The resulting disease suppressiveness may dissipate in the absence of disease pressure but will bounce back upon new infections. Advances in sequencing, bioinformatics, and functional genomics have helped identify and quantify soil/plant-associated microorganisms and predict the biology and role of identified taxa using previously characterized species as references. Characterization of disease-suppressive soils using such approaches has uncovered some underlying mechanisms, such as the production of non-ribosomal peptide antibiotics by Pseudomonas and Flavobacterium in sugar beet-associated suppressive soils and the secretion of a heat-stable antifungal thiopeptide against Fusarium oxysporum by Streptomyces present in strawberry monoculture soil. Like pathogen suppressiveness, the soil suppressiveness of plant parasitic nematodes (PPN), threats to many crops around the world, can be induced by long-term monoculture. Because PPN require living hosts to complete their life cycle, they do not kill host plants. However, they can cause severe yield losses, especially when the inoculum level is high, by disrupting root development/function, syphoning off nutrients, and facilitating root infections by soilborne pathogens. Root-knot and cyst nematodes are sedentary endoparasites and damage many crops worldwide. Cyst nematodes produce the cyst, a structure derived from the dead female body that keeps eggs carrying infective juveniles protected during long-term dormancy (Fig. A). Previous studies have reported suppressive soils against cyst nematodes, including Heterodera avenae in wheat, H. schachtii in sugar beet, Globodera rostochiensis and G. pallida in potatoes, and H. glycines in soybean. Some fungal species, such as Nematophthora gynophila and Pochonia chlamydosporia in H. avenae-suppressive soils, and Dactylella oviparasitica in sugar beet cyst nematode-suppressive soils, have been shown to suppress specific nematodes. Interestingly, H. schachtii cysts infested with D. oviparasitica in suppressive soils could transfer suppressiveness into H. schachtii-infested conducive soils. With an annual production of 330 million tons, soybean is a vital food/feed crop that is also widely used for biofuel production and nitrogen fixation. The soybean cyst nematode (SCN, H. glycines) is one of the most devastating threats to soybean production, annually costing > 2 billion US dollars due to yield loss in the USA. In China, SCN is distributed across 22 provinces, causing yield losses equivalent to ~120 million US dollars per year. Soybean monoculture has been shown to create SCN-suppressive soils in both China and the USA. Previous analyses have identified diverse microbial taxa associated with SCN cysts and the soybean rhizosphere and endosphere.
Despite extensive research on SCN-suppressive soils, the mechanism underpinning SCN suppression remains poorly understood. We previously reported that upon SCN infestation, the relative abundance of some bacterial taxa in bulk soils, the soybean rhizosphere and endosphere, and the cysts increased. However, which taxa are crucial for SCN suppressiveness, and how they suppress SCN, remained elusive. Given that sugar beet cyst nematode suppressiveness could be transferred by incorporating cysts formed in suppressive soils, and that the cyst bacterial community is established via consecutive selection of taxa from the root endosphere, we hypothesized that cyst-associated microorganisms are critical for SCN suppressiveness. To test this hypothesis, we performed growth room pot experiments with soybean grown in SCN-suppressive (S), conducive (C), transplantation (CS; created by mixing 10% S soil with 90% C soil), and heat- or formalin-treated S soils. Our specific objectives were to (a) evaluate whether the different soils and the cysts formed in these soils suppress SCN; (b) compare the composition of the bacterial communities associated with SCN cysts formed in these soil treatments; (c) isolate and identify candidate bacterial taxa from SCN cysts; (d) verify the involvement of these bacteria in disease suppressiveness; and (e) elucidate the mechanisms employed by cyst-enriched bacteria to suppress SCN.
Contribution of SCN cysts to disease suppressiveness
Identification of candidate bacterial species involved in SCN suppression
Chitinophaga and Dyadobacter employ distinct mechanisms to suppress SCN
Bacterial species belonging to Proteobacteria, Actinobacteria, Bacteroidetes, and Firmicutes were isolated from cysts formed in the S soils (Fig. A). A phylogenetic analysis of several Chitinophaga and Dyadobacter isolates, identified using their 16S rRNA gene sequences, revealed that the Chitinophaga isolates clustered with two species (C. soli and C. niabensis) and that the Dyadobacter isolates clustered with D. luticola, D. beijingensis and D. endophyticus. The 16S rRNA sequences of Chitinophaga C7, C18, C42 and C54 and of Dyadobacter D18C were 100% identical to OTU18 and OTU26, respectively (Fig. B). The effect of the Chitinophaga and Dyadobacter isolates on J2 mortality and egg hatching was investigated first. When freshly prepared J2s were incubated with individual isolates, most of them (~97%) remained alive 7 days post inoculation, indicating that the isolates did not cause J2 mortality (Fig. C). However, when freshly prepared cysts were incubated with these isolates in autoclaved soils, the Chitinophaga isolates C54 and CN7, but not the Dyadobacter isolates, significantly reduced egg hatching (Fig. D). Chitinophaga abundance was significantly higher in cysts than in soil (Fig. S7). All three isolates (C54, CN7 and D18C) were able to colonize SCN females and cysts (Fig. E). Soybean plants were grown for 56 days in the C soil mixed with extra eggs (1500/100 g of dry soil) and Chitinophaga (C54 and CN7) and Dyadobacter (D18C), individually and in combination. SCN could complete two life cycles during this period. Only D18C, D18C-C54, D18C-CN7 and D18C-C54-CN7 significantly decreased the egg density (Fig. F), suggesting that CN7 and C54 alone may require more than one growth cycle to reduce egg hatching and vitality. To test this hypothesis, we grew new soybean plants in the same pots twice (56 days for each cycle) without additional treatments.
Egg density was significantly reduced by all treatments after the second and third cycles (Fig. G), supporting the involvement of both Chitinophaga strains in SCN suppression. Microscopic observation of the eggs in cysts treated with these three isolates (Fig. H) indicated that Chitinophaga C54 and CN7 caused severe malformations of first-stage juveniles (J1s) within eggs (Fig. I). Their body organization was abnormal and eventually deteriorated completely. Both isolates significantly suppressed egg hatching, comparable to that of the cysts formed in the S soils (Fig. D). The glycosyl hydrolase (GH18) family of chitinases in Chitinophaga isolated from disease-suppressive soils has been shown to contribute to suppressing fungal pathogens (Carrion et al., 2019). Consistent with this, both Chitinophaga isolates CN7 and C54 produced chitinases in the presence of SCN eggs, which supported their role in SCN-suppressive activity (Fig. J). We then sequenced the CN7 and C54 genomes using PacBio sequencing (Fig. S8 and S9). Based on average nucleotide identity, the closest relatives of CN7 and C54 were C. niabensis and C. soli, respectively. To further confirm the involvement of the chitinases in SCN suppression, we identified three putative GH18 family genes in the CN7 genome (Chpbs_4014, Chpbs_4904 and Chpbs_5684) and determined their expression levels using qRT-PCR. They exhibited distinct patterns of expression in the presence of SCN eggs, with the expression of Chpbs_5684 significantly increasing at 24 and 48 h after applying CN7 (Fig. S10A). The expression of Chpbs_4014 did not change, whereas the expression level of Chpbs_4904 appeared to increase slightly at 24 h, but not at 48 h. We then produced the chitinases encoded by these genes in Escherichia coli to test their activity against SCN (Fig. S10B and S10C). Chitinases in the culture filtrate of CN7, as well as those partially purified from E. coli, were sufficient to deform juveniles in eggs and stop egg hatching. Dyadobacter (D18C) cells were found attached to the surface coat of J2s hatched under axenic conditions (Fig. K). The absence of a direct negative effect of D18C on the nematode suggested that this strain might indirectly suppress SCN by affecting soybean plants. To test this hypothesis, we inoculated soybean plants with SCN J2s, D18C, and J2s with attached D18C and quantified the expression levels of selected soybean defence-related genes. The expression of three genes under the control of the salicylic acid signalling pathway (GmSAMT, GmPR1 and GmNPR1) was significantly up-regulated by the inoculation of SCN J2s relative to the water and D18C treatments. In contrast, only J2s colonized by D18C significantly induced, at different levels, the expression of the defence-related genes GmSAMT, GmPR1, GmNPR1, GmACS9b, GmCHIA1, GmPR10, GmPAD4, GmPAL and GmWRKY31, which are involved in the salicylic acid, jasmonic acid and ethylene pathways (Fig. L). SCN suppressive and conducive soils collected from two regions in China, Baicheng (BC) in Jilin Province and Fulaerji (FL) in Heilongjiang Province, were used to investigate how SCN suppressiveness forms and works. The S soils from BC (45 years of monoculture) and FL (37 years of monoculture) harboured significantly fewer SCN eggs than the C soils collected in nearby fields with only 3 years of monoculture (Fig. S A). Their physical and chemical characteristics were similar (Table S1). Soybean plants grew slightly better in the S soils than in the C soils, but the difference was not significant (Fig. S B).
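Stepping back to the qRT-PCR results above: GH18 transcript levels (e.g., Chpbs_5684, normalized to the glyA reference gene, as detailed in the Methods) were quantified with the comparative 2^−ΔΔCT method. A minimal sketch of that calculation, using hypothetical Ct values:

```python
# Livak 2^-ΔΔCt relative expression, as applied to the Chitinophaga GH18 genes
# (target, e.g., Chpbs_5684; reference gene glyA). Ct values are hypothetical.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_treated = ct_target - ct_ref            # ΔCt of the treated sample (eggs present)
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # ΔCt of the control (no eggs)
    dd_ct = d_ct_treated - d_ct_control          # ΔΔCt
    return 2.0 ** (-dd_ct)

# e.g., 24 h with SCN eggs vs. no-egg control; a value > 1 means up-regulation
print(f"fold change: {fold_change(22.1, 18.0, 24.3, 18.1):.2f}")
```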
When soybean plants were grown for 56 days in the S, C, and CS soils inoculated with SCN eggs (1500 eggs per 100 g of dry soil), the resulting egg densities in the S and CS soils were significantly lower than those in the other soils (Fig. B and C; Fig. S C), confirming SCN suppressiveness. Heat treatment at 80 °C (S80) and fumigation resulted in a partial to complete loss of SCN suppressiveness (Fig. B and C). The SCN suppressiveness of the soils collected in BC and FL appeared to be primarily due to the cyst-associated microbiota rather than the rhizosphere microbiota (Fig. S2A and S2B). To determine whether SCN suppression is caused by the cyst microbiota or the rhizosphere microbiota, instead of using 10% S soil to make CS, we designed three different regimens each for SCN cysts and the soybean rhizosphere to differentiate their involvement in disease suppression. The cyst treatments included SCN native cysts isolated from S soils and transferred to C soils (C1-F), newly formed cysts collected from soybean roots after 56 days of soybean growth in S soils under greenhouse conditions and transferred to C soils (C2-G), and a microbiota suspension prepared by grinding C2-G cysts and transferred to C soils (C3-G). Similarly, the three soybean rhizosphere treatments included soybean seedlings grown in S soils for 2 weeks and transplanted to C soils (Sb-G), rhizosphere soil collected from soybean seedlings grown in S soils for 2 weeks and then transferred to C soils (R1-G), and a microbiota suspension prepared from the rhizosphere and roots of soybean seedlings grown in S soils and transferred to C soils (R2-G). All the cyst treatments significantly lowered egg densities when mixed with 90% C soils, resulting in densities as low as those observed in CS soils created using the soils collected at BC and FL (Fig. C). However, the rhizosphere treatments of 90% C soils were not as effective as the cyst treatments in reducing egg densities (Fig. C). Egg-hatching rates of the cysts extracted from the field S soils were significantly lower than those from the field C soils. In contrast, soil extracts prepared from the S and C soils did not significantly affect egg hatching of cysts freshly prepared in autoclaved soil (Fig. D). However, when cysts formed in autoclaved soils were placed into the S and C soils for 2 weeks, the cysts recovered from the S soils showed lower egg-hatching rates than those from the C soils (Fig. E), suggesting that the cysts had acquired some microbiota inhibitory to egg hatching and that SCN suppressiveness could be transferred to different soils by cysts colonized by such microbiota. To prepare for testing the hypothesis that cyst-associated bacteria confer SCN suppressiveness, we analysed and compared the microbiomes of the cysts formed in the S, C, CS, S80 and SF soils. In total, > 1.4 million high-quality-filtered reads, corresponding to 3073 bacterial OTUs (> 97% sequence identity), were obtained from 30 samples. The α-diversities of the cyst bacterial communities, including Shannon's diversity index and the Simpson index, were higher in the S and CS treatments than in the C, S80 and SF treatments (Fig. A; Fig. S3A). Cysts formed in the S and CS soils harboured more shared bacterial OTUs than those formed in the S and C or CS and C soils (Fig. B). A significant negative correlation was observed between cyst bacterial diversity and SCN egg densities (P = 1.01e−05, P = 3.45e−06; Fig. C).
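As a point of reference for the α-diversity metrics reported here, the sketch below computes Shannon and Simpson indices from a single vector of OTU counts. The original analysis used the R package vegan; the counts below are invented.

```python
# Shannon and Gini-Simpson diversity from OTU counts, mirroring the formulas
# behind vegan's diversity() (index = "shannon" / "simpson"). Counts invented.
import numpy as np

def shannon(counts):
    p = counts / counts.sum()
    p = p[p > 0]                       # ignore absent OTUs
    return -(p * np.log(p)).sum()      # natural-log Shannon index H'

def simpson(counts):
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()        # Gini-Simpson form, 1 - sum(p^2)

otu_counts = np.array([120, 85, 60, 33, 20, 9, 5, 1], dtype=float)  # one cyst sample
print(f"Shannon H' = {shannon(otu_counts):.2f}, Simpson = {simpson(otu_counts):.2f}")
```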
A Bray–Curtis dissimilarity analysis showed that the cyst-associated bacterial communities at both soil sampling locations could be separated into two main clades in the cluster dendrogram, one corresponding to the S, CS and C soils and the other to the S80 and SF soils, indicating that the heat and formalin treatments had a significant impact on the assembly of the cyst bacterial microbiota. The communities corresponding to S and CS formed one subclade, and those for C formed another, consistent with the degree of SCN suppressiveness (Fig. D; Fig. S3B). Overall, these results imply that the bacterial microbiota associated with SCN suppressiveness could be transferred to C soils by adding a small amount of S soil and that the disease-suppressive bacteria can effectively colonize and proliferate in newly formed cysts. Analysis of the relative abundance (RA) of different phyla suggested Bacteroidetes as the phylum associated with SCN suppressiveness. The cysts formed in the S and CS soils had significantly greater RA of Bacteroidetes and of the family Chitinophagaceae than those formed in C soils (Fig. E; Fig. S4A, S4B, S5A and S5B), and their RA was reduced when the S soils were treated with heat or formalin. Linear relationship analysis showed a significant negative correlation between the RA of Bacteroidetes and egg density (P = 1.17e−05; Fig. F), supporting their role in SCN suppressiveness. The RA of Actinobacteria inhabiting the cysts formed in S and CS soils was also higher than that in C soils, but the difference was not statistically significant compared with the other treatments (LSD test). We identified OTUs enriched in the cysts formed in the S soils from BC and FL and in the CS soils derived from these S soils. For the cysts formed in the S soils of BC and FL, 13 and 14 OTUs, respectively, were significantly enriched. For those formed in the CS soils, 16 (BC) and 12 (FL) OTUs were significantly enriched. In comparison, 11 (BC) and 13 (FL) OTUs were enriched in both the S and CS soils. Among these enriched OTUs, only 5 were commonly present in the cysts formed in both the S and CS soils (Fig. A). One OTU (OTU18) was enriched in cysts formed in the S (BC) and CS (both BC and FL) soils (Fig. B), and its presence significantly correlated with reduced SCN egg density. The six OTUs potentially associated with SCN suppression were Chitinophaga (OTU9 and OTU18) and Dyadobacter (OTU26) of Bacteroidetes, Nocardiopsis (OTU20) and Microbacterium (OTU55) of Actinobacteria, and Ralstonia (OTU48) of Proteobacteria. Further analysis of these 6 OTUs after the different treatments revealed that the RA of Chitinophaga (OTU9 and OTU18), Dyadobacter (OTU26) and Nocardiopsis (OTU20) was significantly higher in the S and CS soils (Fig. C) and negatively correlated with egg density (Fig. D). The RAs of Ralstonia (OTU48) after the SF and S80 treatments and of Microbacterium (OTU55) after the S80 treatment were not significantly different from those of S (Fig. C). Chitinophaga (OTU9 and OTU18), Dyadobacter (OTU26) and Nocardiopsis (OTU20) were highly sensitive to the SF and S80 treatments (Fig. C). In addition, the S80 and SF treatments significantly shifted the RA of several S OTUs (Fig. S6). Because Chitinophaga and Dyadobacter had the highest RA in the cysts formed in the S soils, and their RAs were significantly correlated with reduced nematode egg density, we next investigated whether these two Bacteroidetes genera suppress SCN by isolating strains belonging to these taxa.
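Before moving on, a minimal sketch of the Bray–Curtis cluster analysis described above, assuming log2-transformed per-mille relative abundances as in the Methods (the original used vegan::vegdist in R); sample labels and values are illustrative only.

```python
# Bray-Curtis dissimilarities and an average-linkage dendrogram over cyst
# samples from the five soil treatments. Abundance values are made up.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

samples = ["S", "CS", "C", "S80", "SF"]          # rows; columns = OTUs (per-mille RA)
ra = np.array([
    [220, 180,  40,  10,   5],
    [200, 160,  55,  12,   8],
    [ 60,  40, 150,  30,  20],
    [ 10,   8,  60, 200,  90],
    [  8,   6,  55, 190, 110],
], dtype=float)

ra_log = np.log2(ra + 1)                          # log2 transform (pseudocount avoids log 0)
dist = pdist(ra_log, metric="braycurtis")         # pairwise Bray-Curtis dissimilarities
tree = linkage(dist, method="average")            # hierarchical cluster dendrogram
dendrogram(tree, labels=samples, no_plot=True)    # structure only; plotting omitted here
print(squareform(dist).round(2))                  # 5 x 5 dissimilarity matrix
```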
Synthetic pesticides have helped limit crop losses caused by diseases and pest infestations but incur increasingly unsustainable environmental and ecological costs. Rapid advances in understanding how biotic and abiotic factors affect plant growth and health, and their underlying mechanisms, have greatly facilitated the development of alternative strategies for crop protection; however, many critical knowledge deficiencies still exist. Here, we addressed one such deficiency, how soil suppressiveness of SCN forms and works, to support the effective management of SCN and other PPN. Because SCN persists once established, available control strategies, such as crop rotation and planting SCN-resistant varieties, aim to suppress its population to a level that does not cause significant yield loss. Our study offers new insights that can be harnessed to augment existing strategies and develop new ones. Continuous cultivation of soybean plants susceptible to SCN initially increases the density of SCN, but the population declines significantly after several years. Induced soil suppressiveness is not unique to the soybean-SCN system and has been observed in other cyst nematode-crop systems, including Heterodera avenae-cereals, H. schachtii-sugar beets, and Globodera pallida/G. rostochiensis-potato. Similar to pathogen-suppressive soils, soil suppression of PPN is characterized by the ability of resident microbial communities to reduce nematode populations. Metabarcoding analyses of microbial communities associated with different parts of diverse plants and soils have helped identify taxa correlated with disease suppressiveness. However, in most cases of plant pathogen-suppressive soils, the studies are descriptive, candidate taxa have not been identified at the species level, and their involvement in suppression has not been validated. Accordingly, the microbial traits that drive pathogen suppression remain largely unknown, with a few exceptions. For example, genes encoding non-ribosomal peptide synthetases and polyketide synthases are crucial for root disease suppression by Chitinophaga and Flavobacterium enriched in sugar beet roots, suggesting the involvement of secondary metabolites in pathogen suppression. For cyst nematode-suppressive soils, the focus has been on identifying fungal species associated with disease suppressiveness. Analyses of fungal communities inhabiting H. schachtii cysts identified Hyalorbilia oviparasitica (formerly Dactylella oviparasitica), Fusarium oxysporum, and Lycoperdon spp. as potential taxa underlying suppressiveness, and subsequent analyses confirmed the involvement of H. oviparasitica in suppressiveness. Interestingly, H. schachtii cysts infested with H. oviparasitica in suppressive soils could transfer suppressiveness to H. schachtii-infested conducive soil. SCN cysts likely harbour diverse microorganisms because females encounter soil and root tissue-associated microbiota as they migrate towards roots, penetrate roots, and differentiate to form cysts. A previous study estimated that 2.6 ± 0.5 × 10^5 bacteria inhabit a single SCN cyst. Such bacteria likely affect the viability and fitness of nematodes, but the taxa linked to nematode suppressiveness remained unknown. We observed that SCN suppressiveness could be established and transferred by bacteria isolated from cysts formed in S soils (Fig. C).
This observation, the transfer of SCN suppressiveness by incorporating cysts formed in S soils, and previous studies showing the critical role of bacteria in fungal pathogen-suppressive soils led us to analyse cyst-associated bacterial taxa to identify the ones closely linked to SCN suppressiveness. This analysis (Figs. and ) revealed that the Bacteroidetes genera Chitinophaga and Dyadobacter, enriched in the cysts formed in S soils, confer and transmit SCN suppressiveness. Isolation of multiple strains belonging to Chitinophaga and Dyadobacter allowed us to validate their role in SCN suppression by inoculating them into C soils and to characterize their mechanisms of SCN suppression. Members of Chitinophaga are chitin decomposers and were found to be enriched in the rhizosphere of wheat upon long-term monoculture and infection by Rhizoctonia solani, a fungal pathogen of diverse plants. Chitinophaga is one of the two key taxa associated with the soil suppressiveness of R. solani that develops after long-term monoculture of sugar beet, and its chitinase activity seems critical for the suppression. Our previous study revealed that the bacterial community in SCN cysts was established through the consecutive selection of microbial taxa from the soybean root endosphere and that Chitinophaga was highly abundant in the cyst, followed by the root endosphere and rhizosphere, in suppressive soils (Fig. S7C). Chitin is an essential element of the fungal cell wall and of the nematode eggshell and body, which is why Chitinophaga preferentially colonizes SCN cysts from the root endosphere, reduces egg hatching (Fig. D) and causes morphological defects in the first-stage juveniles inside the eggs (Figs. I and ). Chitinophaga enrichment and activity against parasitic nematodes and fungal pathogens in disease-suppressive soils suggest that these bacteria suppress diverse pests and pathogens. Dyadobacter was identified as a taxon potentially involved in suppressing Fusarium banana wilt, suggesting that it, too, is involved in suppressing multiple pathogens and pests. The three Dyadobacter isolates did not inhibit SCN egg hatching (Fig. D). However, isolate D18C, attached to the surface coat of hatched J2s (Fig. K), strongly induced defence-related gene expression in soybean plants (Fig. L), suggesting that Dyadobacter indirectly suppresses SCN by inducing soybean defence against SCN during the initial stage of infection (Fig. ). It has been previously demonstrated that only a specific subset of soil microorganisms attach to the surface coat of nematodes. The carbohydrate-rich protein layer over the nematode epicuticle has been identified as the main area for microbial attachment. The interaction between non-parasitic bacteria and plant root surfaces is lectin-specific, which may result in the recognition of nematodes by plants. Thus, some microorganisms attached to the surface coat of nematode juveniles use them as carriers to the plant roots and induce resistance in plants. Our data suggest that soybean plants have evolved to recognize some such bacteria. Our results raised several questions, including why and how long-term monoculture enriches Chitinophaga and Dyadobacter in SCN cysts. Did SCN evolve to modulate its virulence/proliferation by recruiting specific microbial taxa to balance two conflicting needs (proliferation vs. preventing severe host dysfunction/collapse)? SCN, an obligate pest with limited hosts, needs to balance these needs because uncontrolled proliferation/virulence likely carries a severe fitness penalty.
Alternatively, is this driven by an unknown mechanism that soybean has evolved to reduce the pest population? Related questions include which factors are critical for the enrichment of Chitinophaga and Dyadobacter; whether these taxa are involved in inducing SCN suppressiveness in diverse fields (as both taxa were enriched in two agricultural fields at distinct geographical locations within China) or whether other taxa/mechanisms drive the suppressiveness in different areas or field conditions; whether Chitinophaga and Dyadobacter confer suppressiveness against different cyst nematodes affecting other crops; and how soybean plants recognize Dyadobacter associated with J2s to activate defence responses. Answers to these questions, along with Chitinophaga and Dyadobacter isolates that effectively suppress SCN, will help develop novel strategies for controlling SCN and potentially other parasitic nematodes (e.g., modifying soil ecosystems by incorporating formulated Chitinophaga and Dyadobacter isolates as biocontrol agents, or judiciously deploying cultural practices to promote or speed up their SCN colonization/enrichment). Our study showed that the specific suppression of SCN in two long-term soybean monoculture soils was due to the enrichment of disease-suppressive bacteria in nematode cysts. The cyst microbiome presented high bacterial diversity and a unique subset of microbes capable of transmitting nematode suppressiveness to conducive soil environments. Specifically, the cyst-enriched bacteria Chitinophaga and Dyadobacter employed two distinct mechanisms to protect soybean against SCN: (a) Chitinophaga strains secreted chitinases that malformed J1s within the eggshell, and (b) the Dyadobacter strain induced soybean defence responses, thus reducing infection. Our findings not only present a compelling case for the trade-off wherein SCN cyst-enriched microbes inhibit nematode proliferation both directly and indirectly, but also shed light on the potential discovery of specific microbial consortia from suppressive soils to develop synthetic microbial communities for targeting plant-parasitic nematodes.
Soil collection and preparation
Analysis of soil physicochemical properties
Isolation of SCN cysts and egg density measurement
Growth chamber evaluation of SCN suppressiveness
Evaluation of rhizosphere and cyst microbiota for their ability to suppress SCN
Egg hatching assay
Cyst bacterial community profiling via amplicon sequencing
Bioinformatic analyses of amplicon sequencing
Isolation of candidate bacteria, identification, and phylogenetic analysis
Anti-SCN activity of Chitinophaga and Dyadobacter isolates
Colonization of the SCN cyst and female by Chitinophaga and Dyadobacter
Measurement of chitinase activity in Chitinophaga
Whole genome sequencing of Chitinophaga isolates CN7 and C54
Real-time qRT-PCR analysis of transcripts from Chitinophaga GH18 genes
Cloning, expression and partial purification of Chitinophaga GH18 genes
Real-time qRT-PCR analysis of transcripts from soybean defence-related genes
Effect of Chitinophaga and Dyadobacter on egg density and hatching. The C soil was inoculated with Chitinophaga CN7 and C54 and Dyadobacter D18C, individually and in combination (CN7/C54, CN7/D18C, C54/D18C, and CN7/C54/D18C), to determine whether they suppress SCN. For all inoculations, regardless of the number of strains used, the cell density was 1 × 10^7 CFU/g of soil. After thoroughly mixing the soil inoculated with bacterial cells, 1500 eggs (per 100 g of dry soil) were gently but thoroughly mixed with the inoculated soil. Three PVC pots filled with each treated soil were used to grow soybean plants as noted above. The plants were harvested 56 days after seeding.
The egg density was measured. This experiment was repeated twice. We grew a new soybean plant in the pots used for evaluating the Chitinophaga isolates, without adding a new batch of bacterial cells, to determine whether they could continuously suppress egg hatching. After another 56 days of soybean growth, we measured the egg density and hatching, and ANOVA and the LSD test (p < 0.05) were used to evaluate significant differences among the treatments. SCN suppressive soils were first identified in northeastern China by Sun and Liu. The S soils used in the current study were collected from two fields in this region, Baicheng of Jilin Province (BC; N 45°37′ E 122°47′) and Fulaerji of Heilongjiang Province (FL; N 47°20′ E 123°62′), with 45 and 37 years of soybean monoculture, respectively, as previously described. The C soils were collected from two fields, located near the S soil collection sites, with only three years of soybean monoculture. Soil cores (depth of 0–30 cm), including soybean roots, were sampled from the rows of plants in a zigzag pattern at the time of crop harvest, and five random sites in each field were sampled. All soil cores from each field were mixed thoroughly and sieved through a 2-mm mesh to remove plant debris and stones. Selected physical and chemical properties of the S soils from BC and FL are listed in Table S1. Advanced Standard Technical Services (Beijing, China) analysed the physicochemical properties of the soils. Total N and P were measured using an AA3 HR AutoAnalyzer (SEAL Analytical) following the procedures provided by the manufacturer. The data were viewed and analysed using Windows-based AACE software (SEAL Analytical). Soils containing SCN cysts were washed with a vigorously applied water stream through an 850 µm aperture sieve onto a 250 µm aperture sieve, and the cysts were extracted by centrifugation in 76% (w/v) sucrose solution at 2500 rpm for 6 min. Eggs were released by breaking the cysts in a 40 ml glass tissue grinder (Fisher catalogue No. 08–414-10D). The eggs were separated from the debris by centrifugation in a 35% (w/v) sucrose solution for 5 min at 2500 rpm and collected on a 25 µm aperture sieve. The resulting egg samples were dispensed into 12-well tissue culture plates (Nest Biotechnology) and counted using an inverted microscope (Olympus CK40) to calculate the egg density per 100 g of dry soil. We evaluated the SCN suppressiveness of the following soil samples: (i) S soils, (ii) C soils, (iii) transfer soils prepared by mixing 10% S and 90% C soils (CS), (iv) S soils incubated at 80 °C for 1 h (S80), and (v) S soils treated with formalin (SF). For the heat treatment, a 1000 mL glass flask containing 500 g of each S soil (with an adjusted moisture content of approximately 10%) was placed in a water bath set at 80 °C for 1 h. For the formalin treatment, 1.5 kg of each S soil was thoroughly mixed with formalin (3.8 mL of 40% formaldehyde per kg of soil) in a 4.5 L plastic bag. We multiplied SCN and produced cysts using the susceptible soybean variety 'Sturdy' planted in autoclaved soil (collected from a field in the Changping district, Beijing). The cysts formed were extracted, and egg suspensions were prepared as described above. Isolated eggs were treated with 0.1% sodium hypochlorite for 2–3 min and rinsed with sterilized water 3 times. Each soil sample was gently but thoroughly mixed with eggs (1500 eggs/100 g of dry soil), and 500 g of treated soil with a moisture content of approximately 10% (v/w) was placed in each PVC pot (9 cm diameter).
Two seeds of the variety Sturdy were sown in each pot, which was covered with a polyethylene bag to retain moisture during seed germination. The seedlings were thinned to a single plant after 1 week. The growth chamber was set at 23–25 °C with a 16-h light/8-h dark cycle, and the plants were watered every 2 days. Three replicates (completely randomized) were used for each treatment. Soybean roots harvested 56 days after seeding were stained with fuchsin to measure the degree of penetration by J2s. The density of eggs (the number of eggs/100 g of dry soil) in each pot was measured after plant harvesting, and ANOVA and the LSD test (p < 0.05) were used to evaluate significant differences among the treatments. To determine whether the microbiota associated with cysts and the soybean rhizosphere in S soils contribute to SCN suppressiveness, the preparations noted below were mixed with 450 g of C soil. The following treatments were prepared to evaluate the involvement of cyst-associated microbiota: (i) native SCN cysts directly extracted from 50 g of field S soils, (ii) newly formed cysts produced in S soils after 56 days of soybean growth following the inoculation of 1500 eggs per 100 g of dry soil, and (iii) cyst suspensions prepared by grinding the cysts from (ii). To evaluate the involvement of soybean rhizosphere microbiota, we performed the following treatments: (i) soybean seedlings grown in the S soil for 2 weeks were transplanted to the C soil, (ii) rhizosphere soils were collected from soybean roots prepared as described in (i), as previously described, and (iii) a soybean rhizosphere and root microbiota suspension was prepared by grinding the roots with attached soil particles from (i) in a pestle and mortar in the presence of distilled water. C soils were inoculated with 1500 eggs/100 g of dry soil and thoroughly mixed before sowing soybean seeds. Two seeds were sown in each PVC pot (9 cm diameter) filled with 500 g of treated soil, with an initial moisture content of approximately 10% (v/w). Plants were grown in a growth chamber as described above. Cysts extracted from SCN-infested field soils and newly formed cysts extracted from autoclaved soil (collected from a field in the Changping district in Beijing) after growing soybean plants under greenhouse conditions (as described above) were used. Egg hatching rates of the cysts extracted from field soils and of the newly formed cysts in the S, C and CS soils or autoclaved soil (control) after 56 days of soybean growth were measured by placing 100 cysts on a micro sieve in 0.05% w/v ZnCl2 solution (used to stimulate egg hatching) for 14 days. The total number of juveniles hatched from each cyst sample was counted using an inverted microscope (Olympus CK40). Six-well tissue culture plates were employed for counting. Cysts prepared using autoclaved soil were used to measure egg hatching rates in S and C soil extracts, which were prepared by shaking 100 g of soil in 250 ml of distilled water in a flask for 48 h and then passing the supernatant through a 0.22 µm aperture sieve to eliminate microorganisms.
After adjusting the concentration to 20 ng/µL, the V4 region of the 16S rRNA gene was amplified in triplicate reactions using the specific bar-coded primer pair 515F (5′-GTGCCAGCMGCCGCGGTAA-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′) in a Veriti thermal cycler (Applied Biosystems). The PCR conditions were initial denaturation at 94 °C for 5 min, followed by 35 cycles of 94 °C for 50 s, 54 °C for 30 s and 72 °C for 40 s, and a final extension at 72 °C for 10 min. Negative control reactions with no DNA were included. The resulting PCR products were separated via agarose gel electrophoresis, and the amplicons were purified using a DNA Gel Extraction kit (Takara) and pooled at equal concentrations before sequencing. Paired-end sequencing was performed using the Illumina MiSeq sequencer at Allwegene Biotechnology Co., Ltd. (Beijing, China). Sequences were quality-trimmed using Trimmomatic v0.36 and assigned to individual samples based on their barcodes using QIIME. De novo and reference-based chimaera checks were performed, and sequences characterized as chimeric were removed. Sequence reads were binned into operational taxonomic units (OTUs) at the ≥ 97% sequence similarity level using an open-reference OTU picking protocol in the UPARSE pipeline, and the most abundant sequences from these OTUs were taken as representative sequences. Taxonomic assignment of these OTUs was performed using the Basic Local Alignment Search Tool (BLAST) against a subset of the Silva database. We calculated the alpha diversity, including Shannon's diversity and the Simpson index, on the OTU table using the R package vegan. Analysis of variance and the LSD test were used to evaluate the results and compare the treatment mean values. Treatments were considered significant when p < 0.05. Venn diagrams were created to identify unique and shared OTUs between different treatments using the R package VennDiagram. The linear regression analysis relating SCN egg densities to bacterial diversity and specific taxa was conducted using the R package ggplot. Beta diversity calculations were performed on non-rarefied OTU counts. The Bray–Curtis dissimilarity matrix for cluster analysis was calculated using the function vegdist in the vegan R package on log2-transformed per-mille (‰) OTU relative abundances. The OTUs with a relative abundance (RA) above 1‰ in at least one sample were included in the analysis. The average RA percentage of abundant phyla and families in the rhizosphere and cysts sampled after the five soil treatments was calculated based on classified OTU reads and subsequently plotted using the package ggplot2. Analysis of variance and the LSD test were used to evaluate the data and compare the treatment mean values. The OTUs enriched in the cysts formed in the S (both Baicheng and Fulaerji) and CS soils compared with the cysts formed in the C soils were identified and visualized in ternary plots, in which linear statistics were employed on RA values (log2, > 1‰ threshold) using the Limma R package. Differentially abundant OTUs between groups were identified using a moderated t-test, and the obtained P-values were adjusted using the Benjamini–Hochberg correction method. Heatmaps were constructed to visualize enriched OTUs using the function heatmap.2 in the gplots package. One hundred cysts isolated from each S soil sample were crushed using a glass grinder and resuspended in 2 ml of sterile phosphate buffer. After serially diluting the suspension, 100 µl aliquots of the 10^−4 to 10^−7 dilutions were spread on R2A agar plates using glass beads.
After incubating the plates at 25 °C for 2–5 days, distinct bacterial colonies based on morphology, colour and shape were picked and streaked on new R2A agar plates for purification and identified based on their 16S rRNA gene sequence. Cultures of the purified strains in 25% glycerol were stored at −80 °C. A single colony was picked for colony PCR using 16S rRNA primers. Sequences of each PCR product were quality checked, trimmed and used to search the NCBI nucleotide database. After aligning the 16S rRNA gene sequences using CLUSTAL_X, a neighbour-joining phylogenetic tree with the Kimura 2-parameter model was constructed using MEGA 5 with 1000 bootstraps.
Anti-SCN activity of Chitinophaga and Dyadobacter isolates. Chitinophaga and Dyadobacter isolates cultured on R2A plates at 25 °C were used to prepare cell suspensions in 10 mM MgCl2 (1 × 10^7 CFU ml^−1). Juvenile mortality caused by individual isolates was measured by adding 1 ml of cell suspension and 100 J2s to each well of a 12-well tissue culture plate (Nest Biotechnology Co. Ltd), incubating the plate at 25 °C, and counting dead J2s every day for up to 7 days. To determine whether these strains affected egg hatching, we mixed 20 g of autoclaved C soil with each isolate (1 × 10^7 CFU/g of soil) and 100 cysts (freshly produced using plants grown in autoclaved soil) and incubated the mixtures for 7 days. Autoclaved soils mixed with cysts but no bacterial cells served as controls. Cysts in the treated soils were recovered as described above and placed on a micro sieve in ZnCl2 solution. Egg hatching was measured in a 12-well tissue culture plate using an inverted microscope. SCN eggs and juveniles were photographed using a Nikon ECLIPSE 80i compound microscope equipped with a Canon EOS 600D digital camera, and the resulting images were recorded using the ImageAnalysisSystem11 image processing software.
Colonization of the SCN cyst and female by Chitinophaga and Dyadobacter. Cells of each Chitinophaga and Dyadobacter isolate (1 × 10^7 CFU/g of dry soil) and SCN eggs (1500 eggs/100 g of dry soil) were gently but thoroughly mixed with autoclaved C soil. After growing plants in the treated soils for 56 days, as described above, female and cyst samples were collected from soybean roots and crushed using a glass grinder containing 1 ml of 100 mM sodium phosphate buffer at pH 7. The resulting suspensions were transferred to 1.5 ml tubes. A series of tenfold dilutions made using 100 mM sodium phosphate buffer (pH 7), with thorough vortexing before each dilution, were plated onto R2A and incubated at 25 °C to determine CFU.
Measurement of chitinase activity in Chitinophaga. The chitinase activity of Chitinophaga isolates CN7 and C54 was measured using a kit based on the dinitrosalicylic acid (DNS) method following the manufacturer's instructions (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). After incubation, cell-free culture broth was prepared by centrifugation at 10,000 rpm for 15 min; the reaction was stopped by placing the reaction tube at 100 °C for 5 min and centrifuging at 8000 rpm for 10 min at 4 °C. After diluting the supernatant tenfold, 700 μl of the diluted supernatant was mixed with 0.5 μl of DNS and heated at 100 °C for 10 min. Absorbance at 540 nm was measured using a Fluoroskan Ascent FL microplate reader (Thermo Fisher, Waltham, MA, USA). One unit (U) of enzyme activity was defined as the amount of enzyme needed to release 1 mg of N-acetyl-glucosamine per hour, and activity was expressed as U ml^−1.
Whole genome sequencing of Chitinophaga isolates CN7 and C54. Genomic DNA from the Chitinophaga isolates CN7 and C54 was extracted using an SDS-based method.
The extracted DNA was visualized using agarose gel electrophoresis and quantified using a Qubit 2.0 Fluorometer. Sequencing libraries were generated using the NEBNext Ultra DNA Library Prep Kit for Illumina following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. Whole genomes of CN7 and C54 were sequenced using the PacBio Sequel platform and Illumina HiSeq/NovaSeq PE150 by Allwegene Biotechnology Co., Ltd. (Beijing, China). Genome assembly was performed using SMRT Link v5.0.1. Coding genes were then predicted and functionally annotated against databases including NR, CAZy and COG. Chitinophaga isolate CN7 cells (10 9 CFU/ml) were added to each well containing 100 SCN eggs in 12-well tissue culture plates and incubated for 0, 24 and 48 h. Bacterial cells cultured in the absence of SCN eggs were used as controls. Bacterial RNA was extracted using the Zhuangming U-Fast bacterial RNA extraction kit following the manufacturer's instructions. After measuring the RNA concentration, 1 μg of RNA was converted to complementary DNA (cDNA) using a Tiangen reagent kit. SYBR Green PCR master mix (Takara, Mountain View, CA, USA) was used for quantitative RT-PCR on an ABI Prism 7500 (Life Technologies, Carlsbad, CA, USA) with the programs recommended by the manufacturer. The primers used for amplifying three glycosyl hydrolase 18 (GH18) genes of Chitinophaga , chpbs_4014 , chpbs_4904 , and chpbs_5684 , are listed in Table S2. The glyA gene was used as an internal control. Cycle threshold (CT) values were used to quantify relative gene expression levels with the comparative 2 −ΔΔCT method (sketched below). For heterologous protein expression, the three GH18 genes of Chitinophaga , chpbs_4014 , chpbs_4904 , and chpbs_5684 , were cloned into the MCS of pET28a using the ClonExpress II one-step cloning kit (Vazyme) to construct protein expression vectors (the primers used are listed in Table S3), and the vectors were transformed into E. coli BL21 for protein expression. Transformants were inoculated into LB containing 40 μg/ml kanamycin, incubated at 37 °C until the OD 600 value reached 0.8, and then induced with IPTG (final concentration of 0.1 mM) for 20 h at 16 °C. The cultures were collected by centrifugation, resuspended in buffer (25 mM Tris–HCl, 200 mM NaCl; pH 7.8), and disrupted by ultrasonication. After centrifugation at 12,000 rpm for 30 min at 4 °C, the supernatants were transferred to a Ni 2+ -nitrilotriacetic acid (NTA) column and incubated at 4 °C for 2 h. Based on preliminary experiments, non-target proteins were eluted with 50 mM imidazole buffer (25 mM Tris–HCl, 200 mM NaCl, 50 mM imidazole; pH 7.8), and His-tagged proteins were eluted with 150 mM imidazole buffer (25 mM Tris–HCl, 200 mM NaCl, 150 mM imidazole; pH 7.8). The collected proteins were assayed for protein concentration using an ultra-micro spectrophotometer and analysed by SDS-PAGE. To determine whether Dyadobacter D18C cells attached to the surface coat of J2s induce host defence responses during root invasion, we quantified transcripts of selected defence-related genes in roots of soybean plants grown in twice-autoclaved soil after treatment with ~ 2000 J2s, D18C-attached J2s (freshly prepared J2s were co-incubated with ~ 2000 D18C cells overnight and collected on a 25 µm sieve), D18C cells alone (~ 1 × 10 7 cells/g of soil), or 5 ml of 50 μM PBS buffer, pH 7.0 (control).
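The comparative 2−ΔΔCT calculation referenced above (and used again for the soybean defence genes below) can be sketched as follows. The CT values are hypothetical illustrations, with glyA as the reference gene.

```r
# Sketch of the comparative 2^-ddCT method; CT values are hypothetical.
ct_target_treated <- 21.8   # GH18 gene, eggs present
ct_ref_treated    <- 17.2   # glyA internal control, eggs present
ct_target_control <- 24.5   # GH18 gene, no eggs (calibrator)
ct_ref_control    <- 17.0   # glyA internal control, no eggs

d_ct_treated <- ct_target_treated - ct_ref_treated  # normalise to reference
d_ct_control <- ct_target_control - ct_ref_control
dd_ct        <- d_ct_treated - d_ct_control         # normalise to calibrator

fold_change <- 2^(-dd_ct)   # relative expression; > 1 means up-regulation
fold_change                 # here 2^(-(4.6 - 7.5)) = 2^2.9, about 7.5-fold
```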
Each treatment in 5 ml of 50 μM PBS buffer was applied to the soil near the roots of 7-day-old plants (six biological replicates). The treated roots were gently removed from the soil 7 days later, thoroughly rinsed with tap water to remove attached soil, air-dried on paper tissues, weighed, and divided into two portions. One portion (approximately 0.25 g) was wrapped in aluminium foil, frozen in liquid nitrogen, and stored at − 80 °C until RNA extraction. The remaining roots were stained with fuchsin to assess the degree of penetration by J2s. Total RNA was isolated using the RNAprep Pure Plant Kit (Tiangen, Beijing, China), and 1 μg of RNA was converted to cDNA using the Superscript III Reverse Transcription Kit (Invitrogen). SYBR Green PCR master mix (Takara) was used for quantitative RT-PCR on an ABI Prism 7500 with the programs recommended by the manufacturer. The primers used are listed in Table S4. The actin gene was used as an internal control. Cycle threshold (CT) values were used to quantify relative gene expression levels with the comparative 2 −ΔΔCT method. Supplementary Material 1: Table S1. Physical and chemical properties of SCN suppressive and conducive field soils used in this study. Table S2. Primers used to quantify the transcript levels of GH18 genes of Chitinophaga isolate CN7 by qRT-PCR. Table S3. Primers used to amplify three GH18 genes of Chitinophaga isolate CN7 for protein production in Escherichia coli. Table S4. Primers used to quantify the transcript levels of selected defense-related genes in soybean via qRT-PCR. Fig. S1. SCN egg densities and soybean growth in the suppressive (S) and conducive (C) soils collected from soybean fields in Baicheng and Fulaerji. A. Egg densities in the S and C soils. B. Soybean shoot weight after 56 days of growth using the S and C soils in a growth chamber. C. Cyst densities and soybean shoot weight after 56 days of growth in the following soils inoculated with 1,500 eggs per 100 g of dried soil: S and C soils, transferred soil (CS; created by mixing 10% S soil with 90% C soil), heat (80°C)-treated S soil (S80), and S soil fumigated with formalin (SF). Different letters above the bars denote statistically significant differences according to the LSD test ( p < 0.05). Fig. S2. SCN suppression by different field soils and soil treatments. Greenhouse trials were performed to determine how various soils and soil treatments affect SCN egg densities and whether bacteria and fungi in the rhizosphere and SCN cysts contribute to suppressing SCN. The suppressive (S) and conducive (C) soils collected from Baicheng (A) and Fulaerji (B) were used. Egg densities were measured after 56 days of soybean growth following inoculation with 1,500 eggs/100 g of dried soil. The classical trial for suppressive soil included the treatments suppressive (S) soil and conducive (C) soil. To illustrate the contributions of cyst (Cy) and soybean rhizosphere (Rh) microbiota to SCN suppressiveness, microbial suspensions were prepared from the cysts and rhizosphere of the S soil in amounts equivalent to the 10% S soil in the CS treatment and then mixed with 90% C soil. The rhizosphere microbial suspensions were prepared by collecting rhizosphere soil (including roots) from soybean seedlings grown in S soil for two weeks, in an amount equivalent to the 10% S soil in the CS treatment, and grinding it in distilled water (CS-Rh). Cyst microbial suspensions were created by extracting cysts from native S soil, in an amount equivalent to the 10% S soil in the CS treatment, and grinding them in distilled water (CS-Cy).
The microbial suspensions were also treated with antibiotics to kill fungi (pimafucin, 200 μg/mL) and retain bacteria (CS-Rh-bacteria or CS-Cy-bacteria), to kill bacteria (penicillin, 100 U/mL, plus streptomycin, 100 μg/mL) and retain fungi (CS-Rh-fungi or CS-Cy-fungi), or to kill all microbiota with the three antibiotics (CS-Rh-antibiotics or CS-Cy-antibiotics). Fig. S3. Within-sample diversity (α-diversity) of the bacterial community inhabiting the cysts isolated from five soil treatments. A. Bacterial community cluster analysis based on Bray–Curtis dissimilarity showed clear separation of SCN cysts in the S and CS soil treatments from those in C, S80 and SF (left; Baicheng and Fulaerji). The horizontal bar within each box indicates the median value. The tops and bottoms of the boxes represent the 75th and 25th percentiles, respectively. Linear regression relationship between cyst bacterial diversity and egg densities. The regression line is in brown, and the shaded region represents the confidence interval (geom_smooth function, method = lm). B. Cluster analysis based on Bray–Curtis dissimilarity indicates clear separation of the bacterial communities inhabiting SCN cysts in the S and CS soil treatments from those in C, S80, and SF. Bacterial OTUs with RA > 1‰ in at least one sample were included in the analysis. Fig. S4. Averaged relative abundance (RA) and linear regression relationship between the most dominant bacterial phyla and the egg density. Data from analysing the S and C field soils collected in (A) Baicheng and (B) Fulaerji were used for this analysis. The brown line and the shaded region on the right side indicate the regression line and the confidence interval (geom_smooth function, method = lm), respectively. Only taxa with RA of > 1% in at least one sample were included in the analysis. Fig. S5. Average relative abundance (RA) of the most dominant bacterial families associated with the cysts formed in different soils. Chitinophagaceae was more abundant in the cysts formed in the S and CS soils than in those formed in the C soils collected from the two provinces (A = Baicheng, B = Fulaerji). Only taxa with RA of > 1% in at least one sample were included. Fig. S6. Ternary plots and heatmaps demonstrating the OTUs significantly enriched in heat (S80)- and formalin (SF)-treated soils compared to S across two distinct geographical locations (Baicheng-BC, A and B; Fulaerji-FL, C and D). Dark golden circles mark cyst OTUs significantly enriched in S80 relative to S (S80 > S OTUs). Pink circles mark cyst OTUs significantly enriched in SF relative to S (SF > S OTUs). Green circles mark cyst OTUs simultaneously enriched in S80 and SF compared to S (S80+SF > S OTUs). Each circle represents one OTU. The size of each circle represents its relative abundance. The position of each circle is determined by the contribution of the indicated compartments to the total relative abundance. Only taxa with RA > 1‰ in at least one sample were included in the analysis. Fig. S7. Detection and quantification of Chitinophaga in the suppressive soil and cysts by qPCR. A. The gel image shows specific amplification of the targeted Chitinophaga DNA region using the primer pair hu3135254F (5′-CATTGAGAGGCATCTTTTG-3′) and hu1185451R (5′-CGGTGCTTATTCATCTGGTA-3′). M indicates DNA size markers (2000 bp). Lanes 1–6 show PCR amplification products in the absence of Chitinophaga genomic DNA. Lanes 7–12 show PCR amplification products in the presence of Chitinophaga genomic DNA. B.
Estimation of the percentage of Chitinophaga among the bacterial cells in the suppressive soil and those associated with cysts. The primers 27F and 1492R, designed to amplify the bacterial 16S rRNA genes, were used to quantify the total bacterial community size in each sample. C. Chitinophaga RA (%) in the rhizosphere, roots and cysts of suppressive soils challenged with nematodes in a previous study by Hussain et al. [1]. Fig. S8. Circos plot displaying the genomic features of the chromosome-level genome of Chitinophaga isolate CN7 and the COG functional gene distribution. The legend is shown on the right side. Fig. S9. Circos plot displaying the genomic features of the chromosome-level genome of Chitinophaga isolate C54 and the COG functional gene distribution. The legend is shown on the right side. Fig. S10. Relative expression of chitinases encoded by three glycosyl hydrolase 18 (GH18) genes of Chitinophaga isolate CN7 challenged with SCN eggs, their heterologous expression in Escherichia coli BL21, and their purification. The PCR primers used for expression vector construction are listed in Table S3. A. Expression of GH18 genes 0, 24 and 48 h after inoculation of 1 ml of CN7 cell suspension (10 9 CFU/mL) onto 100 SCN eggs. B. SDS-PAGE of cell lysates before and after IPTG induction. The bands corresponding to target proteins are marked in black boxes. Proteins in the whole cell lysate (A) and in the supernatant (S) and precipitate (R) of the lysate after ultrasonication and centrifugation were analyzed. C. SDS-PAGE of purified proteins. The target protein bands are marked in black boxes. M = molecular weight (kDa) markers; T = cell lysate before passing through Ni-NTA; 1–2 = eluates with 50 mM imidazole buffer; 3–7 = eluates with 150 mM imidazole buffer; 8 = eluate with 500 mM imidazole buffer.
Changes in Doppler Ultrasonography and Echotexture Parameters in Cows During the Last 10 Days of Pregnancy
e3a63397-4249-4099-bf81-705b3b50a07c
11911928
Cardiovascular System[mh]
Introduction The placenta is a structure composed of trophoblastic cells, foetal membranes, placentomes, and amniotic and allantoic fluids (Peter ). In cows, the allantochorionic membrane and endometrium are modified at specialized contact points called placentomes. This structure, which consists of maternal caruncles and foetal cotyledons, is the organ where respiratory gases, nutrients and waste products are exchanged. The growth and development of placentomes are essential for foetal growth and development (Laven and Peters ). Increasing blood flow to the uterus during the last trimester of pregnancy helps to improve uterine perfusion, which in turn meets the foetus' increasing need for nutrients and oxygen (Ferrell and Ford ). Placental perfusion adapts during pregnancy to the changing demands of the growing foetus, which is crucial for foetal development (Ferrell ). Uterine arteries, which are crucial to uterine perfusion, have historically been examined for blood flow using invasive techniques (Ford and Chenault ; Waite et al. ). However, these techniques have been replaced by colour/power Doppler sonography (Bollwein, Baumgartner, and Stolla ). Blood flow volume (BFVo), vascular resistance, pulse waves and variations in blood flow are all measured via non‐invasive Doppler sonography (Dickey ). This imaging method is used in human medicine for the diagnosis and monitoring of pregnancies with pathologies (Goldenberg and Cliver ). It can also be used in humans to assess both foetal and placental circulation in advanced pregnancy to facilitate the diagnosis and monitoring of important conditions, such as foetal growth restriction, foetal anaemia and preeclampsia (Mone, McAuliffe, and Ong ). However, in veterinary medicine, this method is less developed, and the evaluation of uterine blood flow in large animals has not been as widely implemented as it has in human medicine. Studies conducted in the late stages of pregnancy have generally investigated how drug administration affects pregnancy (Kim‐Egloff et al. ; Waldvogel and Bleul ). Doppler sonography has also been used to study uterine perfusion in cows during normal pregnancy (Bollwein, Baumgartner, and Stolla ; Herzog et al. ; Panarace et al. ). Because the uterine artery provides the major portion of the blood supply to the bovine uterus, changes in blood flow in these vessels reflect changes in uterine perfusion (Vollmerhaus ). These changes can also be revealed in detail with the spectral mode of Doppler. Studies have also been conducted to monitor placental blood flow throughout pregnancy in large animal species, particularly mares (Bollwein et al. ). Colour Doppler sonography, another Doppler mode, describes blood flow based on the frequency shift of a flow volume, whereas power‐mode Doppler sonography displays the strength of the Doppler signal in colour by detecting all moving particles in the blood, which allows recording of blood flow independent of blood flow velocity (BFVe) and direction. This is crucial for the quantitative and semiquantitative assessment of blood flow in tissues with low BFVe and numerous blood vessels, such as placentomes (Bude and Rubin ). Nowadays, using computer‐assisted image analysis instead of traditional subjective image analysis allows for a more objective evaluation of the tissue being examined (Bader et al. ).
Primarily utilized in human medicine as part of evidence‐based practice, this methodology has also been adapted for farm animal reproduction (Demi̇r, Kaçar, and Polat ; Schmauder et al. ; Cengiz et al. ). The ultrasonographic appearance or image pattern of an organ is referred to as 'echotexture'. Computer‐assisted image processing of B‐mode ultrasonography (USG) has been used in recent years to evaluate female reproductive organs, including the corpus luteum (Davies et al. ), ovarian follicles (Vassena et al. ), uterus (Ginther ) and placentome (Demi̇r, Kaçar, and Polat ). Ultrasonographic images consist of pixels numerically represented in grey tones (0–255) according to their brightness intensities (Tom, Pierson, and Adams ). In the evaluation of the tissue echo image, a mathematical matrix is created using these numerical values and the average pixel value is analysed (Singh, Adams, and Pierson ). Sonographic evaluation of the placenta has become an important part of routine foetal monitoring. Placental thickness is a technique utilized for diagnosing pregnancy abnormalities (Jauniaux, Ramsay, and Campbell ; Tongsong and Boonyanurak ). Through such monitoring techniques, the thickness of the myometrium can be determined by transrectal sonographic measurement of the uterus and placenta (Renaudin, Troedsson, and Gillis ). The thickness of the placenta can help estimate gestational age in animals and humans (Afrakhteh et al. ; Zoller et al. ; Campos et al. ). Studies have shown a positive correlation between gestational age and placental thickness, especially in mares and cows (Campos et al. ; Maldonado et al. ; Kimura et al. ). In addition, myometrial contraction activity is critically important for optimal reproductive performance; it is necessary for the transport of sperm in the oviduct and the movement of the embryo within the uterus, and it plays a significant role during delivery (Eytan ). Coordinated contractions assist in the expulsion of the foetus, placenta and lochia, and in the involution of the uterus (Kündig et al. ). The disruption of the physiological mechanism of myometrial contraction is the source of many obstetric (Noakes, Parkinson, and England ) and reproductive (Slama, Vaillancourt, and Goff ) disorders that directly affect fertility. Towards the end of pregnancy, both the placenta and the foetus undergo a profound process of maturation (Ferrell and Ford ). The maturation process of the placenta is crucial for the occurrence of a normal delivery. The present study was therefore designed to describe the sonographic characteristics of this physiological process in cows during the last days of pregnancy. The aim of this study is to investigate changes in Doppler sonographic measurements of the uterine arteries supplying blood to the placenta, as well as differences in placental echotexture/perfusion and myometrial thickness, during the last 10 days of pregnancy in cows. In addition, the study will evaluate the correlations between B‐mode and Doppler ultrasonographic parameters to examine their interrelationships. Materials and Methods 2.1 2.2 2.3 2.4 2.5 Statistical Analysis The sample size ( n = 8) was determined on the basis of an effect size of 1.87 reported in a previous study (Demi̇r, Kaçar, and Polat ), with a power of 95% and a significance level of 5%. The G*Power program (Version 3.1.9.7, Germany) was used for the power analysis (a sketch of this calculation is given below).
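As a rough illustration, the sample-size calculation described above can be approximated in R with the pwr package. The call below assumes a two-sample t-test; because the paper does not state the exact test family used in G*Power, the returned n may differ slightly from the reported value.

```r
# Sketch of an a priori power analysis analogous to the G*Power calculation.
# Assumes a two-sample t-test, which the paper does not state explicitly.
library(pwr)

pwr.t.test(d = 1.87,             # effect size from the cited previous study
           sig.level = 0.05,     # alpha = 5%
           power = 0.95,         # power = 95%
           type = "two.sample")  # returns n per group (roughly 8-9 here)
```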
The suitability of the data for normal distribution was assessed using the Shapiro–Wilk test, homogeneity of variances was examined using the Levene test, and the sphericity assumption was verified using the Mauchly test. Changes in placentome area, MT, myometrium MGV, placental MGV and placental perfusion over sampling time were determined using repeated-measures analysis of variance (ANOVA). Bonferroni correction was applied for pairwise comparisons between days. For the analysis of BFVe, BFVo, RI, PR and DM measurements obtained from pregnant and non‐pregnant horns, two‐way ANOVA was employed to assess the group effect (pregnant horn vs. non‐pregnant horn), the time effect and group × time interactions (a minimal sketch of this mixed design is given below). Tukey's multiple comparison test was used for pairwise comparisons within groups across days and between groups on sampling days. The Pearson correlation coefficient was used to evaluate relationships between parameters. A significance level of p < 0.05 was adopted for all statistical analyses. The data were presented as mean ± standard error of the mean (SEM). GraphPad Prism (Version 8.0, GraphPad Software Inc., San Diego, CA, USA) and SPSS 26 (SPSS Inc., Chicago, IL, USA) software packages were utilized for data analysis. Animals In the present study, eight clinically healthy Brown Swiss cows at Day 270 of gestation were used; they were aged between 3 and 7 years (mean 3.40 ± 0.45 years), had each calved at least once, and had body condition scores ranging from 3.25 to 3.75. Throughout the study period, cows were managed and fed under standard conditions, receiving a mixed ration consisting of pasture grass, wheat straw and cattle‐concentrated feed. Experimental Design After the oestrus synchronization protocol, the artificially inseminated cows were subjected to ultrasound examination (Draminski, iScan, Poland) on Day 28, and the cows determined to be pregnant were included in the study. Imaging was performed serially over time, and data from the last 10 days of pregnancy were used, up to 1 day before calving. Examinations were conducted at the same time each day (i.e., between 07:30 and 12:00), with each cow's total imaging time limited to a maximum of 15 min. A single operator conducted the imaging. Prior to the procedure, the cows were placed in separate paddocks and allowed to rest in a calm environment for at least 30 min. B‐Mode and Doppler USG B‐mode and Doppler ultrasonographic images were obtained using a Doppler ultrasound system (ESAOTE MyLab DeltaVet, Esaote Biomedica, Genova, Italy) with its automatic multifrequency (5–10 MHz) linear probe. To standardize the USG settings, the following initial image parameters were set before the first examination: frequency 7.5 MHz, image width 5 cm (maximum) and focus 2.5 cm. All examinations were performed using these standard settings. The anatomical position of the uterine artery (arteria uterina media) was established according to Bollwein, Baumgartner, and Stolla ( ). The insonation angle was maintained between 20° and 60° for every blood flow analysis. Doppler USG images were recorded such that at least three consecutive waveforms could be observed in high quality. The uterine artery's diameter (DM) was measured using three different B‐mode images, each of which showed a perfect circular structure free of artefacts in the transverse section. The average of these three measurements was used for the computations.
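The mixed-design analysis referenced in the statistics section above can be sketched in base R as follows. The long-format data frame `dat` (columns `cow`, `horn`, `day`, `area`, `BFVe`) is a hypothetical placeholder for the study data; the original analysis was run in SPSS, so details such as sphericity corrections may differ.

```r
# Sketch of the analyses from the statistics section; `dat` is a hypothetical
# long-format data frame (cow, horn, day, area, BFVe), not the study data.

dat$cow  <- factor(dat$cow)
dat$horn <- factor(dat$horn)   # pregnant vs. non-pregnant
dat$day  <- factor(dat$day)    # prepartum days 10 ... 1

# Repeated-measures ANOVA for one outcome over sampling days
summary(aov(area ~ day + Error(cow/day), data = dat))

# Bonferroni-corrected pairwise comparisons between days
with(dat, pairwise.t.test(area, day, paired = TRUE,
                          p.adjust.method = "bonferroni"))

# Two-way design: group (horn), time (day) and their interaction
summary(aov(BFVe ~ horn * day + Error(cow/(horn * day)), data = dat))
```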
After imaging the pregnant horn, myometrium images were obtained from the interplacentomal regions in the dorsal part of the uterus. Images from three different regions were selected, and the average of the three images was taken. If there were differences in thickness between different parts of the myometrium, the thickest part was used. Uteroplacental roughness and the distinguishability of the uterus and placenta (i.e., whether images showed a clear boundary line between these tissues) were recorded. Placentome images were taken from the dorsal curvature of the uterus, in the same region as the posterior placental location. Sampling was performed by selecting at least three different placentomes consecutively in the horizontal plane from the dorsal line of the horn. First, B‐mode imaging was used to determine the angle and image clarity. Then, colour Doppler mode images with the best blood flow imaging were selected. It was ensured that similarly sized placentomes in the region were imaged. At least three different placentomes were selected, and at least three sagittal images were captured from each placentome using both B‐mode and colour Doppler modes. Calculations were made on the average of these three measurements. Image Analyses ImageJ software (NIH, USA; Image Processing and Analysis in Java) was used for the analysis of B‐mode images. The PixelFlux software program (Chameleon Software, Münster, Germany) was employed for calculating placental perfusion values and haemodynamic parameters of the uterine artery in Doppler images (Figure ). Four regions of interest (ROIs) with square areas of 225 (15 × 15) pixels were chosen on the placenta after the B‐mode ultrasound images were examined with ImageJ software (Figure ). When choosing ROIs, great care was taken to make sure they were all contained within the myometrium. For each sample, the arithmetic mean of ROI values was calculated from the three different myometrium images obtained, resulting in a single mean grey value (MGV), which was then transferred to the software program for statistical analysis. The PixelFlux software program was used to determine pulse rate (PR), resistance index (RI), BFVe and BFVo via the pulse-wave Doppler image taken from the uterine artery. B‐mode images were also used to determine placentome area (PA), DM and myometrium thickness (MT). After selecting the spectra to be measured in the PixelFlux program, the desired data were obtained (Figure ); the simple definitions underlying the MGV and RI are sketched in the code example below. Results 3.1 3.2 3.3 3.4 The Pearson Correlation Coefficients Between Variables Determined by B‐Mode The Pearson correlation coefficients between the variables determined by B‐mode are given in Table . There were correlations between placental area and myometrial thickness ( r = 0.622, p < 0.01), myometrial MGV ( r = −0.430, p < 0.01), placental MGV ( r = −0.498, p < 0.01) and placental perfusion ( r = 0.351, p < 0.01) (Table ). B‐Mode/Echotexture The placentome area showed a decreasing trend from the beginning of sampling until delivery ( p = 0.005). Placentome area (cm 2 ) showed statistically significant differences over time between prepartum days 1–6 ( p = 0.049), 8 ( p = 0.009) and 10 ( p = 0.015). The placentome area was lowest on Day 1 prepartum (4.205 cm 2 ) compared to Days 6, 8 and 10 (Figure ). In terms of MT, a significant decrease was observed between the first 2 sampling days (prepartum days 9 and 10) and the last 2 sampling days (prepartum days 1 and 2) ( p = 0.009).
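As sign-posted in the image-analysis section above, the two image-derived quantities reduce to simple calculations: the mean grey value is the average of the 8-bit pixel intensities within an ROI, and the resistance index is conventionally defined from the peak systolic and end-diastolic velocities. The values below are hypothetical; in the study, these quantities came from ImageJ and PixelFlux.

```r
# Sketch of the image-analysis quantities; values are hypothetical examples,
# not study data (ImageJ and PixelFlux performed these steps in the paper).

# Mean grey value (MGV): average of 8-bit intensities (0-255) in a 15 x 15 ROI
roi <- matrix(sample(0:255, 15 * 15, replace = TRUE), nrow = 15)
mgv <- mean(roi)

# Resistance index from a spectral Doppler waveform:
# RI = (peak systolic velocity - end-diastolic velocity) / peak systolic velocity
psv <- 95   # cm/s, hypothetical peak systolic velocity
edv <- 38   # cm/s, hypothetical end-diastolic velocity
ri  <- (psv - edv) / psv   # 0.6 here; lower RI implies lower downstream resistance
```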
In particular, MT exhibited a significant decrease over the last 2 prepartum days (Figure ). The myometrium MGV showed an increasing trend throughout the study period, which was statistically significant ( p < 0.001). In particular, the last 3 prepartum days differed significantly from prepartum days 7–10 ( p < 0.05) (Figure ). Placentome MGV exhibited an increasing trend similar to that of myometrium MGV throughout the study period. Placentome MGV increased as parturition approached, and the measurements taken on these days were statistically significant in multiple comparisons ( p < 0.001). In particular, the difference between prepartum days 10 and 9 and the last 3 days was statistically significant ( p < 0.05). Placentome MGV was determined as 107.1 on prepartum day 10 and 131.9 on prepartum day 1 ( p < 0.05) (Figure ). According to the image analysis conducted on prepartum days 10 and 1, the area of the same placentome decreased (Figure ), whereas its MGV increased (Figure ). Doppler Placental perfusion exhibited a decreasing trend 3 days before parturition. Notably, the difference in placental perfusion between Days 3 and 1 prepartum was statistically significant ( p = 0.008) (Figure ). In this study, there was a group effect on uterine artery BFVe ( p < 0.001). The difference in uterine artery BFVe between pregnant and non‐pregnant horns was statistically significant on prepartum days 9 ( p < 0.05), 7 ( p < 0.001), 3 ( p < 0.05), 2 ( p < 0.001) and 1 ( p < 0.001). However, there was no statistically significant time effect on uterine artery BFVe (Figure ). The difference in uterine artery BFVo between pregnant and non‐pregnant horns was not statistically significant throughout the study period ( p > 0.05) (Figure ). The RI in non‐pregnant horns was numerically higher than in pregnant horns throughout the study period, but there was no statistically significant difference between the two groups ( p > 0.05). Additionally, there was a time effect on RI ( p = 0.005) (Figure ). The uterine artery PR of both horns increased significantly throughout the study period, with a time effect ( p = 0.009). PR on prepartum days 1 and 2 was higher than on prepartum days 8–10 ( p < 0.05) (Figure ). There was a group effect on uterine DM between pregnant and non‐pregnant horns ( p = 0.02), but there was no statistically significant time effect ( p > 0.05). Specifically, the difference between the two groups was statistically significant after prepartum day 7 (Figure ). The Pearson Correlation Coefficient Between Doppler Findings The correlation coefficients and significance levels between Doppler findings are provided in Table . Placental perfusion correlated with the RI ( r = −0.362, p < 0.01) and DM ( r = 0.343, p < 0.01) of the pregnant-horn uterine artery. Similarly, placental perfusion correlated significantly with the RI ( r = −0.323, p < 0.01) and DM ( r = 0.540, p < 0.01) of the non‐pregnant-horn uterine artery (Table ). Pregnant-horn BFVe showed a strong positive correlation with pregnant-horn uterine artery BFVo ( r = 0.818, p < 0.01) and DM ( r = 0.391, p < 0.01) (Table ). Discussion Transabdominal B‐mode/Doppler USG is commonly used in women for the diagnosis of foetal/placental developmental anomalies and placental lesions (Fadl et al. ). However, in veterinary medicine, particularly in farm animals, studies evaluating placental perfusion during pregnancy are quite limited and typically involve intermittent sampling during the last month of gestation (Kim‐Egloff et al.
; Waldvogel and Bleul ; Demi̇r, Kaçar, and Polat ; Hussein ). These studies mostly focus on investigating the effects of different medical agents on perfusion or assessing pathological conditions (Kim‐Egloff et al. ; Waldvogel and Bleul ; Hussein ). A literature search did not reveal any information on how daily ultrasonographic parameters physiologically change during the last 10 days of pregnancy, particularly regarding the process of placental maturation. 4.1 4.2 4.3 4.4 4.5 Feasibility of Measurements Ultrasonographic imaging requires precision, and the difficulty of administering intestinal spasmolytics during transrectal sonographic examinations in cows is one complicating factor. Intestinal movements make it difficult to keep the transducer fixed on the uterine artery and placentomes for a long time. Therefore, the transducer was temporarily removed from the measurement site and measurements were made again after waiting for a while. This eliminates problems caused by peristalsis (Waldvogel and Bleul ; Thijssen et al. ). In addition, other problems encountered during image acquisition include the angle of the transducer, the presence of faeces in the rectum, the movement of the foetus, the mobility of the mother and repeated measurements of the same area. Necessary precautions should be taken to minimize such problems, and the examination can be repeated after a certain period of time. The risk of error can be reduced by ensuring that the image of the examined area is homogeneous and by taking multiple images at the specified sampling point and averaging them. Echotexture Due to the high margin of error in echotexture evaluations conducted by eye, there has been a need for computer‐aided systems. Although tissue densities or characteristics determined by USG can be observed, they are not easily measurable by the human eye, and perception of greyscale varies from person to person, leading to excessive variability in image interpretation. Therefore, there is a tendency to achieve more quantitative results by evaluating ultrasonographic images using computer‐aided programs (Scully et al. ). The placenta is the organ that transfers hormones and nutrients from the mother to the foetus during pregnancy. Researchers have been investigating the echotextural properties of the placenta for a long time (Demi̇r, Kaçar, and Polat ; Cengiz et al. ; King ). A correlation has been reported between placental MGV and maternal age in humans, including in the first trimester of pregnancy (Pala et al. ). Measurements of placental MGV in the second and third trimesters in women have shown an increase in the level of calcification in matured and ageing placentas (de Paula et al. ). The increase in MGV observed in this study may be associated with increased calcification. Additionally, the distance between foetal and maternal blood vessels shortens in order to meet the increasing demands of the foetus in the last months of pregnancy (Gross, Williams, and Russek‐Cohen ). Accordingly, the increased connective tissue within the crypt can be considered a cause of the increase in placentome MGV. The myometrium of cattle consists of smooth muscle and is thus capable of spontaneous contractions (Kaufmann et al. ). This property, also known as myogenic activity, is regulated by hormonal and neural stimuli (Popescu et al. ).
The relationship between various hormones during the periparturient period and the regulation and expression of receptors involved in myometrial contractility has not been fully investigated in cows towards the end of pregnancy (Górriz‐Martín et al. ). Most in vitro studies conducted so far have focused on the myometrial contractility of cyclic cows and cows in the first 7 months of pregnancy (Kaufmann et al. ; Hirsbrunner et al. ). In one study, weekly ultrasonographic examinations from the second month of pregnancy until parturition in cows showed no change in myometrial thickness throughout pregnancy (Zoller et al. ). In mares, the thickness of the uterus and placenta did not change between Months 4 and 8 of pregnancy but significantly increased between Months 10 and 12. However, in mares with delayed delivery, the thickness of the uterus and placenta significantly decreased (Renaudin, Troedsson, and Gillis ). When the foetus enters the birth canal, a sensory stimulus initiates a neural pathway that terminates in the paraventricular nucleus of the hypothalamus, causing a release of abundant oxytocin from the posterior pituitary, which in turn stimulates myometrial contraction. These myometrial contractions result in pressure changes in the foetal villi, leading to alterations in hyperaemic and ischaemic conditions and, ultimately, physical separation of the feto‐maternal unit (McNaughton and Murray ). Our study suggests that myometrial thickness remains stable until the fourth day prepartum and decreases over the last 2 days, indicating the initiation of myometrial activity. Various changes occur in the cell population of the placenta during pregnancy. Such changes are necessary for the maturation of the placenta and the subsequent expulsion of the foetal part of the placenta after delivery. Placental maturation involves complex mechanisms encompassing morphological, functional and endocrinological processes (Woicke et al. ). Around the ninth month of pregnancy, the endometrial epithelium begins to thin due to a decrease in the number of trophoblast giant cells, which directly exposes the foetal trophoblast epithelium to the endometrium (Boos, Janssen, and Mulling ). Additionally, the decrease in foetal blood flow at the time of delivery leads to rapid contraction of the cotyledon villi (Leiser et al. ). In a study conducted in cows, the size of the placentome decreased after the 250th day of pregnancy (Laven and Peters ). Another study in cows found both increases and decreases in placentome size during the last month of pregnancy (Singh et al. ). In this study, a decrease in placentome area during the last 10 days prepartum was identified, which can be associated with the mechanism described above. Placentome Perfusion Differences in uterine artery blood flow variables directly affect placental perfusion values. This suggests that placental perfusion can be predicted using uterine artery blood flow. Many foetal chorionic epithelial cells undergo apoptosis when the foetus passes into the external environment. In cow placentomes, this is seen as a stage of placental maturation (Boos, Janssen, and Mulling ). At least some of this chorionic epithelial cell apoptosis is a result of decreased blood flow to the uterus (Kamemori et al. ). There are few studies examining the perfusion of placentomes physiologically in late pregnancy in cows (Demi̇r, Kaçar, and Polat ).
In one study, the perfusion of placentomes in both pregnant and non‐pregnant horns in cows did not change significantly in the last month prepartum (Kim‐Egloff et al. ). In this study, the decrease in placentome perfusion 3 days before delivery could be associated with both placental maturation and the reduction in placentome area. Additionally, the positive correlation between placentome perfusion and placentome area supports the mechanism described above. Spectral In the last trimester, the increase in the size of the foetus and the uterus is accompanied by an increased demand for nutrients and oxygen. It is assumed that this demand is met by the increased perfusion of the uterus and foetus (Satterfield et al. ). In cows, uterine blood flow increases linearly during the second half of pregnancy (Herzog et al. ). In a different study, from Days 30 to 270 of pregnancy, BFVe increased 3‐fold, uterine artery DM increased 20‐fold and BFVo increased 17‐fold (Panarace et al. ). Furthermore, in cows, as pregnancy progresses, RI decreases, whereas BFVe, BFVo and DM increase (Bollwein, Baumgartner, and Stolla ). In this study, the RI of the ipsilateral uterine artery was lower than that of the contralateral uterine artery, but DM, BFVe and BFVo were higher. These results were consistent with other studies (Bollwein, Baumgartner, and Stolla ; Panarace et al. ). Throughout pregnancy, both uterine BFVe and the associated BFVo generally trend upward (Bollwein, Baumgartner, and Stolla ; Herzog et al. ). In this study, a positive correlation was found between BFVe and BFVo in both pregnant and non‐pregnant uterine horns. Regarding BFVo, a statistically significant decrease was identified in the late pregnancy period (from prepartum 4.5 to 0.5 weeks) (Kim‐Egloff et al. ). Another study reported that uterine artery BFVe and BFVo did not significantly change throughout the study period during the last month of pregnancy (Waldvogel and Bleul ). Additionally, in cows, BFVo increased during the last month of pregnancy, whereas BFVe and BFVo numerically decreased in the last 2 days (Demi̇r, Kaçar, and Polat ). Although the numerical decrease in BFVe and BFVo 2 days before parturition was not statistically significant ( p > 0.05), it was considered a noteworthy finding. These findings suggest a decrease in blood flow, and hence perfusion, in the last 2 days. This change may be an effect of placental maturation; future studies using a larger number of cows could elucidate this difference better. When studies conducted in late pregnancy are examined, it is observed that the spectral parameters RI and PR have not shown significant changes (Kim‐Egloff et al. ; Waldvogel and Bleul ; Demi̇r, Kaçar, and Polat ; Hartmann et al. ). The increase in PR in our study is similar to that reported in other studies (Kim‐Egloff et al. ; Waldvogel and Bleul ). The increase in PR in pregnant women is associated with an increase in plasma volume and is most pronounced in the second half of pregnancy (Guyton and Hall ). The decrease in RI in women accompanies the increase in plasma volume, allowing more blood to be carried to the placenta. This is achieved by structural expansion of the uterine vascular bed and reduction of vascular tone (Thornburg et al. ). This increase is also likely to be associated with the increased size and efficiency of the heart in bovine and human foetuses (Mary and James ).
In a study conducted in cows, RI decreased through the 36th week of pregnancy (Panarace et al. ). The difference in RI in late pregnancy was not statistically significant (Waldvogel and Bleul ; Demi̇r, Kaçar, and Polat ). Similarly, in this study, there was no significant difference in RI throughout the study period, but there was a noticeable numerical increase in the last 2 days before delivery. This is consistent with the decreased perfusion observed over the same period. Uterine Artery DM In our study, uterine artery RI did not change significantly and numerically decreased from prepartum day 2 to parturition. Additionally, although BFVe, BFVo and uterine artery DM moved in a similar direction during the last 2 prepartum days, the same was not true for RI. Studies in cows have determined that there is no statistically significant change in uterine artery DM between the 37th and 39th weeks of pregnancy (Herzog et al. ; Körte ), which is consistent with our findings. Conclusion The use of B‐mode and Doppler USG in the evaluation of placental maturation in cows is quite limited, and there are important gaps in this regard. In this study, physiological changes in perfusion, echotexture and certain measurements (thickness/area) occurring in the last days of pregnancy were determined, and significant differences in these parameters were observed 2 days before delivery. Furthermore, it is considered that ultrasonographic examinations can be effectively used to evaluate inflammatory changes in the uterine wall and placental abnormalities in late pregnancy. Murat Can Demir : writing – review and editing, validation, methodology, project administration, investigation, formal analysis, data curation. Merve Sena Demir : investigation, writing. Burak Büyükbaki : investigation. Mushap Kuru : formal analysis, validation, editing, software. Semra Kaya : methodology, investigation, formal analysis. Cihan Kaçar : writing – review and editing, investigation. The study was conducted in accordance with ethical principles following approval from the Kafkas University Animal Experiments Local Ethics Committee (KAÜ‐HADYEK‐2023/146256).
Impact of clear aligners on gingivitis incidence and prevention strategies in adolescents and adults: a prospective observational study
e98c6232-ff56-4763-94e8-bed5bcf288dc
11737181
Dentistry[mh]
Orthodontic treatment is an important field of modern dental medicine and a key factor affecting individuals' social and psychological health and overall quality of life . It can improve bite relationships and oral function by adjusting the position of the teeth and jaws, significantly enhancing facial aesthetics for patients . Traditional metal braces, which use ligature wires to secure the archwire, are the most common conventional appliances. However, they have drawbacks, such as being aesthetically unpleasing, inconvenient during eating and brushing, and irritating to the inner side of the mouth, which can lead to oral inflammation . With the advancement of medical technology, clear aligner therapy, an innovative orthodontic method, has become a preferred choice for adolescent and adult patients due to its excellent concealment and comfort. This technology uses transparent aligners designed and manufactured with computer assistance to precisely control tooth movement without being noticeable, optimizing treatment outcomes . Orthodontic treatment is no longer limited to simply "aligning teeth." It now requires consideration of functionality, aesthetics, and its impact on oral and overall health. The treatment process is convenient and comfortable for patients, and it aims to be a more aesthetically pleasing alternative to traditional metal braces . Invisible aligner therapy is now widely accepted for various dental issues, including crowding, spacing, overbite, underbite, and open bite. By closing gaps and correcting tooth alignment, clear aligners improve aesthetics and may alleviate mild sleep apnea symptoms through changes in jaw alignment . The success of clear aligner therapy depends on factors such as the complexity of tooth movement, patient compliance, and the orthodontist's experience. Studies indicate that the success rate for mild to moderate tooth movement ranges from 80 to 90%, demonstrating the efficacy of this technology . Moreover, the development of clear aligner technology has driven advancements in orthodontic research and clinical practice. Traditional fixed orthodontic appliances, as a classic method of orthodontic treatment, are commonly used to correct malocclusions. However, their metal components and adhesives significantly increase the risk of plaque accumulation and are a common cause of gingivitis . Among adolescents undergoing fixed appliance therapy, the prevalence of gingivitis ranges from 35 to 50%, which is notably higher than the 20–25% observed in adults . This difference is likely associated with hormonal fluctuations, poorer oral hygiene habits, and weaker plaque control among adolescents . In contrast, adults undergoing fixed appliance therapy have a lower incidence of gingivitis, likely due to more established oral care habits and reduced susceptibility to inflammation . Additionally, the excessive adhesive used with fixed appliances can lead to open gingival embrasures (OGE), which are particularly prominent among adult patients . Although fixed appliances achieve satisfactory outcomes in correcting malocclusions, their aesthetic limitations and impact on periodontal health remain challenges. Clear aligners, as an innovative orthodontic treatment modality, have gained popularity among adolescent and adult patients due to their removability, high level of concealment, and precise control over tooth movement.
However, this treatment requires significant patient compliance, with daily wear exceeding 22 h; failure to meet these requirements may extend treatment duration and reduce efficacy . Studies show that, compared with fixed appliances, adolescent patients treated with clear aligners generally exhibit lower plaque and gingivitis scores. Nevertheless, some studies suggest that the risk of gingivitis with clear aligners may, in some instances, approach that of fixed appliances, potentially due to cleaning difficulties caused by attachments and changes in the oral microenvironment . Factors contributing to periodontal issues in adolescent patients during clear aligner treatment include fewer daily brushing sessions, pre-treatment white spot lesions, frequent consumption of carbonated drinks, reduced cleaning frequency after aligner placement, and a higher number of attachments . Additionally, levels of inflammation-related cytokines (such as CXCLs and ILs) are significantly elevated in adolescent patients following clear aligner therapy, suggesting that changes in the gingival microenvironment may further increase the risk of gingivitis. Although existing studies have explored the effects of clear aligners and fixed appliances on gingival health, systematic research directly comparing the incidence of gingivitis between adolescents and adults under these two treatments is still lacking. Furthermore, discrepancies among current findings necessitate further investigation to clarify the specific impact of clear aligners on gingival health across different age groups. This study aims to address this gap by systematically analyzing the effects of these two types of orthodontic appliances on gingival health in patients of different age groups, providing scientific evidence for managing periodontal health during orthodontic treatment. In this study, we conducted an in-depth exploration of gingivitis associated with clear aligner therapy. A total of 120 patients were divided into adolescent and adult groups for comparative analysis. Routine periodontal treatment and oral hygiene instructions were provided, and the incidence of gingivitis was evaluated after six months. Gingival index (GI) scores were assessed using standardized periodontal probes and specific guidelines such as the AAP Periodontal Disease Classification System. To ensure the rigor and reliability of the statistical analyses, we performed normality tests. This study systematically investigated changes in gingivitis incidence before and after treatment and its impact on periodontal health, providing scientific guidance and optimized treatment strategies for clinical practice. The primary goal of this study is to comprehensively evaluate the effectiveness of clear aligner therapy in different populations, particularly in the prevention and management of gingivitis. By analyzing the mechanisms and influencing factors of gingivitis and assessing the effectiveness of various preventive measures, this study aims to deliver refined health management strategies for orthodontic treatment, ensuring that patients achieve aesthetic, safe, and effective treatment outcomes. Study design Criteria for participant selection Group allocation and randomization GI assessment Statistical analysis This prospective observational study investigates the impact of clear aligner orthodontic treatment on gingival health in adolescent and adult patients.
The study adheres to ethical standards, including the Declaration of Helsinki, and the relevant ethics committee approved the research protocol. The privacy and confidentiality of all participants were respected, with all data processed anonymously. Sample collection procedures were designed to minimize inconvenience and risk to participants. We ensured that all experimental procedures complied with national and international guidelines on biosafety and bioethics. This observational study covers 120 patients who received clear aligner treatment in the Department of Stomatology between June 2018 and September 2023. The clear aligners used in this study were from the Invisalign brand, primarily made from SmartTrack material. These patients were divided into two groups: an adolescent group of 66 cases, with an average age of approximately 14.2 years, and an adult group of 54 cases, with an average age of approximately 25.7 years. Power analysis was conducted to determine the sample size, ensuring scientific rigor and statistical reliability in the study design. Referring to existing studies on the incidence of gingivitis during clear aligner and fixed appliance treatments , a medium effect size (Cohen's d = 0.5) was assumed. Using the G*Power software, the minimum required sample size was calculated to be 88 participants at a significance level of α = 0.05 and a power (1−β) of 0.8. Considering potential dropout rates, the final sample size was increased to 120 participants, ensuring sufficient statistical power to detect differences between groups. Each group was further divided into a study group and a control group, with 33 cases in each subgroup for the adolescent group and 27 cases in each for the adult group. The inclusion criteria for the study were as follows: (1) Patients had no systemic diseases or history of metal allergies. (2) Patients had not been exposed to nickel-contaminated environments; since nickel exposure can independently affect periodontal health regardless of orthodontic treatment, patients with such exposure were excluded to avoid confounding effects. (3) There were no other metal restorations in the oral cavity. (4) Patients did not have a history of alcohol consumption or smoking. (5) Body mass index (BMI) was within the normal range. (6) Apart from the third molars, there were no impacted or congenitally missing teeth. (7) Considering the severity of malocclusion, only patients with mild cases were selected for this study. (8) All patients could understand the study's purpose and actively cooperate with the research process. (9) Consent was obtained from both the patients and their guardians. The screening, grouping, and follow-up process of participants is shown in Fig. , illustrating the entire procedure from screening to analysis. After enrollment, all study participants were allocated to either the study group or the control group using a computer-generated random number table to ensure a balanced distribution. Randomization maintained equilibrium between the adolescent group (33 participants per group) and the adult group (27 participants per group) and ensured an approximately 1:1 male-to-female ratio within each group (a sketch of such an allocation is given below).
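For illustration, the sample-size calculation and randomized allocation described above could be sketched in R as follows. The pwr call assumes a two-sample t-test (G*Power's exact test family is not stated, so the returned n may differ from the reported 88), the participant IDs are hypothetical, and stratification here is by age group only for brevity (the study additionally balanced sex).

```r
# Sketch of the sample-size calculation and stratified randomization.
# Assumptions: two-sample t-test for the power analysis; IDs are hypothetical.
library(pwr)

pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.8,
           type = "two.sample")       # n per group under these assumptions

set.seed(2018)                        # reproducible allocation
allocate <- function(ids) {
  # Randomly split one stratum 1:1 into study and control groups
  grp <- sample(rep(c("study", "control"), length.out = length(ids)))
  data.frame(id = ids, group = grp)
}

adolescents <- sprintf("A%02d", 1:66) # hypothetical IDs, adolescent stratum
adults      <- sprintf("B%02d", 1:54) # hypothetical IDs, adult stratum
rbind(allocate(adolescents), allocate(adults))
```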
Participants were blinded to their group assignments, and all evaluations and treatments were conducted by uniformly trained healthcare professionals following standardized protocols. Participants in the control group maintained their routine oral hygiene practices during the study. A periodontist assessed periodontal health at each follow-up visit and provided professional periodontal treatments, including supragingival and subgingival scaling and root planing, effectively removing factors promoting periodontal disease. Before cleaning, basic fuchsin dye was applied to all tooth surfaces, including buccal, lingual, and interproximal areas, to visualize plaque accumulation. Based on this feedback, participants were guided to improve their daily cleaning methods. This intervention relied primarily on professional treatment and visual feedback from plaque staining to optimize periodontal health, without incorporating systematic behavioral intervention. Participants in the study group received personalized oral hygiene education and technical guidance during follow-up visits. This included instructions on the proper Bass brushing technique, the use of dental floss and interdental brushes, and the operation of oral irrigation devices. Oral hygiene knowledge was reinforced at each follow-up through specially designed educational materials, and the effectiveness of the intervention was monitored regularly. These measures helped participants gradually establish and consolidate effective cleaning habits. Healthcare professionals recorded participants' self-reported cleaning behaviors and clinical observations to monitor compliance, adjusting educational content as needed. Additionally, the study group received specific recommendations on details such as toothbrush replacement and interdental cleaning frequency to further enhance their oral hygiene. The randomized allocation and differentiated intervention approaches ensured baseline consistency between groups while highlighting the specific impact of personalized education on periodontal health. All intervention and follow-up procedures adhered strictly to standardized protocols, ensuring the reliability and scientific validity of the study results. The Gingival Index (GI) score was assessed using standardized periodontal probes and specific guidelines, such as the AAP Periodontal Disease Classification System (PMID: 14121956; PMID: 26125117). The GI categorizes the severity of gingivitis into four levels, 0–3, where 0 represents gingival health, 1 represents mild inflammation, 2 represents moderate inflammation, and 3 represents severe inflammation. The scoring criteria for the GI were defined as follows: 0 indicated healthy gums; 1 indicated mild inflammation with slight color change and swelling and no bleeding upon probing; 2 indicated moderate inflammation with redness and significant swelling and bleeding upon probing; 3 indicated severe inflammation with marked redness, swelling, or ulceration, and spontaneous bleeding. In this study, all participants underwent a comprehensive periodontal health assessment before orthodontic treatment, including supra- and subgingival scaling and root planing, to ensure that the GI for each patient was 0 before wearing the orthodontic appliance. The assessment and treatment were administered by periodontal specialists with specific training, using standardized periodontal tools (ultrasonic and hand scalers).
Tooth selection for the pre-treatment assessment followed the Ramfjord teeth index (teeth 16, 21, 24, 36, 41, 44), which served as representative teeth for regular periodontal health tracking. The follow-up interval for both groups of patients was 1.5 months. During each follow-up visit, all patients had their GI evaluated by the same experienced periodontist according to the guidelines in the 2nd edition of Clinical Periodontology. Baseline measurement: an initial assessment was conducted before orthodontic treatment to ensure all participants had a GI score of 0. Follow-up assessments: evaluations were performed at each follow-up visit after the start of treatment (every 1.5 months) until the end of the study. Post-treatment assessment: the final GI assessment was conducted immediately after the completion of orthodontic treatment and compared with the baseline and interim assessment results. Data recording and storage: a data manager recorded all data electronically and kept them in a secure database for future statistical analysis and validation. After six months of orthodontic treatment, another oral examination was conducted, and the GI value was measured again to evaluate changes in gingival health during the treatment period. To enhance the rigor and reliability of the statistical analysis, we first conducted tests for normality (Shapiro-Wilk test) on the data. This ensured that the data from each time point (pre-treatment, 1.5 months, and post-treatment) met the normal distribution requirements before t-tests were performed. Additionally, a test for homogeneity of variances (Levene's test) was performed to decide whether to use the equal- or unequal-variance version of the t-test. To examine the changes in GI scores between the two groups and within each group during the treatment period, a two-way ANOVA was used. Multiple comparison correction methods, such as the Bonferroni correction, were employed to reduce Type I errors in comparisons between multiple groups. All statistical analyses were carried out using the latest version of the SPSS software to ensure the accuracy and efficiency of the analysis. A P value < 0.05 was considered statistically significant. Comparison of the incidence rate of gingivitis in different age groups Analysis of GI changes during clear aligner correction In this study, we conducted a detailed comparison of changes in GI before and after clear aligner treatment between the study and control groups in the adolescent and adult cohorts (Table ). The results indicated statistically significant differences in GI changes across different groups and time points. In the adolescent group, the GI in the study group significantly increased from 0.17 before treatment to 1.49 during treatment (standard deviation increased from 0.05 to 0.25; t = −27.424, P < 0.001), showing a more pronounced change. In the control group, the GI increased from 0.15 before treatment to 0.59 during treatment (standard deviation increased from 0.07 to 0.19; t = 4.421, P = 0.035), indicating that the treatment process significantly increased the risk of gingivitis. A comparison of GI during treatment between the two groups revealed that the GI increase in the study group was significantly higher than in the control group (P = 0.007).
Furthermore, significant differences in GI between the study and control groups were observed from 1.5 months into treatment onward (P = 0.006), suggesting that adolescent patients face a higher risk of gingivitis early in the treatment process (Table ). The larger increase in the study group may reflect lower adherence to oral hygiene education among adolescent patients, while the smaller increase in the control group highlights the more direct effect of basic periodontal treatment in reducing the risk of gingivitis. In the adult group, the GI in the control group increased from 1.19 before treatment to 1.42 during treatment, but this change was not statistically significant (P = 0.067). In the study group, the GI increased from 1.18 before treatment to 1.42 during treatment, a statistically significant change (P = 0.041). However, intergroup comparisons showed no significant differences (P = 0.067). Similarly, the GI measurements at 1.5 months in the adult group showed no significant changes (P = 0.958) (Table ), indicating that personalized oral hygiene education and basic periodontal treatment had similar effects in adults, likely owing to their higher adherence and established oral hygiene habits. These findings suggest that the use of clear aligners in adolescents resulted in a more pronounced increase in the risk of gingivitis, whereas in adults there was no significant difference between the effects of personalized hygiene education and basic periodontal treatment. This highlights the need to strengthen personalized education and regular periodontal treatment during orthodontic treatment in adolescents (Fig. ). Based on the data collected in this study (Table ), we compared the incidence of gingivitis in the adolescent and adult groups after clear aligner correction. The statistics in the table show that the male-to-female ratio in both the study and control groups was maintained at 1:1, which helps eliminate errors caused by gender differences (Table ). In the adolescent group (mean age 14.2 years), 26 of 66 cases developed gingivitis (Fig. a), accounting for 39.39%. In contrast, in the adult group (mean age 25.7 years), 38 of 54 cases developed gingivitis (Fig. b), a rate of 70.37%. Among the 120 patients overall, 64 experienced gingivitis, an overall incidence rate of 53.33% (Fig. c). The chi-square test used in the statistical analysis yielded a P-value of less than 0.05, indicating a significant difference in the incidence of gingivitis between the adolescent and adult groups. The overall gingivitis rate among all patients wearing aligners was evaluated before the subgroup analyses and statistical comparisons were conducted. Overall, the incidence of gingivitis was significantly higher in the adult group than in the adolescent group. In conclusion, during clear aligner correction, the incidence of gingivitis in adult patients is significantly higher than in adolescent patients. This finding highlights the importance for adults undergoing clear aligner correction of paying closer attention to periodontal health management and preventive measures.
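The between-group incidence comparison can be reproduced directly from the counts reported above (26/66 adolescents and 38/54 adults with gingivitis); the minimal sketch below applies scipy's chi-square test to the resulting 2 × 2 table. The study's own software and exact test options are not specified, so this is an illustrative reconstruction.

```python
# Chi-square test on the reported incidence counts, for illustration
from scipy.stats import chi2_contingency

#                [gingivitis, no gingivitis]
table = [[26, 66 - 26],   # adolescents
         [38, 54 - 38]]   # adults
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p < 0.05: significant difference
```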
Clear aligners have been widely used in orthodontic treatment in recent years; compared with traditional metal braces, they offer advantages such as aesthetics and comfort, making them particularly popular among patients, especially young patients. However, clear aligner use can lead to oral health issues such as gingivitis during treatment, raising concerns among clinicians and researchers. Gingivitis is a common oral disease that, if not treated promptly, can progress to more severe periodontal disease, impacting the patient's oral health. Therefore, studying the relationship between clear aligners and the incidence of gingivitis, and exploring effective preventive measures, holds significant clinical and scientific importance. Some studies have found a higher incidence of gingivitis among users of clear aligners, suggesting that clear aligners may increase plaque accumulation and thereby cause gingivitis. However, other studies have not found a significant difference in the incidence of gingivitis with clear aligners. In this study, through a comparison of adolescent and adult cohorts, the incidence of gingivitis was found to be significantly higher in the adult group than in the adolescent group (Fig. ). This finding addresses the limitations of existing studies focusing on a single population. Through this research, a more comprehensive understanding of the impact of clear aligners on gingival health can be achieved. This study has several strengths in its design. Firstly, the relatively large sample size allows a more accurate reflection of real-world conditions and indicates that the statistical power of the results is reliable. Secondly, the study employed a strict grouping method, subdividing each cohort into study and control groups. The study also included routine periodontal treatment as an intervention measure alongside oral hygiene guidance. By evaluating both the GI and the incidence of gingivitis, the study could comprehensively reflect the impact of clear aligners on gingival health. Overall, the higher gingivitis rates observed across the adult groups highlight the importance of periodontal health management and preventive measures in adults. Comparative results show that the GI increase in the adolescent study group was significantly higher than that in the adolescent control group, and this study-control difference exceeded that observed between the adult study and control groups. This significant difference may be related to the physiological characteristics, oral hygiene habits, and poorer compliance of adolescents. Specifically, hormonal fluctuations during puberty increase gingival vascular permeability, making adolescents more sensitive to local inflammatory factors. Moreover, adolescents often exhibit less consistent oral hygiene practices than adults, with improper brushing techniques and inadequate cleaning leading to plaque accumulation and a higher risk of gingivitis. These physiological and behavioral factors together contribute to the higher gingivitis risk in the adolescent study group. In addition, adolescents often find it challenging to follow proper oral hygiene education. Studies have shown that the proportion of adolescents who brush their teeth effectively each day is much lower than that of adults, and their use of additional cleaning tools (such as dental floss or interdental brushes) is also less frequent.
This highlights the urgent need for more effective preventive measures for adolescents during orthodontic treatment, especially to improve oral hygiene compliance. In contrast, adults typically demonstrate higher compliance, including attending follow-up appointments, undergoing regular cleanings, and using oral hygiene aids, which may explain the smaller changes in gingivitis incidence during orthodontic treatment. To reduce the risk of gingivitis in adolescents undergoing clear aligner therapy, we recommend paying particular attention to oral hygiene during treatment. Specific measures include proper brushing techniques, regular plaque removal, and routine professional cleanings and oral examinations to eliminate local pathogenic factors promptly. Additionally, fostering good hygiene habits, such as brushing twice daily and rinsing after meals, is crucial. It is also important to attend to physical and mental health, maintaining a regular lifestyle to ensure a positive interaction between oral and overall health. Furthermore, this study found significant changes in the gingival health index with clear aligner use, especially in the adolescent study group, where the increase in GI was more pronounced. This may indicate that while clear aligners offer convenience and aesthetics for tooth correction, their impact on the periodontal environment should not be overlooked. Additionally, the adolescent group exhibited a significantly greater increase in GI than the adult group, indicating poorer oral hygiene habits and lower compliance among adolescents. Even with oral health education interventions, the improvement remained limited. Therefore, we recommend reinforcing regular periodontal health check-ups for adolescent patients during orthodontic treatment and enhancing their periodontal health through personalized oral hygiene education programs. Moreover, the design and use of clear aligners should take into account the biomechanical properties of the teeth, ensuring that each tooth's movement stays within a safe biomechanical pressure range. Future research should further explore the specific impact of different clear aligner designs on gingival health and how designs can be improved to reduce adverse effects on the gums. In the comparison of interventions, basic periodontal treatment demonstrated significant short-term effects. In the adolescent cohort, patients in the control group who received basic periodontal treatment had significantly lower GI values than those in the study group who underwent oral health education, indicating that professional supragingival and subgingival scaling effectively removes dental plaque and reduces the occurrence of gingivitis. However, although health education showed less immediate impact than basic treatment, it can gradually improve patient compliance and gingival health, especially over long-term follow-up. The lower compliance of adolescent patients may limit the short-term effectiveness of health education. Therefore, future interventions should integrate health education with basic periodontal treatment to enhance long-term efficacy. Overall, this study followed the incidence of new gingivitis cases and GI values during clear aligner treatment in both adolescent and adult groups, shedding light on the mechanisms of gingivitis development under this treatment approach (Fig. ). Clear aligner therapy increased the difficulty of maintaining oral hygiene and led to plaque accumulation and poorer adaptation to the aligners.
Therefore, we recommend implementing personalized oral hygiene education tailored to the specific needs of individual patients, combined with regular professional periodontal treatment, to effectively prevent and control gingivitis. Additionally, optimizing the design of clear aligners to minimize mechanical irritation and the risk of plaque accumulation could further enhance their clinical safety and effectiveness. Our research also provides valuable guidance for clinical practice. Firstly, the findings highlight the need for clinicians to attend to the gingival health of adolescents when providing clear aligner orthodontic treatment. Secondly, they emphasize strengthening personalized oral hygiene education to prevent and reduce the occurrence of gingivitis, ultimately improving patient compliance and treatment outcomes. By implementing personalized oral hygiene guidance, the incidence of gingivitis can be significantly reduced, enhancing the effectiveness of clear aligner orthodontic treatment. At the same time, we call for future studies on larger and more diverse populations to validate and expand upon these findings, which will provide a more substantial scientific basis for ensuring the safety and effectiveness of clear aligner therapy. However, this study has several limitations. First, although the sample size is relatively large, it is still limited, which may restrict the generalizability of the findings. Additionally, this study did not consider the potential impact of gender differences on the results; future research should expand the scope of analysis to address this aspect. Second, the study duration was relatively short, only six months, which may not be sufficient to fully capture long-term changes in gingival health. Previous studies have shown that gingival health is influenced by various long-term factors, such as lifestyle, dietary habits, and overall health, and short-term observations may fail to reveal the cumulative effects of these factors. Therefore, future studies should incorporate longer follow-up periods to comprehensively evaluate the long-term impact of interventions on gingival health and validate the stability of short-term findings. Moreover, the participants in this study were drawn primarily from a single region, and differences in regional and cultural backgrounds may affect the generalizability of the results. Future research should consider larger-scale, multi-center, long-term follow-up studies to further validate and expand upon these findings. Additionally, other relevant factors, such as dietary habits, genetic predispositions, and psychological factors, should be explored to better understand their role in the incidence of gingivitis.
Investigation of the cleaning performance of commercial orthodontic cleaning tablets regarding biofilm removal on PMMA test specimens
ae276e1f-30a9-4b26-b9ac-1299fe7d3e1b
11861341
Dentistry[mh]
Removable orthodontic appliances (ROAs) are used for active treatment and retention. ROAs consist of wire elements and, for the most part, of polymethyl methacrylate (PMMA). Within minutes of inserting an appliance into the oral cavity, its surfaces become colonised with bacteria. These aggregate to form biofilms, complex matrix-like structures that harbour pathogens such as bacteria, viruses, fungi and protozoa. Bacteria of the oral flora cause local diseases such as periodontitis, peri-implantitis and dental caries. The risk of caries can increase during the use of ROAs, as they can promote the colonisation of Streptococcus mutans and lactobacilli in the oral cavity. Furthermore, oral bacteria can cause systemic diseases and are associated with neurodegenerative diseases such as Alzheimer's disease. Consequently, removing biofilm from these appliances and from removable partial dentures is of great importance. Common cleaning methods include manual cleaning with a toothbrush, often in combination with toothpaste or dishwashing liquid, as well as chemical cleaning with cleaning tablets and immersion in disinfectant solutions or acetic acid. According to Diedrich, mechanical cleaning is insufficient especially in hard-to-reach areas, such as around wire elements embedded in acrylic and around screws, whereas cleaning tablets achieve better cleaning. Based on their active ingredients, chemical denture cleaners can be classified into alkaline peroxides, alkaline hypochlorites, acids, disinfecting agents and enzymes. Alkaline peroxides are often available in the form of tablets. They produce an effervescent alkaline solution that generates hydrogen peroxide and active oxygen when added to water, leading to disinfection and oxidation of coloured deposits. In the presence of acids, sodium bicarbonate reacts to form carbon dioxide, creating a foam, and the rising gas bubbles add a mechanical cleaning component. Substances such as surfactants serve as an additional cleaning component, and the addition of potassium monopersulfate has bactericidal, fungicidal and virucidal effects. Other ingredients include fillers as well as colouring and flavouring agents, which are intended to give the consumer a feeling of freshness. The use of cleaning tablets in addition to mechanical cleaning is recommended to their patients by 70% of Greek and 72% of Turkish orthodontists, while 37% of German orthodontists recommend cleaning with chemical adjuvants only. In previous studies on the performance of orthodontic cleaners, cleaning efficiency was investigated on soft plaque that had accumulated within a few hours or days. Yet it has remained unclear whether orthodontic cleaners are effective on a 7-day-old established biofilm with its associated structural changes. The aim of the present study was to clarify this question.
Sample size calculation
Study cohort
Test device
Cleaning of the test specimens
Protein quantification using the modified OPA method
Statistical analysis
The Statistical Package for the Social Sciences (SPSS version 27.0; IBM, Armonk, NY, USA) was used for all statistical analyses and graphical presentations. For the comparison of the cleaning performance of the three cleaners, a paired-sample t-test was used, with a p-value of ≤ 0.05 defined as significant. All p-values were adjusted by a factor of 3 after Bonferroni. Sample size calculation was done with the procedure MOE1-1 of nQuery 8.6.1.0 prior to recruitment, based on an earlier pilot study.
As three cleaners were tested, the multiple one-sided significance level was adjusted from alpha = 0.025 to alpha = 0.0071. It was further assumed that the cleaning performance of all three products is at least 80% and that the dispersion of the cleaning performance amounts to sigma = 6.1%. Under these assumptions, the one-sided one-sample t-test against mu0 = 75% has a power of at least 80% with a sample size of n = 20. All participants in this study were informed in advance about the purpose of the study and possible risks, in both verbal and written form, and gave their written consent to participate. A total of 20 healthy adult participants (11 women and 9 men; mean age 34.5 years, range 19–61 years) were recruited. All participants agreed to wear a test device in the form of a vacuum-formed splint for 7 days. The splint was also worn while eating and was only allowed to be removed for toothbrushing with a non-fluoride toothpaste. The splint itself was not allowed to be cleaned; only food remnants could be removed, carefully and without touching the surfaces of the PMMA test specimens. The participants agreed not to use fluoride-containing preparations or antibacterial mouth rinses containing chlorhexidine or essential oils during the 7-day study period. Exclusion criteria were severe general diseases, active caries or periodontitis, the absence of first or second molars in the upper jaw, or allergies to PMMA/MMA. The use of oral antibiotics during the wearing period or within the previous 3 months also led to exclusion from the study. The test device contained four identical PMMA test specimens in an individually manufactured vacuum-formed splint for the upper jaw. For this purpose, an alginate impression was taken of the upper jaw and models were made of hard dental stone. A 1 mm, hard-elastic, transparent splint (DURAN®, Scheu-Dental, Iserlohn, Germany) was pressure-moulded over the plaster models of the study participants. The PMMA test specimens were produced separately. The holders for the test specimens were printed using a standardised three-dimensional (3D) printing process from polylactic acid (PLA; 7 mm, 5 mm, 1.5 mm). The inner surface of each PLA holder was designed to be conical (7°), preventing the test specimen halves from falling out of the holder. Each PLA ring was split in the middle by a thin partition wall, creating two identically dimensioned test specimen halves. The PLA holders were coated with a separating agent (3D Isoliermittel, Dentaurum, Germany) and then filled with PMMA (Orthocryl®, Dentaurum, Ispringen, Germany). To achieve an even surface, the test specimens were placed between two glass plates, loaded with a constant weight (500 g) and then polymerised in a pressure pot at 2.2 bar and 40 °C. The test specimens were then placed symmetrically on the buccal surfaces of the maxillary molars and firmly bonded to the splint using Orthocryl LC® (Dentaurum, Ispringen, Germany; Fig. ). To achieve a stable bond, the surrounding areas of the splint were first roughened with sandpaper and covered with monomer. Four holes drilled in the middle of the molar region (diameter 3 mm) served as access to the two halves of the test specimens. During the wearing period, the holes in the DURAN® splint were covered with silicone so that only the buccal surfaces of the test specimens were colonised by biofilm. The device was finished like a retention splint and then inserted.
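As a cross-check of the reported sample size, the calculation can be reproduced with a standard power routine; the sketch below uses statsmodels rather than nQuery, so it is an approximate re-derivation from the assumptions stated above (mean performance 80%, sigma 6.1%, one-sided test against mu0 = 75%, alpha = 0.0071), not the study's original computation.

```python
# Approximate re-derivation of the sample size, illustration only
from statsmodels.stats.power import TTestPower

effect_size = (80 - 75) / 6.1          # Cohen's d for the one-sample t-test
n = TTestPower().solve_power(effect_size=effect_size,
                             alpha=0.0071,
                             power=0.80,
                             alternative="larger")
print(f"required n = {n:.1f}")          # close to the study's n = 20
```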
At the end of the 7-day wearing period, the contaminated splints were collected and promptly examined using the modified o-phthaldialdehyde (OPA) method. The four test specimen halves of each participant were placed, respectively, in tap water (control medium) and in the cleaning solutions of the following three cleaning tablets: Retainer Brite® (Dentsply International Raintree Essix, Sarasota, FL, USA), Kukis® Xpress (Reckitt Benckiser, Heidelberg, Germany) and Dontodent (Propack, Heidelberg, Germany). The cleaning tablets were each added to a glass filled with 150 ml of water at a temperature of 40 ± 2 °C. The PMMA test specimen halves to be cleaned were simultaneously placed in the bubbling cleaning solution. The immersion time of the PMMA test specimens in the cleaning solution was, according to the manufacturers' instructions, 3 min for Kukis® Xpress, 10 min for Dontodent, and 15 min for Retainer Brite® and tap water. The uncleaned half of each test specimen was used to determine the amount of protein before cleaning. After completing the cleaning time, the cleaned test specimen halves were transferred to a sample tube filled with 500 µl of 1% sodium dodecyl sulfate (SDS) solution and shaken for 30 min in an ultrasonic bath at 30 °C. The protein solution was transferred to a cuvette and the extinction was measured photometrically. Afterwards, 500 µl of OPA reagent were added to the solution. The OPA reagent was prepared daily and had to be replaced by a newly prepared solution as soon as its absorbance, measured against a leucine standard solution, was not in the range of E = 0.641 ± 0.032. For the preparation, 0.04 g o-phthaldialdehyde and 1 ml methanol were stirred in an Erlenmeyer flask with the aid of a magnetic stirrer, and 0.116 g 2-mercaptoethanesulfonic acid was added to the homogeneous solution (solution A). In another Erlenmeyer flask, 50 ml demineralised water and 1.005 g disodium tetraborate were stirred with the aid of a magnetic stirrer (solution B). Solution A was then transferred into solution B and 1.25 ml of 20% SDS solution was added. Protein quantification as an indicator of contamination was performed for the cleaned and the uncleaned test specimen halves using the modified OPA method. Free α- and ε-terminal amino groups in amino acids, peptides and proteins react in the presence of a thiol component to form a fluorescent end product (1-alkylthio-2-alkylisoindoles). This product is spectrophotometrically detectable at 340 nm and quantifiable by measuring the extinction. To quantify the amount of protein on the test specimens, calibration with bovine serum albumin (BSA, fraction V, Sigma Aldrich®, St. Louis, MO, USA) of known concentration was carried out beforehand. Several dilution standards were made from a prepared BSA solution (concentration 1000 µg/ml) and measured photometrically. A regression line, y = mx + b, was created using the measured absorbance values from a total of six measurement series, and the initially unknown concentrations of the protein samples were quantified using this line. The amount of protein detectable after cleaning, compared with that of the uncleaned control halves, served as the indicator of cleaning performance. The protein reduction achieved by the three orthodontic cleaners and the control medium water is shown in a boxplot diagram (Fig. ).
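The calibration and back-calculation steps can be expressed compactly; the sketch below fits the regression line y = mx + b to hypothetical BSA standards and inverts it to estimate unknown protein concentrations, with cleaning performance defined as the relative protein reduction. The standard concentrations and absorbance readings are placeholders, not the study's six measurement series.

```python
# BSA calibration and back-calculation, a minimal sketch with placeholder values
import numpy as np

bsa_conc = np.array([0, 50, 100, 250, 500, 1000])             # µg/ml standards (illustrative)
absorbance = np.array([0.02, 0.07, 0.13, 0.31, 0.60, 1.18])   # hypothetical readings

m, b = np.polyfit(bsa_conc, absorbance, 1)                    # regression line y = mx + b

def protein_conc(extinction: float) -> float:
    """Invert the calibration line to estimate protein concentration (µg/ml)."""
    return (extinction - b) / m

def cleaning_performance(uncleaned_ug: float, cleaned_ug: float) -> float:
    """Protein reduction (%) relative to the uncleaned specimen half."""
    return 100 * (1 - cleaned_ug / uncleaned_ug)

print(round(protein_conc(0.31), 1))            # ~250 µg/ml, recovering the standard
print(round(cleaning_performance(39.6, 19.0), 1))  # example with reported medians
```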
The cleaning performance of the orthodontic cleaner Retainer Brite® (mean 54.5 ± 7.1%) was significantly higher than that of Kukis® Xpress (mean 39.9 ± 11.5%, p < 0.001) and Dontodent (mean 41.5 ± 9.2%, p < 0.001). There was no statistical difference in cleaning performance between Kukis® Xpress and Dontodent (p = 1). The cleaning performance of the control medium water was 25.9% (Table ). The amount of protein on the uncleaned test specimens ranged from 6.9 to 87.0 µg; that on the cleaned test specimens ranged from 4.3 to 53.0 µg. The distribution of the protein amounts of all uncleaned and cleaned test specimens in relation to the cleaner used is shown in a boxplot diagram (Fig. ). The median protein amounts on the uncleaned test specimen halves ranged from 39.6 µg (Retainer Brite®) to 49.7 µg (Dontodent); those on the cleaned halves ranged from 19.0 µg (Retainer Brite®) to 32.1 µg (tap water). The amount of protein on the uncleaned test specimen halves was higher than after cleaning with an orthodontic cleaning tablet or tap water. Tap water left the highest amount of protein; Retainer Brite® left the lowest. In addition to the OPA method, quantitative protein detection methods include the BCA (bicinchoninic acid) assay, the Bradford assay and the Lowry method. There are also semiquantitative methods, such as the biuret reaction, and qualitative methods, such as the ninhydrin reaction. The modified OPA method is a suitable method for the quantitative detection of proteins. It has a high sensitivity, with detectability down to the picomole range, which enabled the use of small test samples in this study. However, very low absorbance values limited the method in the present study: at values below 2 µg/ml BSA, the detection limit of the photometer (Shimadzu™ UVmini-1240, Kyoto, Japan) was approached, so the quantification of small protein amounts in particular became increasingly inaccurate. This problem can be solved by generating higher extinction values that lie in the non-critical measurement range; for example, larger test specimen surfaces, with correspondingly greater biofilm accumulation, could be used. It is also possible to adjust the ratio of protein solution to OPA solution. However, the absorbance values of the samples in this study, with concentrations above 8.7 µg/ml, were not in the critical measurement range, so an adjustment was not necessary. The OPA method proved suitable in this study, and it can be concluded that quantitative protein measurement methods are appropriate for investigating the cleaning performance of orthodontic cleaners. Thus, protein measurement can be an alternative to common techniques such as analysis by scanning electron microscopy. The test device, a thin vacuum-formed maxillary splint serving as a holder for the test specimens, proved satisfactory. Previous investigations found that the amount of protein on two closely spaced areas, in this case the two halves of a test specimen, is almost identical. The assignment of the cleaners to the test specimens was rotated in order to compensate for locally existing individual differences in biofilm accumulation between the participants. During the 7-day wearing period, only 2 of the 80 test specimens were lost prematurely.
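For illustration, the pairwise comparisons reported above can be sketched as paired t-tests with the study's Bonferroni factor of 3; the per-participant values below are simulated around the reported group means and spreads, purely to show the procedure, and are not the measured data.

```python
# Paired t-tests with Bonferroni adjustment, a sketch on simulated data
import numpy as np
from itertools import combinations
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
# Placeholder protein-reduction values (%) for 20 participants per cleaner,
# drawn around the reported means/SDs, illustration only
performance = {
    "Retainer Brite": rng.normal(54.5, 7.1, 20),
    "Kukis Xpress":   rng.normal(39.9, 11.5, 20),
    "Dontodent":      rng.normal(41.5, 9.2, 20),
}

for a, b in combinations(performance, 2):
    t, p = ttest_rel(performance[a], performance[b])
    p_adj = min(p * 3, 1.0)      # Bonferroni factor 3, capped at 1 (cf. reported p = 1)
    print(f"{a} vs {b}: t = {t:.2f}, adjusted p = {p_adj:.4f}")
```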
For future studies, the test device, consisting of the splint together with the holders, could be 3D printed in a single piece. Fixed orthodontic appliances facilitate plaque accumulation and complicate adequate daily plaque removal by toothbrushing. During treatment with multibracket appliances, especially patients with poor oral hygiene show a high prevalence of Candida albicans, Streptococcus mutans and lactobacilli. From a caries-risk point of view, ROAs are considered an alternative to fixed appliances, as they can be removed by the patient and oral hygiene can be carried out more easily. The clinical consequences, including plaque and gingival inflammation, are milder, and there is a lower risk of caries development compared with treatment with fixed appliances. However, shortly after being inserted into the oral cavity, ROAs are colonised by biofilm harbouring pathogens. ROAs colonised by biofilm, in turn, increase the risk of caries and gingivitis. Consequently, it is important that any orthodontic appliance inserted into the oral cavity is clean and as free of bacteria as possible. The aim of the present ex vivo study was the recommendation of an optimal product that would eliminate an established biofilm and help to reduce the development of caries and the rates of gingivitis in patients with removable orthodontic appliances. However, the questions of which product removes biofilm sufficiently, and how much biofilm a product may leave on an appliance while still being considered an acceptable cleaning product, remain open. The point at which cleaning can be described as sufficient has not yet been standardised internationally, and there are no binding standards for the cleaning performance of chemical denture cleaners. The literature shows that the cleaning performance of orthodontic cleaning tablets has already been the subject of previous studies. Other methods for cleaning orthodontic appliances, such as manual cleaning with a toothbrush or cleaning with antiseptic mouth rinses, have often been compared alongside chemical cleaning tablets. A study published in 2015 investigated the cleaning efficacy of three different cleaning methods on thermoplastic aligners: the combination of chemical cleaning tablets followed by manual cleaning was superior to both manual cleaning with toothpaste and manual cleaning without toothpaste. Similar results were obtained in a recent study published in 2021, in which the combination of manual cleaning with chemical cleaning tablets led to the highest biofilm reduction. Coimbra et al. investigated the impact of peroxide-based solutions on multispecies biofilm formed by Candida albicans, Staphylococcus aureus and Pseudomonas aeruginosa. The biofilm was formed in vitro on acrylic resin specimens within 24 h. Cleaning tablets had an antimicrobial effect but did not promote widespread removal of the aggregated biofilm. The cleaning performance of chemical cleaning tablets without the additional use of a mechanical component was investigated in a pilot study by Fathi et al. They tested the cleaning performance of cleaning tablets on soft plaque that had accumulated in vivo over 4 days. The investigated orthodontic cleaners led to a protein reduction of between 79.9% (Kukis®) and 86.8% (fittydent super®), while water led to a protein reduction of 56.5%.
The question of whether an orthodontic appliance contaminated by longer, undisturbed biofilm formation is cleaned equally effectively by orthodontic cleaners remained open and was the aim of this study. The inferior protein reduction achieved by the orthodontic cleaners in the present study compared with Fathi et al. appears to be a consequence of the maturation of the biofilm and its associated structural changes. Several mechanisms are responsible for the increasing resistance of mature biofilms to physical and chemical stresses. In addition to the proliferation of anaerobic bacteria, there is an overall increase in all bacteria and in the heterogeneity of the bacterial composition. This enables the exchange of metabolic products, signalling molecules, genetic material and defence substances. Extracellular polymeric substances (EPS) represent a diffusion barrier for antimicrobial substances, slowing their transport into the interior of the biofilm or impairing it through interaction with extrapolymeric substances. This makes it increasingly difficult for antimicrobial substances to penetrate the deep layers of the biofilm. In this study, Retainer Brite® achieved the highest protein reduction, while Kukis® Xpress and Dontodent achieved equally, and significantly, lower protein reductions. The cleaning performance of all three orthodontic cleaners was higher than that of tap water (Fig. , Table ). On the other hand, the results demonstrated that the effectiveness of chemical cleaning tablets on a mature biofilm is limited by the associated structural changes: compared with a 4-day-old biofilm, there was a clear decrease in protein reduction. The results of this study, together with those of previous studies, reinforce the need to combine the use of chemical cleaning tablets with manual brushing to assure adequate biofilm removal. Although the orthodontic cleaning tablets were able to remove some of the biofilm, a significant amount remained; the cleaners did not achieve broad elimination of the biofilm within the time of use recommended by the manufacturer. The time of immersion is an important factor to consider. He et al investigated the penetration of chlorhexidine into the inner part of a mature biofilm and showed that a limited penetration time did not result in the concentrations necessary to kill the bacteria. Coenye et al determined the kinetics of killing for C. albicans biofilms grown on PMMA specimens. By increasing the time of exposure to a peroxide-based solution (NitrAdine™, Medical Interporous, MSI Laboratories AG, Vaduz, Liechtenstein), a significant reduction of C. albicans could be achieved. They stated that the time of exposure to peroxide-based solutions must be long enough for them to penetrate into the deep layers of the biofilm. In the present study, the differences in cleaning performance between Retainer Brite® and Dontodent and Kukis® Xpress, respectively, were significant. While Retainer Brite®, with an exposure time of 15 min, led to the highest protein reduction, Kukis® Xpress, with the shortest exposure time of 3 min, achieved the lowest. However, the difference in protein reduction between Kukis® Xpress and Dontodent was not significant, despite the longer exposure time in the cleaning solution for Dontodent.
Since cleaning performance was tested only for use according to the manufacturers' time instructions, it remains to be clarified whether a longer soaking time in the cleaning solution leads to better cleaning results. It also remains to be clarified whether the daily use of orthodontic cleaners can prevent progressive biofilm accumulation; this question should be investigated in future studies. The results of this ex vivo study demonstrate that, on a 7-day-old biofilm, orthodontic cleaners removed some of the biofilm, but a significant amount remained. Cleaning performance on a mature biofilm, with its associated structural changes, is limited. On a mature biofilm, the use of water alone is ineffective and leads to inadequate cleaning results.
Cluster classification of a Brazilian gastric cancer cohort reveals remarkable populational differences in normal p53 rate
848b5edf-2d20-4011-8822-86ac759eb410
11461015
Anatomy[mh]
Gastric cancer (GC) is the fifth most common type of malignancy, with more than 1 million new cases per year worldwide. Despite improvements in diagnostic and therapeutic approaches, the prognosis of patients with GC remains poor: it is the third deadliest cancer, with 783,000 deaths annually and a 5-year survival rate of only 31%. Historically, GC classification has relied on microscopic features associated with specific marker expression. Over the years, a better understanding of the genetic and molecular aspects of GC has resulted in new subtype stratification systems, demanding more efficient classification tools with clinical applicability, such as prognostic correlations and targeted therapies. Laurén's classification, one of the first and most widely used GC classification systems, divides gastric adenocarcinomas into intestinal, diffuse, or mixed subtypes. However, this classification does not fully capture the heterogeneous nature of the disease. Consequently, it correlates poorly with tumor response to treatment and prognosis, failing to identify patients who could benefit from new therapies. The World Health Organization (WHO) classification is more complex. Relying on more precise histological patterns, it includes all the rare GC subtypes not covered by Laurén's classification. Nevertheless, it lacks clinical applicability because its distinct histological subgroups generally do not translate into different outcomes. To better understand the molecular and genetic aspects of GC, The Cancer Genome Atlas (TCGA) and the Asian Cancer Research Group (ACRG) used next-generation sequencing data to identify dysregulated pathways and candidate gene mutations. These mutations have emerged as possible molecular biomarkers and may contribute to drug development for specific subsets of GC. Some of the identified molecular markers include ErbB2 (Her-2), CDH1, and mismatch repair (MMR) genes. Although they better reflect tumor heterogeneity and correlate subgroups with targeted treatments and prognoses, these new approaches lack clinical applicability, mainly because of the sophisticated and expensive technologies involved, which limit reproducibility in the clinical setting. To overcome technical difficulties in the clinical care of patients with GC and to associate molecular profiles with treatment and prognosis, more straightforward techniques, such as immunohistochemistry (IHC), remain the gold-standard, cost-effective alternative. For instance, patients with microsatellite instability and Epstein-Barr virus (EBV) infection are known to express PD-L1, which makes them potentially eligible for therapy with immune checkpoint inhibitors, such as nivolumab and pembrolizumab. Other important biomarkers recently highlighted in GC include fibroblast growth factor receptor-2 (FGFR2) and Claudin 18.2 (CLDN18.2). Mutations in FGFR2 are present in approximately 4% of cases and are associated with a worse prognosis in GC. CLDN18.2 is exclusively expressed in differentiated epithelial cells of the gastric mucosa in primary GC. A recent study showed that therapy with the anti-CLDN18.2 chimeric monoclonal antibody zolbetuximab, in combination with first-line chemotherapy, provides significant survival benefits for patients with advanced GC. As an alternative or complement to Laurén's and the WHO classification, and considering tumor molecular aspects, Setia et al. proposed a method to segregate patients with GC into five clusters using IHC.
Cluster 1 (C1) comprises patients positive for EBV, regardless of the other markers. Patients negative for EBV and with loss of MLH1 expression are classified into cluster 2 (C2), independent of E-cadherin (ECAD) and p53 expression. Both clusters have a better prognosis, and C1 patients usually benefit from immunotherapy. In the absence of EBV and with normal MLH1 expression, patients with aberrant ECAD expression (mutated or absent) belong to cluster 3 (C3), which has an unfavorable prognosis. p53 is considered for classification only if the previous parameters are normal; in this case, aberrant p53 expression defines cluster 4 (C4), while normal expression defines cluster 5 (C5). Considering the genetic heterogeneity across populations, particularly the heterogeneity within the Brazilian population for previously described cancer markers, it is pertinent to evaluate the performance of the classification proposed by Setia et al. in a Brazilian cohort. Our objectives were to assess the cluster distribution in a cohort of 30 Brazilian patients in comparison with other genetically diverse populations and to evaluate whether the inclusion of other clinical and histological parameters yielded a better predictive value.
Case selection and pathological diagnosis
Immunohistochemistry
Statistical analysis
Development and training of gastric cancer classifier algorithms
We identified, selected, and evaluated 30 surgical cases of primary GC, representing approximately 20% of the annual number of cases during the evaluation period. Clinical and pathological data (age, sex, tumor histology and topography, invasion level, lymph node invasion, and pTNM stage) were obtained from medical records and used for patient classification, together with Laurén's criteria. After approval by the Institutional Ethics Committee of Associação Mário Penna (CAAE: 39672920.2.0000.5121; #4.465.746), a retrospective chart review was conducted on all primary GC cases analyzed at the Instituto Mário Penna Surgical Pathology Lab between May 2018 and August 2020. All specimens were contained in paraffin-embedded blocks, which were sectioned into 3–4 μm slices for standard immunohistochemical staining of neoplastic cells. Mouse monoclonal antibodies directed against p53 (Leica, DO-7, ready-to-use), MLH1 (Leica-Biocore, E305, 1:50), and ECAD (Leica, 36B5, ready-to-use) were used. For EBV, EBER-ISH (Leica, BOND EBER Probe, ready-to-use) was used, and each reaction included negative and positive controls. The data obtained were used to identify and stratify patients into one of the five clusters proposed by Setia et al. Data analysis was performed using GraphPad Prism® 8.0 statistical software (GraphPad Software, Inc., San Diego, USA), using the χ² test on a contingency table to evaluate the association between patient features and cluster classification. Differences were considered statistically significant at p<0.05. Decision trees were built using WEKA software (Waikato Environment for Knowledge Analysis, version 3.6.11, University of Waikato, New Zealand) to classify patients with GC into one of the five clusters based on clinicopathological features and IHC data. Leave-one-out cross-validation (LOOCV) was applied to estimate classification accuracy and test the generalizability of the model. A total of 30 patients were included in this study. The median diagnosis age was 61.5 years, and almost two-thirds of the patients were male (63.3%). Laurén's intestinal-type tumors were the most frequent (36.7%).
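The study built its decision trees in WEKA; as an illustrative analogue, the sketch below shows the same train-and-validate pattern (a decision tree classifier evaluated by leave-one-out cross-validation) in scikit-learn. The marker matrix and cluster labels are randomly generated placeholders, not the study's data.

```python
# Decision tree with LOOCV, a scikit-learn analogue of the WEKA workflow
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(23, 4))   # placeholder binary IHC markers (EBV, MLH1, ECAD, p53)
y = rng.integers(1, 6, size=23)        # placeholder cluster labels C1..C5

clf = DecisionTreeClassifier(random_state=0)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.3f}")    # the study reported 22/23 = 95.7% on its data
```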
Considering only the patients with available information (23/30), most cases were positive for lymphovascular (77.3%), perineural (78.3%), or lymph node invasion (63.6%). Five patients (21.7%) tested negative for all three features. Invasion of the subserosal layer was diagnosed in 73.9% of the cases. The clinical and pathological features of the patients are summarized in . No statistically significant association was observed between clinicopathological features and cluster distribution. Immunohistochemistry analyses of tumor samples for the biomarkers EBV, MLH1, p53, and ECAD were used to stratify the patients into the five clusters proposed by Setia et al. More than one-third of the patients (36.7%) were classified as C5, characterized by normal p53 expression and negativity for the other tested markers. C2, C3, and C4 were roughly equally distributed, corresponding to 16.7%, 20.0%, and 23.3% of the patients, respectively. Only one patient was positive for EBV and was classified as C1. Decision tree analysis was then performed; patients with two or more missing data points were excluded, leaving 23 patients for consideration. The clusters suggested by Setia et al. were tested using an algorithm with the additional data described in . Stratification accuracy was unaltered with or without the additional data, highlighting the efficiency of the classification proposed by Setia et al. The algorithm correctly classified 95.7% (22/23) of the samples tested in both the training and LOOCV sets. The use of other clinical and pathological parameters as possible classifiers did not yield improved results. Gastric cancer is one of the most aggressive cancers, with one of the lowest overall survival rates worldwide. Despite the numerous novel chemotherapy regimens developed thus far, patient sensitivity to treatment varies, and some patients still fail to obtain satisfactory results. Thus, accurate histological classification is crucial for estimating prognosis and defining therapeutic strategies in GC and, consequently, for improving the survival rate. Traditional classification methods, such as Laurén's and the WHO classification, consider only histological patterns to subdivide patients into groups and do not fully represent the differences in treatment response and prognosis. To overcome this limitation, Setia et al. proposed a straightforward method using cheaper and more broadly available technology to classify GC subtypes based on patterns similar to those found in the TCGA and ACRG studies. The effect of population genetic diversity on the worldwide use of biomarkers is well known. In this study, we described the distribution of a cohort of 30 Brazilian patients across the five clusters proposed by Setia et al. The results showed that C1 was the least represented, with only one patient (3%) positive for EBV. A similar underrepresentation was observed in a North American population (5%). On the other hand, a higher C1 frequency has been described in another Brazilian cohort (10%). In an Asian population study, C1 and C2 were equally underrepresented compared with the other clusters (7% each). The higher incidence of microsatellite-instability GCs (C2) in Western populations than in Asian populations has been described previously and corroborates our results, with 17% of patients classified as C2. Similar to an Asian study, we observed an increased median age in this group (73.0 years). However, our data differed in the prevalence of Laurén's intestinal-type subtype, with all three subtypes equally represented.
Our data also corroborated the characteristics described in the Asian population for the C3 group, such as the prevalence of Laurén's diffuse subtype (83.3%) and higher aggressiveness. In our cohort, this group harbored only one patient with distant metastasis. The rate of aberrant p53 expression in the absence of other IHC markers was the most discordant feature among the studies cited here, resulting in the discrepancies observed in the C4 and C5 distribution. In the Asian population, C4 is twice as frequent as C5, and in the North American population, the frequency of C4 is more than seven times that of C5. The Brazilian cohort described here, in turn, corroborates a more balanced distribution between the C4 and C5 groups, as observed previously, with Laurén's intestinal-type subtype present in more than half of these patients. Considering that the C5 group consisted of individuals negative for all the other IHC markers analyzed, we can infer that nearly one-third of the patients were not stratified; larger cohort studies may help to characterize this group better. Subsequently, to select attributes that could permit better predictive value, we applied machine learning analysis. The full training and testing included both clinical and pathological data. The algorithmic analysis confirmed that the accuracy obtained using the clusters proposed by Setia et al. was in accordance with our GC classification. The use of other clinical and pathological parameters as possible classifiers yielded no improvement, and the clusters alone remained the most suitable basis for GC classification. Our study has some limitations, such as its small sample size, which may have influenced the differences observed between our cohort's cluster distribution and that of other Brazilian studies. In addition, almost one-fourth of the patients were not considered in the decision tree analysis because of missing clinicopathological data, and other important markers, such as PD-L1, FGFR2, and CLDN18.2, were not evaluated. Finally, the short follow-up period prevented the evaluation of the association between cluster categories and overall survival. Nonetheless, our data confirm the heterogeneity of the Brazilian population and reinforce that auxiliary data complementary to clinical information are necessary for accurate prognosis evaluation and clinical outcome prediction. In conclusion, our data corroborate the distinct pattern of aberrant p53 expression in Brazilian patients with gastric adenocarcinoma compared with other populations. Furthermore, this study highlights the importance of local research in characterizing specific groups of patients for personalized medicine and improving gastric adenocarcinoma survival rates. More studies with larger cohorts and long-term follow-up are essential to fully assess the utility of molecular analysis in gastric adenocarcinoma diagnosis and prognosis evaluation.
Clinical impact of combined assessment of myocardial inflammation and fibrosis using myocardial biopsy in patients with dilated cardiomyopathy: a multicentre, retrospective cohort study
fefdd596-77c8-41ae-aa2b-426171e88ef5
11907087
Musculoskeletal System[mh]
Myocardial inflammatory cell infiltration and extensive left ventricular fibrosis are established predictors of poor prognosis in patients with dilated cardiomyopathy (DCM). However, their combined prognostic impact remains poorly understood. This multicentre cohort study demonstrated that both the degree of inflammatory cell infiltration and the extent of fibrosis in myocardial biopsy specimens independently predict adverse outcomes in DCM. Notably, the coexistence of severe inflammation and extensive fibrosis identified the patients at highest risk of poor clinical outcomes. These findings advocate close monitoring and careful consideration of therapeutic strategies in DCM patients with concurrent high myocardial inflammation and fibrosis. Furthermore, this high-risk population represents an important target cohort for future clinical trials evaluating novel immunosuppressive strategies. Dilated cardiomyopathy (DCM) is defined by left ventricular (LV) dilatation and systolic dysfunction that are not explained by abnormal loading conditions. Patients with DCM experience impaired quality of life and prognosis due to heart failure symptoms and arrhythmia. The underlying substrates vary and comprise both genetic and environmental factors. It has recently been elucidated that chronic inflammation of the myocardium and surrounding tissue is associated with poor clinical outcomes in DCM. This condition is termed inflammatory DCM and can be diagnosed by endomyocardial biopsy. In two previous studies, we assessed the relationship between biopsy-proven myocardial inflammation and prognosis, revealing poor clinical outcomes among patients with DCM who met the myocardial inflammation criteria defined by the European Society of Cardiology (ESC). Furthermore, we found that a three-tiered risk stratification according to the infiltrating CD3+ lymphocyte count was useful for predicting detailed prognosis in DCM patients (INDICATE study). Notably, myocardial fibrosis is also a well-known cause of LV remodelling and a classical risk factor for poor clinical outcomes. While the predictive value of myocardial fibrosis is often assessed by cardiac magnetic resonance imaging (CMRI), it can also be evaluated using endomyocardial biopsy. To date, no study has investigated the prognostic impact of the combined, simultaneous evaluation of myocardial inflammation and fibrosis in DCM patients. In the current study, we aimed to investigate the combined prognostic value of myocardial fibrosis and myocardial inflammation, both evaluated using endomyocardial biopsy, in patients with DCM.
Study population and protocol
Process for preparing histopathological samples
Histopathological assessment for myocardial inflammation
Histopathological assessment of fibrotic area
Statistical analysis
Patient and public involvement
We performed a multicentre, retrospective, observational study and a sub-analysis of previously published research. The study included patients with DCM who underwent myocardial biopsy between January 2004 and December 2014, enabling a follow-up period of over 5 years. From patients' medical records, we obtained their clinical history, physical findings, echocardiographic parameters and haemodynamic data. LV dysfunction suspicious of DCM was defined as an LVEF of ≤45% and an LV diastolic diameter of >112% of the predicted value, calculated using the following formula: 45.3 × (body surface area)^(1/3) − (0.03 × age) − 7.2.
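A direct transcription of this screening rule, with the 112% criterion applied to the predicted diameter, might look as follows; the example inputs (BSA 1.7 m², age 53 years, measured LVDd 62 mm) are illustrative, not patient data.

```python
# Predicted LV diastolic diameter and the >112% dilatation criterion
def predicted_lvdd(bsa_m2: float, age_years: float) -> float:
    """Predicted LV diastolic diameter (mm) from BSA (m^2) and age (years)."""
    return 45.3 * bsa_m2 ** (1 / 3) - 0.03 * age_years - 7.2

def meets_dilatation_criterion(measured_lvdd_mm: float,
                               bsa_m2: float, age_years: float) -> bool:
    """True if the measured diameter exceeds 112% of the predicted value."""
    return measured_lvdd_mm > 1.12 * predicted_lvdd(bsa_m2, age_years)

print(round(predicted_lvdd(1.7, 53), 1))          # ~45.3 mm
print(meets_dilatation_criterion(62, 1.7, 53))    # True (62 > 1.12 * 45.3 ~ 50.7)
```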
LVEF was calculated using the biplane modified Simpson method or, in 80 patients lacking such a measurement, with the Teichholz method. DCM was comprehensively diagnosed by the attending physician based on clinical examinations, including endomyocardial biopsy. Although DCM is sometimes diagnosed after confirming the absence of myocardial inflammation, here we used the term DCM in a broad sense, excluding secondary cardiomyopathies. Patients were excluded if they exhibited coronary stenosis of >50% at a main branch; severe primary valvular disease; a history of uncontrollable or untreated hypertension for ≥1 year before LV dysfunction was documented; or other secondary cardiomyopathies, such as sarcoidosis or amyloidosis. We also excluded patients with a history of malignant disease, cardiac surgery, acute myocarditis or collagen disease; patients receiving current or prior immunosuppressive therapy; and patients with acute infection on the day of the biopsy. The primary clinical endpoint was defined as a composite of cardiovascular death or LV assist device implantation. This retrospective study was approved by the Institutional Review Boards of the National Cerebral and Cardiovascular Centre (M27-063), Nagoya City University (60-16-0086) and the other institutes. It was conducted in accordance with the principles of the Declaration of Helsinki. The patients' biopsy samples were stained with haematoxylin–eosin and Masson's trichrome and processed by immunohistochemistry for inflammatory cells, using an autostainer at the National Cerebral and Cardiovascular Centre. T lymphocytes and macrophages were identified using anti-CD3 and anti-CD68 antibodies, respectively. Detailed products and methods are presented in . Whole slides were digitally scanned at the National Cerebral and Cardiovascular Centre. Microscopic images were randomly taken of five high-power fields on each slide, and two blinded pathologists (KO-O and HI-U) counted the CD3+ cells and CD68+ cells, excluding cells in vessels ( ). First, we assessed the presence of myocardial inflammation using the ESC criteria (≥14 leucocytes/mm², including ≤4 monocytes/mm², and ≥7 CD3+ T lymphocytes/mm²). Second, we divided the patients into three groups according to the INDICATE study criteria, as follows: T lymphocytes <13/mm² (low); 13.1–23.9/mm² (moderate); and ≥24/mm² (high). Myocardial fibrosis was evaluated using Masson's trichrome-stained specimens. The collagen area fraction (CAF) was calculated as the ratio of the blue-colour area (excluding the endocardium) to the area of the whole specimen. We quantified the blue area as fibrosis using the Positive Pixel Count algorithm version 9 of Aperio ImageScope ( ). The CAF cut-off value for the primary endpoint was determined using Youden's index, derived from the receiver operating characteristic curve. We calculated intraclass correlation coefficients (ICCs) to evaluate the intra- and inter-rater reliabilities for CAF, as determined by TN and KO-O in the latest 30 cases at the National Cerebral and Cardiovascular Centre. An ICC of ≥0.8 was considered the preferred reliability level. Continuous values were expressed as the mean±SD when normally distributed and as the median (IQR) when not. Normally distributed continuous values were compared between groups using Student's t-test, whereas the Mann–Whitney U-test was used for values that were not normally distributed.
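Returning to the CAF measurement described above: the study used the Positive Pixel Count algorithm of Aperio ImageScope, but a simplified analogue of the idea (classify blue-dominant pixels within the tissue as collagen, then take the area ratio) can be sketched as follows. The colour thresholds and the synthetic image are illustrative assumptions, not the study's calibrated settings.

```python
# Simplified collagen-area-fraction (CAF) sketch on a synthetic RGB section
import numpy as np

rng = np.random.default_rng(3)
# Mostly red-stained myocardium with scattered blue (collagen) pixels, illustration only
img = np.full((200, 200, 3), (180.0, 90.0, 90.0))
img[rng.random((200, 200)) < 0.06] = (90.0, 90.0, 200.0)

r, g, b = img[..., 0], img[..., 1], img[..., 2]
tissue = img.sum(axis=-1) < 700                    # exclude near-white background
collagen = tissue & (b > r + 20) & (b > g + 20)    # blue-dominant pixels

caf = 100 * collagen.sum() / tissue.sum()
print(f"CAF = {caf:.1f}%")                         # study cut-off: 5.9%
```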
Categorical values were expressed as number (percentage) and compared using the χ² test. Cox proportional hazard analysis was performed to identify factors associated with the primary endpoint. We selected the parameters based on a previous meta-analysis investigating risk factors in heart failure. Since our retrospective study did not include data regarding systolic blood pressure, we used a history of hypertension instead. Additionally, we excluded the duration of heart failure from our multivariable analysis due to the lack of information for 82 patients. Variables with a p value of <0.05 in univariable analysis were entered into multivariable analyses. Regarding the incidence of the primary endpoint, multivariable Cox proportional hazard analyses included the infiltrating T-lymphocyte count, CAF and parameters from each category, such that the analyses were limited to no more than four variables at once. We selected the number of CD3+ cells for grading myocardial inflammation in the multivariable analyses, since the ESC statement designates lymphocytes as the main diagnostic criterion while considering macrophages/monocytes a supportive finding. The log–log plot of the event-free rate was used to test the Cox proportional hazard assumption. Kaplan–Meier curves were drawn to compare event-free survival rates between groups, and significance was assessed using the log-rank test. Follow-up data were collected until loss of follow-up or occurrence of the primary outcome. C-statistics were used to assess whether the combined assessment of myocardial inflammation and fibrosis performed better for the primary endpoint than myocardial inflammation or fibrosis alone. Univariable and multivariable logistic analyses were performed to identify factors associated with infiltrating T lymphocytes and CAF. All statistical analyses were performed using SPSS software ver. 26 (IBM Corp., Armonk, NY, USA), and two-sided p values of <0.05 were considered significant. Patients and the public were not involved in the design, conduct, reporting or dissemination of our research.
Baseline characteristics
Pathohistological findings and patient prognosis
Factors associated with myocardial inflammation and fibrosis
Baseline characteristics, according to high or low myocardial inflammation (ESC criteria) and fibrosis (CAF >5.9%), are summarised in . The results of the logistic regression analyses are presented in . Univariable logistic analyses revealed that positivity for myocardial inflammation by the ESC criteria and a CAF of >5.9% were not associated with each other. Linear regression analysis also showed no significant relationship between the CD3+ cell count and CAF ( ). Multivariable logistic analyses identified that myocardial inflammation was independently associated with age and NYHA class and that only a history of hypertension was independently associated with myocardial fibrosis ( ). Among all 265 DCM patients registered from eight institutions, three were excluded due to the absence of cardiomyocytes, one due to the presence of amyloid deposits and six due to the absence of Masson's trichrome staining ( ). Finally, 255 patients were included in our analyses. presents the baseline characteristics of all patients. The average age was 53.1 years, 78% were male and the mean LVEF was 28.0%. Histograms of CD3+ cells, CD68+ cells and CAF are presented in .
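As a sketch of the multivariable model described in the statistical analysis section, the lifelines package (an assumption; the study used SPSS) fits a Cox proportional hazards model in a few lines. The data frame below is simulated, with variable names merely mirroring the study's predictors; it is not the cohort data.

```python
# Multivariable Cox proportional hazards sketch with simulated stand-in data
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 255
df = pd.DataFrame({
    "time_days":   rng.exponential(2500, n).round(),  # follow-up time
    "event":       rng.integers(0, 2, n),             # composite endpoint indicator
    "cd3_per_mm2": rng.gamma(2.0, 8.0, n),            # infiltrating T-lymphocyte count
    "caf_pct":     rng.gamma(2.0, 3.0, n),            # collagen area fraction
    "age":         rng.normal(53, 12, n).round(),
    "lvef":        rng.normal(28, 7, n).round(),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="event")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% CIs per covariate
```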
For quantifying CAF, the ICCs for intra- and inter-rater reliability were 0.995 (95% CI: 0.990 to 0.998) and 0.98 (95% CI: 0.95 to 0.99), respectively. During the median observation period of 2688 days (IQR, 1448–3633 days), 46 patients met the primary endpoint. The CAF cut-off value was determined to be 5.9%. Univariable Cox proportional hazard analyses revealed 10 variables as significant for the primary endpoint. The infiltrating CD3+ cell count and CAF were independent predictors of the primary endpoint in multivariable Cox proportional hazard analyses including each category of parameters. The Cox proportional hazard assumption was verified by confirming parallel curves and the absence of interactions among all variables in the multivariable analyses. The Kaplan–Meier curves showed significantly lower survival rates among patients with higher myocardial inflammation (ESC criteria) and patients with greater myocardial fibrosis (CAF >5.9%), compared with those with lower myocardial inflammation and fibrosis. Furthermore, patients with both higher myocardial inflammation and greater fibrosis had the worst outcomes compared with the other groups (log-rank p<0.001). Using the CD3+ cell cut-off values defined in the INDICATE study, patients with moderate CD3+ counts (13–24/mm²) exhibited a significantly superior survival rate when also showing lower myocardial fibrosis (CAF ≤5.9%), and a significantly inferior survival rate when also exhibiting higher myocardial fibrosis (CAF >5.9%). The c-statistics analysis revealed a higher area under the curve for the combined assessment of the myocardial CD3+ cell count and CAF (0.69) compared with either CD3+ cells (0.63) or CAF (0.63) alone for the primary endpoint. The current study is the first to investigate the combined prognostic impact of pathological myocardial inflammation and fibrosis in patients with DCM. This retrospective study included the largest enrolment among investigations in which myocardial biopsy was performed during life and its prognostic utility assessed in patients with DCM. Our results demonstrated that DCM patients with higher myocardial inflammation and greater myocardial fibrosis had poorer outcomes compared with other groups. Furthermore, among patients with infiltrating T-lymphocyte counts ranging between 13/mm² and 24/mm² (moderate in the INDICATE criteria), prognosis depended on the extent of myocardial fibrosis.
Myocardial inflammation in patients with DCM
Myocardial fibrosis in patients with DCM
Simultaneous presence of myocardial inflammation and fibrosis
Clinical implications
Study limitations
Over recent decades, growing evidence has revealed the clinical significance of infiltrative inflammatory cells in patients with DCM. The concept of 'inflammatory DCM' was proposed by Maisch et al, suggesting that DCM patients be divided according to the presence of myocardial inflammation and virus using biopsy samples. In the ESC guidelines, pathological myocardial inflammation is defined by myocardial infiltrative leucocytes of ≥14/mm², which can include up to 4/mm² of macrophages. Using these criteria, we previously demonstrated a worse prognosis among patients with biopsy-proven inflammatory DCM, compared with those without. Clinical prognosis is also adversely affected by biopsy-proven myocardial inflammation in other cardiac diseases, including cardiac amyloidosis, Fabry disease, hypertrophic cardiomyopathy (HCM) and arrhythmogenic right ventricular cardiomyopathy.
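The c-statistic comparison can be sketched as follows. For simplicity the endpoint is treated here as a binary outcome and discrimination is measured with a logistic-model AUC, which only approximates a time-to-event c-statistic; all data are synthetic, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 255
cd3 = rng.gamma(2.0, 6.0, n)                      # infiltrating T cells /mm^2
caf = rng.gamma(3.0, 2.0, n)                      # collagen area fraction, %
logit = -4 + 0.05 * cd3 + 0.25 * caf              # both markers carry signal
event = rng.random(n) < 1 / (1 + np.exp(-logit))  # 1 = composite endpoint

feature_sets = {"CD3": cd3[:, None], "CAF": caf[:, None],
                "CD3+CAF": np.column_stack([cd3, caf])}
for name, X in feature_sets.items():
    model = LogisticRegression(max_iter=1000).fit(X, event)
    auc = roc_auc_score(event, model.predict_proba(X)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```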
The cut-off values for myocardial infiltrative T lymphocytes were similar among these different diseases, suggesting that over 10–14/mm² of infiltrative T lymphocytes is an adverse prognostic indicator in cardiac disease in general. Although prednisone therapy does not yield clinical benefit for all DCM patients, Wojnicz et al demonstrated that the use of prednisone and azathioprine improved cardiac function and symptoms among patients with HLA upregulation. Moreover, Frustaci et al showed that immunosuppressive therapy with prednisone and azathioprine improved cardiac function in DCM patients with myocardial inflammation (>7/mm² of CD3-positive lymphocytes or >14/mm² of leucocytes). These findings indicate that immunosuppressive therapy can be an effective treatment in carefully selected patients with DCM and myocardial infiltrative inflammatory cells. The cause of myocardial inflammation in cardiomyopathy is not fully understood, but infection and autoimmunity are classically considered likely explanations. Logistic regression analysis revealed only age and New York Heart Association class as independent determinants. Moreover, our findings reconfirmed the heterogeneous background of myocardial inflammation, with no other associated factors identified, including circulating white blood cells and lymphocytes. We additionally investigated circulating white blood cells, neutrophils and lymphocytes in the Cox proportional hazard analyses. Interestingly, a decrease in circulating lymphocytes was a significant variable in the univariable analysis. However, it became non-significant in the multivariable analysis, whereas the myocardial infiltrating CD3+ cells and fibrosis remained significant. Based on these results, we believe the myocardial inflammation was not wholly dependent on systemic inflammation. Importantly, regardless of the cause, we can recognise that a high myocardial infiltrating inflammatory cell count is an indicator of a worse clinical condition. The extent of myocardial fibrosis is well known to be associated with poor clinical outcomes in patients with DCM. Fibrosis is often assessed by CMRI with late gadolinium enhancement (LGE), and meta-analysis has revealed that LGE presence has a prognostic impact on DCM patients. However, LGE reflects myocardial inflammation and oedema as well as myocardial fibrosis and is considered unsuitable for quantifying interstitial myocardial fibrosis. CMRI with extracellular volume (ECV) was recently found to show prognostic utility in patients with DCM and negative LGE. On the other hand, a previous retrospective investigation demonstrated that higher LGE predicted poor outcomes while endomyocardial biopsy did not in the same cohort, suggesting that LGE and biopsy-proven fibrosis differ in the tissue content and area they reflect. Only biopsy enables direct documentation of myocardial fibrosis. Notably, there is not yet firm evidence that biopsy-proven CAF has major predictive ability in DCM: studies have demonstrated both that biopsy-proven fibrosis predicted prognosis and that it did not. Although right ventricular endomyocardial biopsy can predict LV fibrosis, sampling error may be a problem for endomyocardial biopsy in small studies. We found low specificity when using a CAF cut-off value of 5.9%; therefore, we verified another cut-off value of 8.8% (sensitivity: 0.63, specificity: 0.55), determined as the point on the ROC curve nearest to the ideal point of sensitivity 1.0 and specificity 1.0.
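A minimal, self-contained sketch of this alternative criterion, the threshold at the ROC point closest to the ideal corner of sensitivity 1 and specificity 1, again assuming synthetic CAF values and event labels:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
caf = np.concatenate([rng.normal(5, 2, 209), rng.normal(9, 3, 46)])  # synthetic CAF (%)
event = np.concatenate([np.zeros(209), np.ones(46)])                 # synthetic outcomes

fpr, tpr, thresholds = roc_curve(event, caf)
dist = np.hypot(fpr, 1 - tpr)        # Euclidean distance to the (0, 1) corner
best = np.argmin(dist)
print(f"closest-to-corner cut-off: CAF > {thresholds[best]:.1f}% "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```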
When using this cut-off value of 8.8%, we found that the main results did not change. Thus, in the current study, we chose to apply 5.9% as the CAF cut-off value, which had high sensitivity and might demarcate the normal range of CAF. In our logistic analysis, only a history of hypertension was independently and negatively associated with CAF of >5.9%. Since history of hypertension also served as an alternative parameter for blood pressure in our study, this result likely reflects that maintained blood pressure was associated with a better clinical prognosis. Our logistic analysis revealed no other causes, supporting a heterogeneous background for myocardial fibrosis. Notably, myocardial inflammation is also a substrate for myocardial fibrosis. Previous research shows that the myocardial infiltrative M2 macrophage count is significantly associated with the CAF. There remains a need for further exploration of the role of M2 macrophages in establishing myocardial fibrosis. The baseline parameters that significantly differed between patients with and without myocardial inflammation were not the same parameters that differed between patients with and without fibrosis. Furthermore, in the Cox proportional hazard analyses, both factors were independently associated with the primary endpoint. Thus, myocardial fibrosis and inflammation did not seem to be induced by the same mechanism. Notably, even without knowing the primary cause of infiltrating T lymphocytes and myocardial fibrosis, we have demonstrated their significant utility for predicting prognosis in DCM. Similar to our current results, we previously revealed that HCM patients with both biopsy-proven myocardial inflammation and fibrosis had the worst prognosis compared with other patient subsets. We also investigated the correlation of myocardial inflammation and fibrosis with LV remodelling. At the echocardiographic follow-up after 6–12 months, we observed significantly less improvement in LV diastolic diameter, LV systolic diameter and LV ejection fraction among patients with both myocardial inflammation according to ESC criteria and CAF of >5.9%, compared with other patients. By dividing DCM patients into three levels of inflammation according to the INDICATE criteria, we demonstrated poor clinical prognoses among patients with high myocardial inflammation (CD3+ T lymphocytes ≥24/mm²) and among patients with moderate myocardial inflammation (13–23.9/mm²) and higher myocardial fibrosis (CAF >5.9%). Based on this, we propose that patients with moderate myocardial inflammation and CAF of >5.9%, as well as patients with high inflammation, should be considered to have insufficient reserve capacity to sustain 10 years under current guideline-based medicine. A history of hypertension was a strong indicator of the primary endpoint, suggesting a need to further investigate the contribution of hypertensive heart disease. We performed the Cox proportional hazard analyses among only the 77 patients with systolic blood pressure data and confirmed that blood pressure was independently related to the primary endpoint. However, the history of hypertension became non-significant in multivariable analysis. These results did not change when the CD3+ cell count and CAF were entered into the analysis. Therefore, we believe that a history of hypertension indicated that blood pressure was being well maintained, rather than being associated with hypertension itself or hypertensive heart disease.
Based on our findings, we recommend close monitoring, and careful optimisation of current therapy, for two groups of patients. The first group comprises patients who meet the criteria for myocardial inflammation (either the ESC criteria or a T-lymphocyte count ranging from 13/mm² to 24/mm²) and have a CAF of >5.9%. The second group comprises patients with high myocardial inflammation according to the INDICATE criteria, that is, a T-lymphocyte count of 24/mm² or higher, regardless of CAF level. Quantification of CD3+ and CD68+ cells and of CAF is feasible at almost every institute, and we recommend that these quantifications be included in biopsy reports. Furthermore, research investigating the efficacy of immunosuppressive therapy among DCM patients may help improve the treatment for the above-listed patient groups. The current study had several limitations. First, it was retrospective in design, and thus further prospective research is needed to confirm our results. Second, systolic blood pressure data were missing for the majority of patients. Third, we could not exclude the influence of biopsy sampling error on our results; to minimise its effects, we evaluated all available samples of the specimens. Fourth, this study did not clarify the primary causes of inflammation and fibrosis. Information was not available regarding viral genome status, circulating autoantibodies or gene mutations for cardiomyopathy. Data on circulating cytokine levels, which could have pointed to possible mechanisms of the inflammation, were also unavailable. Notably, these examinations are not covered by insurance in Japan, and it was difficult to investigate them additionally in this retrospective multicentre analysis. Further basic research is warranted to deepen our knowledge of the mechanisms of inflammation and fibrosis. In conclusion, a higher myocardial infiltrating inflammatory cell count and a greater CAF, evaluated using biopsy samples, predicted worse prognosis in patients with DCM. Patients with both high myocardial inflammation and fibrosis had the worst clinical outcomes. We recommend careful clinical follow-up for these patients.
Multi-Omics Association Analysis of DOF Transcription Factors Involved in the Drought Resistance of Wheat Induced by
28fd950e-9d8e-4b4a-90cf-35f92be9303a
11942236
Biochemistry[mh]
Precipitation plays a pivotal role in influencing global agricultural production, particularly grain cultivation in underdeveloped nations. As global temperatures climb, extreme weather events have become more prevalent, with droughts posing a severe threat to grain production and supply in these countries. Wheat stands out not only as one of the world's three primary grain crops but also as a staple cultivated in arid regions. According to statistics from the United Nations Food and Agriculture Organization, wheat cultivation spans 216 million hectares worldwide, yielding an average of 3.5 tons per hectare for a total output of 765 million tons. This crucial crop is a major food import for underdeveloped regions like Africa. Considering the current state of global socio-technological development, rising temperatures are projected to decrease global wheat production by 1.9% by mid-century due to extreme weather. Notably, tropical African and South Asian underdeveloped countries will experience declines of approximately 15% and 16%, respectively. Furthermore, studies indicate that in key wheat-producing areas, high temperatures and rainfall patterns affect wheat yield formation by over 40%. Wheat cultivation predominantly occurs in temperate regions, which unfortunately coincide with areas prone to droughts over the past decade. Therefore, enhancing wheat's drought resistance and fortifying its supply are of utmost importance to safeguard global food security and socio-economic stability. Plants effectively address water scarcity by regulating their hormonal metabolism, which in turn influences phenotypic and physiological metabolic changes. Research has demonstrated that plants can boost their water use efficiency through various mechanisms such as increasing root hair density, closing stomata, elevating carbohydrate and other compound levels, enhancing antioxidant enzyme activity, and other strategies. These adaptations ensure the smooth progression of vital growth and developmental processes like photosynthesis. However, these activities are influenced by alterations in plant hormone metabolism. Drought conditions diminish water use efficiency and stimulate the production of abscisic acid (ABA), which controls stomatal closure, enhances photosynthetic capacity, improves osmotic adjustment, and bolsters plant drought tolerance. Upon receiving stress signals, the cell plasma membrane converts lutein into ABA and releases it. Most ABA is synthesized in the root system and transported upward through vascular tissue. ABA receptors, widely present in organelles like the nucleus, cytoplasm, and chloroplast, inhibit the action of sucrose non-fermenting 1-related protein kinase 2 (SnRK2) proteins via protein phosphatase 2C (PP2C) under low ABA concentrations, leading to dephosphorylation and regulating cellular drought stress metabolism. Cytokinin and auxin are crucial hormones for plant growth, cell division and proliferation, organ formation, and material and energy metabolism, playing significant roles throughout the plant life cycle. Nishiyama et al. directly showed that cytokinin (CK) negatively regulates salt and drought stress signals in Arabidopsis mutants. CK-deficient Arabidopsis exhibits strong stress tolerance due to increased cell membrane integrity and ABA hypersensitivity.
Under normal conditions, while CK deficiency heightens plant sensitivity to exogenous ABA, it downregulates key ABA biosynthetic genes, resulting in significantly lower endogenous ABA levels in CK-deficient plants compared to wild-type plants. Research has indicated that auxin homeostasis modulates ABA production and the drought stress response. Drought significantly reduces the transcript abundance of indole-3-acetic acid (IAA) synthesis genes in rice (Oryza sativa L.) and elevates the transcript abundance of IAA-conjugate genes. Gibberellin (GA) is involved in regulating rice seed germination, stem elongation, and reproductive development under drought conditions. Changes in GA synthesis genes can interact with other hormones, influencing various plant growth and developmental processes. As a secondary metabolic phenolic molecule, salicylic acid (SA) not only regulates plant carbon dioxide assimilation, antioxidants, stomatal closure, and photosynthesis but also controls the stomatal aperture by modulating drought-related gene expression. Plants overexpressing the SA synthesis transcription regulator CBP60g gene are more sensitive to ABA and exhibit enhanced drought tolerance. In water-deficit conditions, although JAZ proteins are degraded, leading to the activation of transcription factors like MYC2 and the upregulation of related drought resistance genes, exogenous jasmonic acid (JA) application can regulate plant stomatal dynamics and improve plant growth. Additionally, studies have shown that drought tolerance is reduced in brassinosteroid (BR)-deficient cotton mutants, while it is enhanced in etol1 rice mutants that accumulate more ethylene. Strigolactone (SL), a derivative of carotenoids, plays a pivotal role in plant development, influencing the formation of roots, leaves, and branches. Additionally, it promotes the germination of arbuscular mycorrhizal fungi in roots and has been extensively studied for its ability to regulate plant stress resistance. However, natural SL levels are low in many plants, prompting the chemical synthesis of SL analogs such as GR5, GR7, and GR24, with GR24 exhibiting the highest activity. Osmotic stresses, including salinity and drought, negatively impact the production of tri-hydroxylactone in dicotyledonous plants like tomato, lettuce, and Lotus japonicus. Prior research has demonstrated that SL can mitigate drought stress in wheat by enhancing cell wall formation and optimizing root structure. Specifically, during the mid to late growth stages of wheat under drought stress, SL reduces membrane lipid peroxidation in canopy leaves and boosts the plant's osmotic adjustment capabilities by increasing antioxidant enzyme activity. This ensures smooth photosynthesis and stable yield formation. Current studies, leveraging multi-omics technology, further clarify that exogenous GR24 can upregulate the expression or synthesis of drought-resistant molecules and metabolites in grapes, barley, and corn. While we have gained considerable insight into SL's role in regulating plant drought resistance, the exact regulatory pathway remains elusive, particularly the intricate cross-response mechanisms between SL and other hormones. These areas require further exploration and clarification. Gene transcription regulation plays a crucial role in plant growth metabolism, hormone signal transduction, stress response, and various other biological processes.
The DOF (DNA binding with one finger) family of transcription factors, unique to plants, is characterized by a distinctive single zinc finger structure. With recent advancements in species classification, numerous DOF members have been identified as key players in the plant life cycle. These DOF proteins are involved in regulating a wide array of plant biological processes, including dormancy, tissue differentiation, carbon and nitrogen assimilation, and carbohydrate metabolism. Furthermore, they have been reported to play a role in modulating hormone signals and responding to both biotic and abiotic stresses. CDF, a member of the DOF transcription factor family, is extensively involved in plant responses to various abiotic stresses. For instance, the mutant cdf3-1 gene enhances Arabidopsis sensitivity to drought and cold stress, while its overexpression unexpectedly boosts plant resistance to osmotic stress. In wheat, the DOF protein taznf promotes downstream gene expression, leading to increased Na+ excretion and enhanced salt resistance. Under long-term drought conditions, overexpression of the woody apple DOF family gene mddof54 results in higher photosynthetic rates and branch water-carrying capacity compared to wild-type plants, while the survival rate under short-term drought conditions is significantly improved. DOF proteins also participate widely in plant stress responses by responding to plant hormone signals. In castor, many rcdof proteins exhibited differential expression levels under ABA treatment. Similarly, researchers have suggested that CmDOFs in chrysanthemum may be involved in the response to ABA and SA, leading to distinct expression patterns. Specifically, exogenous ABA significantly upregulated the expression levels of CmDOF12 and CmDOF20, while SA upregulated the expression levels of CmDOF2, CmDOF5, CmDOF6, CmDOF10, and CmDOF12. Although some studies have investigated the mechanism of DOF's involvement in plant hormone response regulation, the mechanism of DOF's involvement in SL response regulation remains to be clarified.
2.1. Morphological and Physiological Changes
2.2. Transcriptome Analysis and Transcription Factor Screening and Localization
2.3. Metabolome Analysis
As shown in the corresponding figure, correlation analysis and partial least-squares analysis were carried out for the different periods and treatments, and the Q2Y value was 0.952, indicating that the adopted model was effective and reliable. Using p < 0.05 and log2(FC) ≠ 0 as the criteria to screen differential metabolites, differential metabolites were identified for the different treatments at 24 h (SL/CK: 4842 increased and 2763 decreased; Tis/CK: 5271 increased and 2789 decreased; SL/Tis: 2057 increased and 2685 decreased), 48 h (SL/CK: 3980 increased and 3442 decreased; Tis/CK: 3244 increased and 4820 decreased; SL/Tis: 4592 increased and 2093 decreased), and 72 h (SL/CK: 2410 increased and 4710 decreased; Tis/CK: 3497 increased and 3089 decreased; SL/Tis: 1465 increased and 4664 decreased). The clustering effect of the different treatments in the different periods was clear, and the differences between groups were obvious, as were the changes in the differential metabolites, including mannobiose, L-proline, D-lactose, malic acid, glucoside, isocitrate, indoleacetic acid, betaine, L-isoleucine, and others.
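A minimal sketch of this screening rule (p < 0.05 with a non-zero log2 fold change between two treatment groups), applied to a hypothetical metabolite-by-sample intensity matrix with four replicates per group, mirroring the design here:

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical metabolite x sample intensity matrices, 4 replicates per group
rng = np.random.default_rng(2)
mets = [f"met_{i}" for i in range(500)]
sl = pd.DataFrame(rng.lognormal(3, 0.5, (500, 4)), index=mets)  # SL-treated
ck = pd.DataFrame(rng.lognormal(3, 0.5, (500, 4)), index=mets)  # control

log2fc = np.log2(sl.mean(axis=1) / ck.mean(axis=1))
pvals = ttest_ind(sl.to_numpy(), ck.to_numpy(), axis=1).pvalue
table = pd.DataFrame({"log2FC": log2fc, "p": pvals})
hits = table[(table["p"] < 0.05) & (table["log2FC"] != 0)]
print(f"{(hits['log2FC'] > 0).sum()} increased, "
      f"{(hits['log2FC'] < 0).sum()} decreased")
```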
According to the correlation analysis of metabolic pathways among the different treatments, significant correlation changes in the differential metabolites were observed at 24 h (SL/CK: proline, indoleacrylic acid, mannose, D-lactose, 1-kestose, etc.; Tis/CK: betaine, mannobiose, proline, indoleacrylic acid, etc.; SL/Tis: glucoside, 2,3-butanediol glucoside, 8-propoxycaffeine, etc.), 48 h (SL/CK: proline, indoleacrylic acid, isoflavone glucoside, kaempferol glucoside, etc.; Tis/CK: mannobiose, betaine, indoleacrylic acid, glucoside, etc.; SL/Tis: glucoside, betaine, p-chlorophenylalanine, etc.), and 72 h (SL/CK: indoleacrylic acid, isocitrate, mannobiose, glucose, etc.; Tis/CK: pentaacrylic acid pentatriol, proline, betaine, indoleacrylic acid, etc.; SL/Tis: glucoside, raffinose, etc.). After 72 h of treatment, visual observation showed that the growth state of wheat canopy leaves at the seedling stage after SL treatment was better than that of CK. In contrast, the wheat canopy in the Tis treatment group was significantly inhibited, and its degree of leaf curling and wilting was the largest under drought conditions (A). The relative water content, dry matter accumulation and fresh weight of wheat across the treatments followed the order SL > CK > Tis (B). Moreover, the SPAD value was highest in the SL treatment group, and in all treatments it was higher at 48 h than at 24 and 72 h (C). The corresponding figure shows the changes in three antioxidant enzyme activities in the canopy leaves. Among the three treatments, SOD activity (A) was higher at 48 h than at 24 h and 72 h, whereas POD and CAT activities (B, C) decreased with time. As shown in the figures, there were significant differences in the changes in SOD and POD activities at 24 h and in CAT activities at 24 h and 48 h, and the differences in the other time periods were also significant. The enzyme activities under SL were the highest among the treatments, showing that, compared with the control, the SL treatment could effectively improve the antioxidant enzyme activities of canopy leaves, while Tis could not, and that the effects of the agents were mostly concentrated within 48 h. The corresponding figure shows the correlation of the RNA-seq analyses among the different treatments at 24 h, 48 h, and 72 h. RNA-seq was performed on leaves with four biological replicates per treatment. A total of 36 samples yielded 2.821 Gb of clean reads, with a Q30 base percentage of 91.94% and above and an average GC content of 53.40%, meeting the sequencing requirements. After aligning reads to the reference genome, valid bases ranged from 92.95% to 97.38%, indicating reliable data quality. The clustering of groups into different periods was clear, and the clustering within groups was close. Fold change (FC) ≥ 1 and FDR < 0.05 were used as the screening criteria for DEGs. A total of 65,078 genes were screened by comparing the different treatments at different times: 2895 (SL24 vs CK24), 1183 (SL24 vs Tis24), 4988 (Tis24 vs CK24), 1064 (SL48 vs CK48), 625 (SL48 vs Tis48), 1681 (Tis48 vs CK48), 1829 (SL72 vs CK72), 627 (SL72 vs Tis72), and 2629 (Tis72 vs CK72). GO functional annotation was performed on all DEGs. The SL and Tis treatments had similar effects on wheat cell components, biological processes, and molecular functions.
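The DEG screen can be sketched as below. The thresholds stated here (FC ≥ 1 with FDR < 0.05) and in the methods (FC > 2 or FC < 0.5) are most consistent with an absolute log2 fold-change cut-off of 1, which is what this hypothetical DESeq2-style results table uses:

```python
import numpy as np
import pandas as pd

# Hypothetical DESeq2-style results table for one pairwise comparison
rng = np.random.default_rng(5)
res = pd.DataFrame({
    "log2FoldChange": rng.normal(0, 1.5, 10_000),
    "padj": rng.random(10_000),                 # FDR-adjusted p values
}, index=[f"gene_{i}" for i in range(10_000)])

degs = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() >= 1)]
print(f"{len(degs)} DEGs: {(degs['log2FoldChange'] > 0).sum()} up, "
      f"{(degs['log2FoldChange'] < 0).sum()} down")
```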
Among the different treatments, the impact on molecular function involves antioxidant metabolism, carbon metabolism, transcriptional regulation, and the activation of substance transport. In terms of cellular composition, the different treatments mainly affect the stability of the membrane lipid structure and the regulation of organelle structure and function in wheat leaf cells. It should be noted that more genes are expressed in response to external stimuli, ensuring cell stability and cellular function. In addition, according to the KEGG second-level distribution of differentially expressed genes, there are many differentially expressed genes in wheat carbon metabolism affected by exogenous SL and Tis, and there are also differences in genes involved in signal transduction. The cluster analysis of all differential genes showed that at 24 and 48 h there were more differential genes in signal transduction and carbon metabolism, while at 72 h the number of differential genes in signal transduction decreased and the number of synthesis genes involved in maintaining stable cell metabolites, such as those of carbon metabolism and amino acid metabolism, increased. This shows that the signal transduction effects of the two exogenous substances on drought resistance in wheat appear mainly within 24 and 48 h, after which the cells begin to enter a stable period owing to changes in the synthesis of their own substances. Through the transcription factor analysis of the sequencing data, a total of 20 DOF transcription factors (DOF TFs) were found, and their localization analysis was carried out. Most of the DOF TFs were concentrated on chromosomes 1, 2, 3, 5, and 6. Among them, the copies on the A, B, and D subgenomes of chromosome 2 were well distributed and numerous. In this study, we observed notable differences in the morphological development and physiological metabolism of wheat seedlings exposed to drought conditions when their leaves were sprayed with SL (strigolactone) or Tis (an inhibitor of SL signaling). Compared to the control group (CK), wheat canopies treated with SL exhibited enhanced water retention capacity, reduced leaf bending, neater top leaves, and less canopy disorganization. Conversely, wheat treated with Tis displayed inhibited canopy growth and more dispersed parietal leaves. The analysis of physiological metabolism revealed that the SL-treated leaves showed heightened antioxidant enzyme activity and photosynthetic capacity, indicating that SL application can effectively bolster wheat seedlings' drought tolerance. As an inhibitor of SL, Tis exerted a pronounced influence on wheat's drought resistance. Furthermore, utilizing second-generation sequencing technology, this study identified a multitude of genes expressed across various temporal stages and biological processes, with signal transduction genes primarily activated within 48 h. The transcription factor analysis pinpointed 20 factors, including single zinc finger binding proteins, that are abundantly enriched on chromosome 2. In conjunction with metabolome sequencing, the impacts of SL and Tis were primarily observed on glucosides, indoleacetic acid (IAA), betaine, mannose, and other compounds during the middle to late stages of wheat growth. These findings suggest that SL and Tis can influence wheat drought resistance, with particularly significant effects on IAA and brassinosteroid (BR) hormone metabolism, as well as osmoregulatory sugars.
However, the mechanisms by which these transcription factors regulate wheat carbohydrate and hormone metabolism require further elucidation.
4.1. Study Settings
4.2. Methods
The test material was Zheng Mai 1860, a variety bred by the Wheat Research Institute of the Henan Academy of Agricultural Sciences. The Henan Academy of Agricultural Sciences is located in Zhengzhou, Henan Province, China; Henan is the most important agricultural province in northern China. Zheng Mai 1860 is a semi-winter variety with a relatively stable yield and good adaptability in most ecological regions of northern China, and it is widely cultivated. The seeds were sterilized and germinated to a plant height of about 10 cm, and uniform seedlings were selected for hydroponic culture. Hoagland nutrient solution was used for the hydroponic culture and was changed every 2 days. Hydroponic seedlings were grown in an artificial climate box under controlled conditions (12 h light/12 h dark, 20,000 lux, 25/22 °C, 50% humidity). The seedlings were subjected to drought stress when they had grown to two leaves and one heart, that is, when the length of the top leaf was half that of the middle leaf. An 18% PEG-6000 Hoagland nutrient solution was used to simulate drought stress (the Hoagland nutrient solution was provided by Coolebo, Beijing, China). After 48 h of stress treatment, the leaves were sprayed with strigolactone (SL; 3 μmol/L), the inhibitor (Tis; 10 μmol/L), or acetone solution as the control (CK; 4.08 mmol/L), until the leaves were evenly covered with a thin water film; each treatment was repeated four times. Samples were taken at 24 h, 48 h and 72 h after spraying to determine the indicators. The experimental settings and codes are shown in the corresponding table. Root and canopy material accumulation: at 72 h, two plants were randomly sampled from each pot, and the fresh weights of the root and canopy were recorded separately. After enzyme deactivation at 105 °C for 30 min and drying at 85 °C to constant weight, the samples were weighed and the dry-to-fresh ratio was calculated. Photosynthesis: SPAD and photosynthesis were measured at 24 h, 48 h, and 72 h. Osmotic adjustment: soluble sugar, protein, and free proline were determined at 24 h, 48 h, and 72 h. Membrane lipid antioxidation: CAT, POD, SOD, and MDA were determined at 24 h, 48 h, and 72 h. Joint analysis of transcriptome and metabolome: omics sequencing was completed by Shanghai OE Biotech Co., Ltd., China.
4.2.1. Metabolome Screening and Analysis
4.2.2. Screening and Analysis of DOF Transcription Factors
RNA extraction and library construction: TRIzol reagent was used to extract total RNA according to the instructions. RNA purity and quantity were assessed using a NanoDrop 2000 spectrophotometer (Thermo Scientific, Waltham, MA, USA), and libraries were constructed with the Universal V6 RNA-seq Library Prep Kit according to the instructions. Transcriptome sequencing and analysis were performed by Shanghai Ouyi Biotechnology Co., Ltd. (Shanghai, China). RNA sequencing and differential expression gene analysis: the libraries were sequenced on the Illumina NovaSeq 6000 platform (Illumina, Inc., San Diego, CA, USA), generating 150 bp paired-end reads. Approximately 69.54–86.24 million raw reads were obtained for each sample. fastp software (version 0.20.1) was used to process the FASTQ-format raw reads, and clean reads were obtained after removing low-quality reads for subsequent data analysis.
HISAT2 (version 2.2.1) was used for reference genome alignment, gene expression (FPKM) was calculated, and the read counts of each gene were obtained with HTSeq-count. PCA analysis and mapping of gene counts were performed using R (v3.2.0) to evaluate the sample biological replicates. Differentially expressed genes were analyzed with DESeq2 (version 1.30.1), and genes meeting the threshold of Q value < 0.05 and FC > 2 or FC < 0.5 were defined as differentially expressed genes (DEGs). A hierarchical clustering analysis of DEGs was performed using R (v3.2.0) to show the expression patterns of genes in the different groups and samples. The R package ggradar was used to draw a radar map of the top 30 genes to show the expression changes of upregulated or downregulated genes. Subsequently, GO, KEGG, Reactome, and WikiPathways enrichment analyses of the differentially expressed genes, based on a hypergeometric distribution algorithm, were used to screen significantly enriched functional terms. R (v3.2.0) was used to draw column charts, chord charts, or enrichment-analysis circle charts for the significantly enriched terms. For metabolite extraction, 60 mg of each sample was weighed into a 1.5 mL centrifuge tube, and two small steel balls and 600 μL of methanol:water (v:v = 7:3, containing L-2-chlorophenylalanine, 4 μg/mL) were added. The tube was pre-cooled in a −40 °C refrigerator for 2 min and then ground in a grinder (60 Hz, 2 min). Ultrasonic extraction was carried out in an ice-water bath for 30 min, and the extract was left to stand at −40 °C overnight. It was then centrifuged at low temperature for 10 min (12,000 rpm, 4 °C); 150 μL of supernatant was removed with a syringe, filtered through a 0.22 μm organic-phase pinhole filter, transferred to an LC injection vial, and stored at −80 °C until LC-MS analysis. Quality control (QC) samples were prepared by mixing equal volumes of the extracts of all samples. Remarks: all extraction reagents were pre-cooled at −20 °C before use. A liquid chromatography–mass spectrometry system composed of a Dionex UltiMate 3000 UHPLC coupled to a Q Exactive Plus high-resolution mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) was used. Chromatographic column (Waters Corporation, Milford, MA, USA): ACQUITY UPLC HSS T3 (100 mm × 2.1 mm, 1.8 μm); column temperature: 45 °C; mobile phase A: water (containing 0.1% formic acid); mobile phase B: acetonitrile; flow rate: 0.35 mL/min; injection volume: 5 μL. Ion source: ESI. Mass spectral signals were acquired in positive and negative ion scanning modes separately.
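For concreteness, the hypergeometric enrichment test mentioned in the transcriptome methods above can be sketched as follows. The gene-set sizes are hypothetical placeholders: the background and DEG counts echo figures reported earlier, but the annotated-gene counts do not come from the paper.

```python
from scipy.stats import hypergeom

N = 65078   # background: all screened genes
K = 420     # background genes annotated to a given GO/KEGG term (placeholder)
n = 2895    # DEGs in one comparison (e.g., SL24 vs CK24)
k = 55      # DEGs annotated to that term (placeholder)

# P(X >= k) for X ~ Hypergeom(N, K, n): chance of drawing at least k
# annotated genes in a sample of n from a population of N with K annotated.
p = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p = {p:.2e}")
```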
Is inadequate anatomical knowledge on the part of physicians hazardous for successful clinical practice?
fbfda62f-f161-436e-827c-4b3d079d8091
8739686
Anatomy[mh]
Human Anatomy teaching/learning cultivates the language of medicine, as many of the terms used in medicine originate from anatomical parts and their functions/orientations, such as anterior–posterior, palmar–plantar, proximal–distal, external–internal, abduction–adduction, elevation–depression, and protraction–retraction. Beahrs has commented, "How then can a student of medicine not know Anatomy and claim to understand the language of medicine?" Medical education/science has evolved from Anatomy: for instance, pathology is morbid Anatomy; embryology is the science of developmental processes and genetic anomalies; cytology and histology deal with the study of microanatomy; physiology describes the functions of structures, organs and systems. In the clinical domain, Radiology is the analysis of images of anatomical structures; Microbiology is the anatomy of micro-organisms causing diseases; surgery is the science and art of manipulation, removal and replacement of morbid structures (Fig.); and medicine in turn encompasses manipulation, restoration, activation, or dissolution pertaining to musculoskeletal anatomy, together with suppression of uncomfortable, exigent signs and symptoms (Fig.). Thus, none of these subjects can be comprehended fully if the learners ignore elements of gross and microanatomy. In addition to this, the knowledge of Anatomy is directly needed in the diagnosis and treatment of disease through clinical practice. In daily medical practice, "anatomy is important for physical examination, symptom interpretation and interpretation of radiological images. Knowledge of anatomy is essential for understanding neurological or musculoskeletal disorders." Even the students realized the relevance of Anatomy in clinical practice, as they emphasized in a feedback study by Bergman et al.: "You need it for diagnosis; you need it for physical examination, for hand-over to colleagues, for record keeping, for writing letters, in fact for understanding how certain processes work, why patients are ill and what should be done about it." However, full realization of the relevance of Anatomy and its intrinsic value in medicine comes only after extensive clinical experience. Senior students commented: "in the clerkships, I suddenly thought, hey! At this point it would have been really useful if I had studied a bit more Anatomy." Moreover, students during their neurology clerkship mentioned: "I am doing neurology now and there you discover that every diagnosis, everything comes down to Anatomy in the end and how things run and work, that is really awfully important." In the study of Priyadharshini et al., "the 4th semester students, interns, and clinicians perceived Anatomy to be highly relevant to their day-to-day practice. Clinicians in our survey perceived the role of anatomy in the clinic as important, particularly during the physical examination, interpreting radiological images, and communication with colleagues. Physicians become less aware of the foundational knowledge required for their clinical reasoning skills." In addition to this, the relevance of Anatomy in clinical practice has recently been highlighted in detail in an excellent editorial by Duparc et al. Apart from this, there are many grey areas in Anatomy directly related to clinical practice, together with clinical grey areas completely dependent on Anatomy, which remain to be explored by standalone and/or collaborative research to strengthen medical practice.
The ever-expanding array of newer diagnostic methodologies, including innovations in the way the body can be visualized (e.g., computed tomography and magnetic resonance imaging scans), requires a specific level of anatomical knowledge. In an earlier study, Barlow commented that endoscopic and laparoscopic procedures demand a clinically oriented Anatomy. Many authors have drawn attention to the way in which anatomy has been ignored, or at the least neglected, by medical school owners and officials, but also by clinicians and students, leading to the acquisition/delivery of Anatomy inadequate to pursue successful clinical practice. The result has been a fall in the standard of medical education, with, in some cases, an increase in litigation due to anatomical ignorance in diagnosis and treatment by physicians and surgeons. Authors have repeatedly put this down to the drastic reductions in anatomical teaching schedules, the pruning of anatomical curricula, the closure of Anatomy departments including dissection halls, and the non-recruitment of medically qualified faculty. The context within which these developments have taken place has been the assumption that only limited amounts of Anatomy are adequate for clinical practice. This concept has to be reviewed. Inevitably, these expectations have influenced medical students, who have been brought up to be content with what we would regard as an inadequate core content of Anatomy. In our estimation this is a pseudo-concept, in which small non-cohesive segments of Anatomy have been regarded as an acceptable fundamental base for the many diverse areas of clinical practice. Taken together, all this speaks loudly: although Anatomy is a most important constituent of clinical practice, there is a silent murmur among physicians practicing medicinal treatment that only minimal anatomy is required in their practice. Therefore, this study has been designed to evaluate the intimately interwoven inter-relationship between Anatomy, a range of diseases, the functions and activities of the organs and systems of the human body, and the diagnosis and analysis of available medicinal treatments for safe clinical practice. Consequently, these parameters have been evaluated on the basis of feedback sought from students and faculties/medical practitioners through a questionnaire addressing the demand, necessity, importance, usefulness and applicability of human Anatomy in clinical practice, together with a review of the literature including our own analysis. Thus, the objective of the study is to explore the validity of the concept 'Is inadequate anatomical knowledge on the part of physicians hazardous for successful clinical practice?' by evaluating feedback data from students and faculty. To meet the objective of the study, an experiment was planned to seek the opinions of 386 undergraduate students (UGs), 6 interns, 105 postgraduate medical students (PGs), 10 post-PG super-specialty students (P PGs), 31 non-clinical faculties (NCFs) and 60 clinical faculties (CFs) at UP University of Medical Sciences, Saifai, Etawah, UP, India, regarding the need of Anatomy in clinical practice. So, during Aug.–Nov. 2020, a questionnaire consisting of 15 questions was designed to cover the application and need of Anatomy in clinical practice, as analyzed by the authors and illustrated by Beahrs, Standring, Bergman et al., Priyadharshini et al., etc., encompassing the diagnosis and the analysis of the treatment of disease in clinical practice by physicians through medicinal treatment.
For example: "Do the variations in anatomical macro/microstructures require analysis of the functions, activities and configuration of organs and systems?" A) Yes, B) No. The questions were designed so that the populations answered each question with yes (Y) or no (N); Ys represent need and Ns no need. Thus, varying numbers of Ys represent the viewpoints of the various populations regarding the degree of need of Anatomy in clinical practice. For example, the responders were asked in one of the questions, "Is precise knowledge of surface Anatomy required for observation, palpation, percussion and auscultation at the accurate location?" A) Yes, B) No. The degrees have been defined in hypothesis 1 on a three-point scale.
Hypothesis 1
Hypothesis 2
The degree of need of Anatomy in clinical practice has been conceptualized as "Most essential" if the mean viewpoints of a group of populations, in terms of Ys, fall in the range 10–15 (67–100%); "Essential" in the range 6–9 (40–60%); and "Least essential" in the range 1–5 (6–33%) (Table) on a three-point scale. A similar pattern has been defined for diagnosis and the analysis of treatment (Table). Here, the generalized means of the viewpoints of all the groups of subjects have been computed irrespective of knowledge and experience of both Anatomy and clinical practice, whereas the sample space comprises diversified populations ranging from fresh medical entrants to expert teaching faculties. So, the pattern of answers is highly divergent owing to variation in the populations' knowledge and experience of both Anatomy and clinical practice. Therefore, weighted means of the viewpoints were also calculated, for a more focused opinion of the population, considering their knowledge and experience in Anatomy and clinical practice. If the means of the viewpoints are qualified by knowledge and experience of Anatomy, of its application, and of the analysis of clinical care, together with knowledge and experience in clinical care, the mean viewpoint of the total population is modified/refined by applying weightages reflecting the importance of that knowledge and experience. These weightages have been defined in hypothesis 2. The fresh entrants, the students of the first semester of MBBS Ist year, have very limited exposure to both Anatomy and clinical practice; therefore, questions pertaining to macro/micro-anatomical structures distorted by external/internal pathogens, toxins, drugs, environmental hazards, external traumas/internal lesions due to iatrogenic causes, misuse of organs/limbs, or congenital anomalies, generating diseases through the impairment of functions/activities, will be almost incomprehensible to them, not to speak of knowing the interwoven interrelationships among these parameters needed to analyze diagnosis and treatment. So, their viewpoints will certainly influence the mean viewpoints of the students of the IInd, IIIrd, IVth and final years, as all of them constitute the UG group. The other populations also vary in their knowledge and experience of Anatomy and clinical analysis, but not to that extent, so the following weightage has been conceptualized.
Thus, hypothesis 2 has been propounded, which assigns weightage/importance as follows: 100% to the viewpoints of clinical faculties (CFs) and post-PG superspecialists (P PGs), because these groups face the constraints of anatomical knowledge in their day-to-day clinical practice and so provide real feedback; 65% to non-clinical medical faculties (NCFs) and postgraduate students (PGs), because the NCFs have no clinical exposure together with partly forgotten Anatomy, and the PGs include students from both the clinical and non-clinical sides and so have variant knowledge and experience in both Anatomy and clinical practice; and 60% to graduating students (UGs), consisting of the students from Ist year to final year, and Interns (Table). As illustrated above, the groups have diversified knowledge and experience, so the mean viewpoints are divergent. To achieve more focused viewpoints, the weighted means of the total population have been calculated by applying the weightages defined in hypothesis 2; this gives the overall weighted mean viewpoint of the total population. To further sharpen the inference and make it more comprehensible, the total percentages of Ys indicating Anatomy to be most essential, essential and least essential have been computed. In addition, the percentage of responders in the various groups who expressed that Anatomy is most essential, essential or least essential has also been calculated. This analysis, together with the pertinent literature, was reviewed. The dependence of diagnosis and treatment on human Anatomy was analyzed. The hazards of inadequate knowledge of Anatomy in clinical practice in general, and in medicinal treatment in particular, were also explored and discussed. The effects of unknown variations in the interwoven relations among the shapes, sizes, locations and orientations of structures, organs and limbs, their configurations, pathways, functions, and antigens, to be detected through physical examination and radiological images and their interpretation, including the signatures of diseases on these morphological elements, were explored and assessed for successful clinical practice. Permission to seek the views of the UGs, PGs, P PGs and faculties of our institute was obtained from the Dean (Faculty of Medicine). The participants were verbally informed that their views would be used for research purposes. Ethical clearance for the study was obtained from the Institutional Ethical Committee vide no. 142/2020–21. The computed generalized and weighted means of the viewpoints of all the groups individually are shown in the tables pertaining separately to total clinical practice (Table), diagnosis (Table) and treatment (Table). It can be clearly seen from the generalized means that the UGs in particular, studying medicine in various years and semesters, have limited exposure to both Anatomy and clinical practice, so their viewpoints on questions like "While understanding malfunctions of organs/systems, is the knowledge of the organization and the shape, size, location and orientation of structures not required?" A) Yes, B) No, are not only highly divergent but also skewed. The range of Ys (the viewpoint expressing the degree of need of Anatomy in clinical analysis) from the UGs spreads from 5 to 15, with maximum divergence in the questions pertaining to overall clinical practice, whereas the ranges of Ys expressed by the PGs, Interns, P PGs, NCFs and CFs are 9–15, 11–13, 11–15, 10–15 and 9–15, respectively, showing less divergent data than the UGs.
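As a worked illustration of hypotheses 1 and 2, the sketch below computes a generalized and a weighted mean of Ys. The group means are taken from the values reported later in the text, but the exact aggregation formula is an assumption (the paper does not spell it out), so the weighted figure need not reproduce the published one.

```python
# Group sizes, reported mean Ys (clinical practice, out of 15) and the
# hypothesis-2 weightages. The aggregation is one plausible reading of
# the procedure described above.
n    = {"UG": 386, "Intern": 6, "PG": 105, "PPG": 10, "NCF": 31, "CF": 60}
mean = {"UG": 11.1, "Intern": 12.0, "PG": 11.6, "PPG": 13.1, "NCF": 12.55, "CF": 13.2}
w    = {"UG": 0.60, "Intern": 0.60, "PG": 0.65, "PPG": 1.00, "NCF": 0.65, "CF": 1.00}

total = sum(n.values())
generalized = sum(n[g] * mean[g] for g in n) / total
weighted = sum(n[g] * w[g] * mean[g] for g in n) / sum(n[g] * w[g] for g in n)

# Hypothesis-1 banding on the 15-point scale
def band(y):
    return "most essential" if y >= 10 else "essential" if y >= 6 else "least essential"

print(f"generalized mean = {generalized:.1f} Ys ({band(generalized)})")
print(f"weighted mean    = {weighted:.1f} Ys ({band(weighted)})")
```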
Similar is the case for the questions on diagnosis and treatment. In addition to this, the mean statistics diverge widely: from 11.1 ± 2.2 to 13.2 ± 1.71 in clinical practice, from 8.4 ± 1.5 to 10.2 ± 0.9 in diagnosis, and from 3.8 ± 0.1 to 4.4 ± 0.9 for the analysis of treatment (Tables). Here, the lowest degree of need of Anatomy within 'most essential' is from the UGs and the highest is from the CFs, with a divergence as large as 2.2 in the standard deviation for the UGs. This clearly shows that the maximum divergence of Ys is from the UGs and the minimum deviation is in the CFs. These results support our hypothesis 2. However, a 'most essential' degree of need was expressed by 77.7% of UGs, 91% of PGs, 100% of Interns, P PGs and NCFs, and 98% of CFs for clinical practice, and similarly for diagnosis and treatment (Tables). Out of the total Ys (contributing to the three degrees of need of Anatomy), a majority, ranging from 61.4 to 87%, 74 to 93% and 72.5 to 86%, contributed to 'most essential'; 0 to 12.4%, 0 to 2% and 0 to 2.2% to 'essential'; and almost no Ys were given by responders for 'least essential' (Tables). The weighted means of the total population raise the degree of need of Anatomy in clinical practice, diagnosis and treatment from 11.5 to 12.2, 8.9 to 9.5 and 3.9 to 4.1, respectively, remaining in the range of 'most essential' Ys. The individual means of Ys from all the groups can be seen in detail in the tables, and these again fall in the range of Ys for 'most essential'. 83.6% (~84%) of the responders from the total population rated the degree of need of Anatomy as 'most essential', 16.2% (~16%) as 'essential' and a negligible 0.2% as 'least essential' in clinical practice; 97% of responders rated it 'most essential', 3% 'essential' and none 'least essential' in diagnosis; whereas 94% rated it 'most essential', 4.7% 'essential' and 1.3% 'least essential' in treatment. These results strongly support the degree of need of anatomical knowledge being 'most essential' in clinical practice. However, skewed values were found in the feedback, chiefly from the UGs and mildly from the PGs, owing to their variant knowledge and experience of the sciences; these deteriorated the statistical means and percentage analysis to some extent, and an attempt was made to offset this by computing the weighted means. There are two basic pillars of this study. The first is the statistical analysis of the feedback survey regarding the opinions of the population groups, namely the UGs, Interns, PGs, P PGs, NCFs and CFs, and the total population. The second, the review and analysis of the research literature regarding the degree of need of anatomical knowledge for successful clinical practice, has also proved foundational.
Statistical analysis of opinion feedback survey
Analysis of literature survey
Diagnosis
Treatment
Research
All too often, Anatomy is regarded by some as being moribund and stuck in an unchanging past. This is seriously misguided and detrimental to anatomical education, since it pits Anatomy against the excitement of the rapidly moving disciplines, such as virology, genetics and molecular biology. They are rapidly moving, but to place all the stress on them as the bases for clinical disciplines is hazardous. No matter how much they contribute to contemporary medicine and to its bioscience foundations, they cannot survive without a wealth of fundamental knowledge across the broad spectrum of the pre-clinical and clinical sciences.
Not to speak of this, clinical Anatomy itself is moving. A glance at any of the earlier editions of Gray's Anatomy and an edition produced in the twenty-first century is enough to demonstrate that the details of the Anatomy of the first part of the twentieth century and of the first part of this century are separated by a gulf of enormous proportions. And this gulf is down to the research undertaken across all branches of Anatomy. While many of these details may not be required by every undergraduate medical student, the concepts that drive the clinical sciences are being transformed by the vibrancy of contemporary Anatomy. This is why it is appropriate to regard anatomical education as being essential for much in medicine. The students' opinion is important because they are pursuing medical education and getting ready to enter the medical profession. Whatever difficulties they were facing, in comprehending medical education and, practically, in clinical practice, will be revealed. This adds immense importance to the study, but with the constraint that the students neither have enough experience in clinical analysis nor remember the clinically important anatomical variations. Although these students have been divided into four groups, namely UGs, PGs, Interns and P PGs, these groups too have diversified knowledge and experience. So it is pertinent to mention here that the first-year MBBS students are exposed to clinical practice and Anatomy in a very limited manner. However, the IInd-, IIIrd-, IVth- and final-year students are conversant with pre-clinical Anatomy and, to some extent, clinical practice, so the mean viewpoints of the UGs might have been distorted. The PGs exercise the residual Anatomy left after forgetting part of it, and they come from the basic, para-clinical and clinical sciences, so their viewpoints are also divergent. However, the clinical PGs have acquired enough clinical experience to reveal the need of Anatomy. The basic sciences faculties comprise a spectrum of backgrounds: some are not medically qualified and so devoid of clinical exposure, and some might have forgotten Anatomy during their post-graduation and teaching careers in different disciplines. This segment of the population contributed to the distorted mean viewpoints. The P PGs and clinical faculties are mature and experienced clinicians, with the residual Anatomy studied during the preclinical phase and the clinical Anatomy self-studied during clinical practice, and they have a clear concept of the degree of need of Anatomy. Therefore, their feedback has been regarded as most valuable. Since the number of UGs/PGs dominates the numbers of the other groups, even after applying the weightage the weighted mean of the viewpoints is only slightly corrected. To accomplish the objective, three-tier analyses have been carried out on the degree of need of Anatomy in clinical practice, consisting of diagnosis and treatment. The mean viewpoints for clinical practice, diagnosis and the analysis of treatment, respectively, were: UGs (11.1 ± 2.2, 8.4 ± 1.5, 3.8 ± 0.1), Interns (12 ± 0.6, 9.0 ± 0.6, 3.8 ± 0.4), PGs (11.6 ± 1.74, 9.5 ± 1, 4 ± 0.9), P PGs (13.1 ± 1.5, 9.7 ± 1.3, 4.2 ± 0.9), NCFs (12.55 ± 2.11, 9.7 ± 1.2, 4.3 ± 0.8) and CFs (13.2 ± 1.71, 10.2 ± 0.9, 4.4 ± 0.9), all of which fall well within the range of 'most essential' (10–15, 7–11, 3–5). This clearly states that the degree of need of Anatomy is 'most essential' for all the groups, as defined in hypothesis 1.
Applying the weighted-mean concept further raises the degree of need of anatomical knowledge, within the 'most essential' range, from 11.5, 8.9 and 3.9 to 12.2, 9.5 and 4.1 in clinical practice, diagnosis and analysis of treatment, respectively. Percentage analysis of responders, group-wise and for the total population, further establishes that the degree of need of anatomical knowledge is 'most essential' for the majority of each group: UGs (77, 95, 92.2)%, Interns (100, 100, 100)%, PGs (91, 97, 96)%, P PGs (100, 100, 100)%, NCFs (100, 100, 100)%, CFs (98, 100, 96)% and the total population (83.6, 97, 94)% in clinical practice, diagnosis and analysis of treatment, respectively. A survey carried out by Ahmed et al. revealed that all participants, comprising medical students, trainees and specialists, agreed that knowledge of anatomy is important for medical practice and is perceived as important for safe clinical practice. Our study confirms that Anatomy is most essential for clinical practice, in agreement with previous studies. As the three-tier statistical analysis of the feedback survey was carried out under the heads (1) diagnosis, (2) analysis of treatment and (3) overall clinical analysis, let us examine the same through the literature survey. The diagnosis of disease starts from the patient's input, in the form of signs and symptoms of discomfort at a body location, and ends with the complete investigation of the disease through laboratory tests. In this process, clinicians first analyse the signs and symptoms in relation to anatomical causatives, such as distortions in the shapes, sizes, locations, orientations, pathways and configurations of the anatomical macro-/microstructures forming the organs and systems of the human body, by physical examination through inspection, palpation, percussion and auscultation, and then investigate the location of the causative factors. This depends entirely on a precise knowledge of surface Anatomy with respect to the relevant anatomical landmarks in relation to the concealed, internally distorted morphology of structures, organs, limbs and systems. Advanced investigations are then carried out by mapping the signs and symptoms of discomfort and disease onto anatomical structural anomalies, such as distortions or variations in structures causing impairment of activities and/or functions (Fig. A, B), along with changes of sensation in structures, organs, limbs and systems. These diagnostic parameters are complexly interwoven and interdependent, so they can be assessed only with the help of necessary and sufficient Anatomy in order to diagnose diseases correctly and administer the right treatment. Anatomical structural distortions are examined by interpreting radiological images of extra growth in bones, hard masses, stones, injury, extra development of bodies, variant attachment of tendons/ligaments/fibres in organs/systems and degeneration of macro-/microstructures due to the interaction of two structures, together with histological slides and other pathological tests for changes at the micro level. With advances in science and technology, new techniques and equipment have been developed to facilitate diagnosis and treatment; the use of these tools requires detailed and precise knowledge of Anatomy for their efficient application and analysis. Therefore, physicians are required to have the proficiency and skill to compare defective or injured macro-/microstructures with normal ones.
This is only possible when clinicians have a sound knowledge of the most essential Anatomy. It has been found that clinicians either claim to require the least Anatomy or diagnose with overconfidence without knowing the necessary and sufficient Anatomy. Clinicians guiding or exercising clinical Anatomy for clinical analysis often have neither the time to update their anatomical knowledge nor retain it to the required level, as captured in 'Perhaps we don't know what we thought we knew: Why clinicians need to re-visit and re-engage with clinical anatomy'. Specialty practitioners dealing with more critical patients, suffering from more complex and advanced diseases, certainly require more precise diagnosis to explore the macro-/microstructural distortion or deformation to be manipulated by medicines; this involves the most essential and more detailed Anatomy, far beyond that provided by undergraduate teaching. As elaborated in the preceding section, the generalised/weighted mean viewpoints of all populations established that Anatomy is most essential in clinical practice (Tables , , ). This applies at all stages of diagnosis and of assessment of medicinal or surgical treatment, all of which are underpinned by anatomical knowledge and understanding. In other words, not only diagnosis but also the analysis of treatment depends on anatomical structural deformations: new drugs, medicinal molecules and antibiotics intended for manipulation, restoration, activation or dissolution are specific to location, structure, organ, system, impairment and disease, and so cannot be administered without detailed knowledge of the most essential Anatomy for such clinical analysis. For example, (1) the treatment with specific drugs or medicines of stones in the gall bladder or kidney, or of infection in the lungs and other organs, together with (2) extra growth in bone due to cancer, or clinical complications such as ischaemia, nerve irritation and/or degeneration of microstructure, and (3) swelling in a structure or organ, are all followed by image monitoring and image interpretation that depend on knowledge of Anatomy. Thus, Pathology and Radiology cannot be comprehended without a sound knowledge of Anatomy: '[if] a mechanic does not know the parts of the machine, automobile, or television set he is repairing, it is unlikely that it will work in the end. So, physicians, regardless of their specialty, must know and appreciate gross anatomy'. In illustrations like these, the role of anatomical knowledge can be overlooked because, by the treatment stages, it has become thoroughly integrated into the clinician's vocabulary. Any inadequacies in that knowledge, however, may appear all too starkly as misdiagnosis and inappropriate treatment, and in such circumstances the likelihood of misdiagnosis and mistreatment may be high. The pseudo-concept of requiring only inadequate levels of Anatomy may lead to hazards such as the mere suppression of signs and symptoms, to the detriment of patients' welfare and health. Such treatment not only produces no relief; it may cause side effects and drug reactions, weaken the body's immune system and generate additional costs in the healthcare system, including litigation in cases of failure.
Therefore, in cases of litigation over medicolegal complications, the defence of the physician, and the claim from insurance companies, in relation to the manipulation, restoration, activation or dissolution of structural deformations in the shapes, sizes, locations, orientations, pathways and configurations of the known and unknown millions of external and internal anatomical macro-/microstructures forming the organs and systems of the human body, can be pleaded successfully on the basis of clear and in-depth anatomical knowledge. These data, which bring out the overall generalised mean viewpoints of all the groups and of the total population, reveal that Anatomy is most essential for successful clinical practice, comprising diagnosis and treatment, supporting the concept that 'inadequate anatomical knowledge for physicians is hazardous for successful clinical practice'. Diseases distorting anatomical structures, organs, limbs or systems are responsible for the impairment of functions and activities, and these dysfunctions create the signs and symptoms of disease. Accurate diagnosis and analysis of treatment can be achieved by exploring the interwoven interrelation between diseases, signs and symptoms, and distortions or injuries of the anatomical structures, organs and systems. This clearly establishes that knowledge of Anatomy is 'most essential'. Diagnosis and treatment can be refined further by intensive research into the grey areas of the medical sciences, comprising both standalone Human Anatomy and collaborative multidisciplinary research by medically qualified anatomists and others, to enhance healthcare. The eruption of the COVID-19 pandemic interrupted the collection of data. The varied knowledge and experience of Anatomy and clinical analysis across the populations influenced the feedback. The populations' knowledge of English was variable, so some questions may have been misunderstood. Certain participating populations had no exposure to clinical care.
Selective targeting of genes regulated by zinc finger proteins in endometriosis and endometrioid adenocarcinoma by zinc niflumato complex with neocuproine
e2f8fb27-7108-4c99-aeb3-b82c6011d2f4
11933352
Pathologic Processes[mh]
Pathologies in the pelvis primarily arise from the attachment and growth of endometriotic implants in a non-physiological environment, outside the uterine cavity. This process is altered by uncontrolled angiogenesis, which bypasses the physiological immune response and elevates hypoxia-inflammatory conditions, driving the transition of eutopic endometrium to ectopic endometriosis and its potential malignant transformation. The transition of endometriosis to a malignant state is regulated by molecular mechanisms with key factors such as HIF-1α (hypoxia-inducible factor 1α), COX2 (cyclooxygenase 2), VEGF-A (vascular endothelial growth factor A), zinc fingers (ZNFs), Nrf2-ARE (nuclear factor erythroid 2-related factor 2 – antioxidant response element) and microRNAs (miRs). All these transcription factors (TFs) are necessary for the physiological regulation of uterine lining renewal during the menstrual cycle and embryo implantation. For instance, hypoxia induced by a physiological decrease in progesterone (P4) levels increases the activity of the transcription factor HIF-1α together with COX2, which regulates the synthesis of prostaglandins (PGs). The rapid transition to severe hypoxia due to the interruption of blood supply in the uterine endothelium by vessel coiling activates apoptotic signals, allowing immune cells to remove epithelial cells during menstruation. Inadequate, prolonged local hypoxia significantly stimulates pathological angiogenesis by suppressing apoptosis while promoting an inflammatory microenvironment. Hypoxia-accelerated implantation of viable endometriotic cells occurs through the production of pro-angiogenic factors (VEGF, PGF, TGF, Ang-1), which are released into the surrounding tissue and bind to capillaries and arterioles, promoting the formation of new blood vessels. This hypoxia-mediated angiogenesis is also targeted by miR-206, -376a and let-7c, which acts as an inhibitor of angiogenesis, whereas miR-133b and miR-23a serve as promoters of angiogenesis. MicroRNAs can further regulate the expression of ZNFs, as they contain many seed-matched sequences predominantly localized to the ZNF regions coding the C2H2 domain. The absence of the specific ZNF3 domain suppresses auto-ADP-ribosylation of PARP, which is involved in DNA repair, angiogenesis, and chemoresistance of gynecological pathologies. ZNF3 selectively inhibits PARP1 (poly(ADP-ribose) polymerase 1), which could serve as a potential therapeutic target for tumor treatment. In endometriosis, elevated reactive oxygen species (ROS) and mitochondrial dysfunction cause DNA strand breaks and activate DNA repair via PARP. Hyperactivation of PARP by ROS leads to depletion of NAD+ and ATP and can disrupt calcium homeostasis (increasing intracellular Ca2+ levels), exacerbating cellular stress and ultimately leading to cell death. Nrf2 plays a pivotal role in cellular defense against oxidative stress by regulating antioxidant response elements and has been linked to endoplasmic reticulum (ER) oxidative protein folding and calcium homeostasis. Impaired ER redox signaling can decrease Nrf2 nuclear translocation, resulting in ER calcium overload and increased calcium-dependent cell secretion. In endometriosis, Nrf2 activity is often compromised, resulting in increased oxidative stress and mitochondrial dysfunction. The Nrf2-ARE pathway directly affects neoangiogenesis through the ANG2/ANG1 axis.
Nrf2 activation induces antioxidant enzymes (GPx, SOD) that lower reactive oxygen species (ROS) levels, thereby modulating inflammation and ensuring vascular stability. In both physiological and pathological conditions, such as endometriosis and endometrial carcinoma, the balance between the angiopoietins ANG1 and ANG2 is essential for vascular homeostasis. ANG1 stabilizes blood vessels via the Tie2 receptor, while ANG2 antagonizes this effect, promoting vascular remodeling and increased permeability, particularly in the presence of VEGF. Oxidative stress and inflammatory cytokines can upset this balance by increasing ANG2 expression, leading to abnormal angiogenesis in gynecological disorders. The Nrf2-ARE pathway, through antioxidant responses, helps counteract these effects by reducing ROS levels and restoring the ANG1/ANG2 ratio, promoting normal vascular function. Evidence suggests that interactions between the Nrf2-ARE pathway and the ANG2/ANG1 axis influence endometrial lesion progression by regulating oxidative stress and inflammation. Prolonged hypoxia may trigger persistent Nrf2 activation, contributing to vascular dysfunction and increased permeability. Zinc-finger proteins (ZnFs) have been identified as potential regulators of the Nrf2-ARE pathway and of angiopoietin expression, offering new therapeutic insights. ZnFs maintain redox balance and the transcriptional regulation of genes related to endometriotic cell survival and apoptosis resistance. In endometriosis and endometrial carcinoma, dysregulated ZnFs may impair Nrf2 function, exacerbating oxidative stress and promoting pathological angiogenesis via the ANG1/ANG2 axis. Furthermore, ZNFs regulate the activity of transforming growth factor β (TGF-β), which contributes to TIEG overexpression and induces apoptosis. Upregulation of ZNFs is associated with apoptosis resistance through the regulation of apoptotic genes such as BAX, Bcl-2, and Caspase-3 via ROS-induced oxidative damage. Additionally, ZNFs initiate an inflammatory response, support the implantation and survival of endometriotic lesions on the peritoneal surface, and contribute to the worsening course and development of endometriosis. The integrity of the newly formed vascular system is further regulated by ANG-1 (angiopoietin 1) and ANG-2 (angiopoietin 2), which may interact in the progression of endometriosis and represent potential therapeutic targets for non-steroidal anti-inflammatory drugs (NSAIDs) that influence angiopoietin expression. Suppression of the inflammatory mediator COX2 by NSAIDs poses a challenge in treating chronic inflammatory diseases, as prolonged use of NSAIDs has been shown to increase oxidative stress and disrupt the sensitive antioxidant status of patients. Metal complexes with NSAIDs represent an innovative approach to treating inflammatory diseases, as their effect is not limited to COX2 inhibition but also involves interaction with nucleic acids and direct modulation of enzyme activity, for example of MMPs. Our previous studies demonstrated the potential therapeutic effects of the NSAID-biometal complex [Zn(neo)(nif)2] (neo = 2,9-dimethyl-1,10-phenanthroline; nif = 2-[3-(trifluoromethyl)anilino]nicotinato), as it exhibited higher cytotoxicity towards cells with a high inflammatory metabolism. Continuing our investigation of the mechanism of action of this complex, we analyzed the expression of targets involved in angiogenesis activated by hypoxia-inflammatory stimuli.
DNA intercalation
Angiogenic, inflammatory, antioxidant, and apoptotic gene expression
Expression of angiogenic and inflammatory microRNAs
Angiogenic, inflammatory, and antioxidant protein expression
Mitochondrial Ca2+, H2O2 levels, and cytosolic levels of Ca2+
Our previous DNA binding studies, performed on samples isolated from endometriotic 12Z and control HME1 cell lines, indicated the binding specificity of [Zn(neo)(nif)2]. To further investigate the binding specificity of the studied complex, short double-stranded DNA (dsDNA) oligonucleotide sequences were selected for standard competitive fluorescence binding studies with ethidium bromide. The selected sequences were CCCTC-binding factor zinc-finger protein 3 (ZnF3-7) (5'-TAGCGCCCCCTGCTGGC-3'/3'-ATCGCGGGGGACGACCG-5') and CCAAT/enhancer-binding proteins (C/EBP) (5'-ATTGCGCAAT-3'/3'-TAACGCGTTA-5'). Both sequences are located in the regulatory regions of their respective genes and are recognized by transcription factors during transcription. Experimental results showed that the studied complex displaces ethidium bromide and binds to both sequences via intercalation, as indicated by quenching of fluorescence of the DNA-EB complex (Figure ). A more thorough evaluation of the results revealed a higher affinity of the complex for the ZnF3-7 sequence (K_SV = 2.17(2) × 10^5 M^-1) in comparison with the EBP sequence (K_SV = 1.10(3) × 10^5 M^-1) (Fig. ). We determined the relative gene expression of the angiogenic factors (VEGF-A, TGF-β1, ANG1, ANG2) (Table ), as we hypothesize that the [Zn(neo)(nif)2] complex preferentially binds to a ZnF-like sequence, such as ZNF3-7 or other ZNFs. The intercalation of [Zn(neo)(nif)2] into DNA can potentially influence the activity of angiogenic transcription factors. We tested this hypothesis by analyzing the expression of genes that regulate vascular formation in a spheroid model of HME1, 12Z, and A2780 cells. A non-significant increase in the ANG2/ANG1 gene expression ratio was observed in HME1 cells treated with [Zn(neo)(nif)2] (P = 0.9812), as well as in HME1 cells treated with cisPt (P = 0.7468). In the 12Z model, a significant elevation of the ANG2/ANG1 ratio was found in the cells treated with cisPt (P < 0.0001), as well as in those treated with [Zn(neo)(nif)2] (P < 0.0001), compared to untreated control 12Z 3D model cells (Fig. A). Comparing the efficiency of our compound with standard treatment (cisPt), the ANG2/ANG1 ratio showed a non-significant increase in A2780 cells with cisPt (P = 0.1243). In contrast, a significant increase in the ANG2/ANG1 ratio was observed in A2780 cells treated with [Zn(neo)(nif)2] (P < 0.0001) (Fig. A). Next, we analyzed the VEGF-A/TGF-β1 ratio as an indicator of angiogenic activity in the samples. Our experiments revealed a significant change in the spheroid cell model of HME1 under the tested conditions. In the HME1 model treated with cisPt, a significant increase in the VEGF-A/TGF-β1 ratio was observed (P < 0.0001), as well as in the HME1 model treated with [Zn(neo)(nif)2] (P < 0.0001). In 12Z cells treated with cisPt, a significant increase in the VEGF-A/TGF-β1 ratio was found (P = 0.0019), whereas a non-significant change was observed in the 12Z model treated with [Zn(neo)(nif)2]. In A2780 cells, treatment with both cisPt and [Zn(neo)(nif)2] resulted in significant increases in the VEGF-A/TGF-β1 ratio (P = 0.0044 and P < 0.0001, respectively) (Fig. B).
We analyzed the relative gene expression of the angiogenic factors (VEGF-A, TGF-β1, ANG1, ANG2), predicting that the studied complex preferentially binds to a ZnF-like sequence. Although significant changes in the individual expression levels underlying the calculated ratios were observed in specific groups (Table ), their gene expression did not reach significance across all conditions. To evaluate the effects of our compound on inflammation and antioxidant activity, we examined the Nrf2/COX2 gene expression ratio: it showed a non-significant increase in the HME1 model under both conditions, cisPt (P = 0.8172) and [Zn(neo)(nif)2] (P = 0.4868). 12Z cells treated with cisPt showed a non-significant change (P = 0.6750), whereas a significant elevation of the Nrf2/COX2 ratio was observed in the 12Z model treated with [Zn(neo)(nif)2] (P < 0.0001) (Fig. C). In the A2780 model, the Nrf2/COX2 ratio significantly increased under both treatment conditions (P < 0.0001). Furthermore, the COX2/HIF-1α ratio was analyzed to evaluate another aspect of the inflammatory impact of our compound. A significant increase in the COX2/HIF-1α gene expression ratio was observed in HME1 cells treated with both cisPt and [Zn(neo)(nif)2] (P < 0.0001). The change in the COX2/HIF-1α ratio was also significant in 12Z cells treated with cisPt (P = 0.0001) and [Zn(neo)(nif)2] (P = 0.0044). In contrast, the change in the COX2/HIF-1α ratio in A2780 cells treated with cisPt (P = 0.9844), as well as in those treated with [Zn(neo)(nif)2] (P = 0.9998), was non-significant (Fig. D). To evaluate the effect of our test compound on apoptosis-associated gene expression, we selected CAS3 and BAX. A non-significant decrease in the CAS3/BAX gene expression ratio was observed in HME1 cells treated with cisPt (P = 0.4299) and in the HME1 model treated with [Zn(neo)(nif)2] (P = 0.8530) (Fig. E). Similarly, a decrease in the CAS3/BAX ratio was noted in 12Z cells treated with cisPt (P = 0.8766), while a significant reduction was observed in the 12Z model treated with [Zn(neo)(nif)2] (P < 0.0001). In the A2780 model, the CAS3/BAX ratio showed no significant change under either treatment condition (cisPt P = 0.9997; [Zn(neo)(nif)2] P = 0.1528). Table summarizes the individual changes in the gene expression of the monitored inflammatory (COX2, HIF-1α), antioxidant (Nrf2), and apoptotic (CAS3, BAX) factors. Based on the detected changes in Nrf2 gene expression levels, we analyzed the gene expression of two selected Nrf2 target gene products, GPx1 and SOD1 (Fig. ). The relative gene expression of GPx1 significantly increased in all studied cell lines under both tested conditions (P < 0.001). Different conclusions were drawn for the relative gene expression of SOD1, where we observed a significant increase in the HME1 cell model following treatment with [Zn(neo)(nif)2] (P = 0.0188). In the 12Z cell model, a significant increase in expression was determined under the influence of both tested compounds (P < 0.0001), and in the A2780 cell model, we observed a significant increase in expression following treatment with cisPt (P < 0.0001). Table summarizes the individual changes in the gene expression of the monitored antioxidant factors (GPx1 and SOD1).
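For orientation, here is a sketch of how such relative expression values and ratios could be computed. The Methods state only that qRT-PCR expression was normalized to the housekeeping gene β-Actin (mRNA) or to Ct40 (miRNA); the widely used 2^-ΔΔCt (Livak) method is assumed here purely for illustration, and all Ct values below are invented:

```python
# Minimal sketch: relative expression via the 2^-ddCt (Livak) method,
# normalized to the housekeeping gene beta-Actin, then an ANG2/ANG1 ratio.
# The Livak method is an ASSUMPTION (the paper states only beta-Actin
# normalization), and all Ct values are invented for illustration.

def rel_expr(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    d_ct_treated = ct_target - ct_actin            # normalize treated sample
    d_ct_control = ct_target_ctrl - ct_actin_ctrl  # normalize untreated control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for a treated 12Z sample vs. an untreated control
ang2 = rel_expr(ct_target=24.1, ct_actin=17.0, ct_target_ctrl=25.6, ct_actin_ctrl=17.1)
ang1 = rel_expr(ct_target=27.9, ct_actin=17.0, ct_target_ctrl=26.8, ct_actin_ctrl=17.1)

print(f"ANG2 rel. expression: {ang2:.2f}")
print(f"ANG1 rel. expression: {ang1:.2f}")
print(f"ANG2/ANG1 ratio:      {ang2 / ang1:.2f}")  # >1 suggests a shift toward ANG2
```

A ratio above 1 in this sketch would correspond to the ANG2-favoring shift that the text interprets as an active angiogenesis phase.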
To further support our findings on the potential effects of our compounds on inflammatory and angiogenic pathways, we analyzed the delicate balance between pro-angiogenic (miR-23a, -133b, let-7c) and anti-angiogenic (miR-206, -376a) microRNA levels, which play a crucial role in the physiological regulation of vascular network formation and immune response. MicroRNAs are small yet highly significant molecules that regulate gene expression and control cellular metabolism. Our focus was on determining the ratio of the target miRNAs to selected angiogenic (VEGF-A, TGF-β1), antioxidant (Nrf2), and inflammatory factors (COX2, HIF-1α). The miRNA/mRNA ratio provides insight into the extent of miRNA influence on the expression of the corresponding mRNA, as miRNAs can promote degradation of the mRNA and inhibit its translation into protein; this ratio can therefore help predict the direction of cellular metabolism. We determined the ratios of miR-206, -23a, -376a, -133b, and let-7c against VEGF-A, as it has previously been described that these miRs influence VEGF-A expression and angiogenesis. We observed a significant increase in the miR-206/VEGF-A ratio (Fig. A) in the HME1 model treated with cisPt (P < 0.0001). A significant decrease in the miR-206/VEGF-A ratio was observed in 12Z cells treated with both cisPt (P < 0.00001) and [Zn(neo)(nif)2] (P < 0.0001). In A2780 cells, we observed a significant decrease in the miR-206/VEGF-A ratio only in the group treated with [Zn(neo)(nif)2] (P = 0.0021). A significant increase in the miR-23a/VEGF-A ratio (Fig. B) was observed in HME1 cells treated with cisPt (P < 0.0001). In the 12Z model, a significant decrease was observed under both treatment conditions, cisPt (P < 0.0001) and [Zn(neo)(nif)2] (P < 0.0001), while the A2780 cell model showed no significant changes. The miR-376a/VEGF-A ratio showed a significant increase in HME1 cells, again under cisPt treatment (P < 0.0001); in 12Z cells, a significant decrease in the miR-376a/VEGF-A ratio was observed, and the A2780 cells did not show significant changes in the miR-376a/VEGF-A ratio. A significant increase in the miR-133b/VEGF-A ratio (Fig. D) was observed in HME1 cells (P < 0.0001), while in 12Z cells a significant decrease was determined under both treatments (P < 0.0001). In the A2780 model, a significant decrease was observed with cisPt treatment (P = 0.0358) and with [Zn(neo)(nif)2] (P = 0.0015). We identified a significant increase in the let-7c/VEGF-A ratio in the HME1 model treated with cisPt (P < 0.0001). The 12Z cell model showed a significant decrease in the let-7c/VEGF-A ratio under both treatment conditions (P < 0.0001), as did the A2780 cells, which exhibited the same significant decrease (P < 0.0001) under both tested conditions (Fig. E). The expression of miR-133b and let-7c significantly impacts the expression of TGF-β1, as previously described. The levels of TGF-β1 and of miR-133b, along with let-7c, influence the epithelial-mesenchymal transition, which is characteristic of endometriosis progression. The calculated miR-133b/TGF-β1 ratio showed a significant increase in the HME1 model under both treatment conditions: cisPt (P = 0.0009) and [Zn(neo)(nif)2] (P < 0.0001). The 12Z model showed a significant decrease under [Zn(neo)(nif)2] treatment (P = 0.0045), and the A2780 model exhibited a significant decrease with cisPt treatment (P = 0.0301) and with [Zn(neo)(nif)2] treatment (P = 0.0100) as well (Fig. A).
We observed a significant increase in the let-7c/TGF-β1 ratio in HME1 cells treated with cisPt (P = 0.004) and [Zn(neo)(nif)2] (P = 0.0002). In the 12Z model, the let-7c/TGF-β1 ratio significantly decreased under both treatment conditions (cisPt P = 0.0201; [Zn(neo)(nif)2] P = 0.0043). The A2780 model also showed a significant decrease in the let-7c/TGF-β1 ratio under both treatment conditions (cisPt P = 0.0066; [Zn(neo)(nif)2] P = 0.0041) (Fig. B). MiR-206 recognizes the binding site of HIF-1α and can regulate the HIF transcription factor; it can inhibit cell proliferation and extracellular matrix accumulation by targeting HIF-1α. Based on the direct effect of miR-206 on HIF-1α, we performed an additional calculation of the miR-206/HIF-1α ratio, which showed a significant increase in HME1 cells treated with cisPt (P < 0.0001) and [Zn(neo)(nif)2] (P = 0.0039). In contrast, 12Z cells exhibited a significant decrease under treatment with [Zn(neo)(nif)2] (P = 0.0002), while A2780 cells treated with cisPt showed a significant increase (P = 0.0023) (Fig. C). Nrf2-dependent miR-206 plays an essential role in cell metabolism by targeting the pentose phosphate pathway, leading to the inhibition of proliferation. We observed a significant decrease in the miR-206/Nrf2 ratio in the 12Z model under cisPt treatment (P = 0.0013) and [Zn(neo)(nif)2] treatment (P = 0.0003). Similarly, A2780 cells exhibited a significant decrease under treatment with both compounds (cisPt P = 0.0047; [Zn(neo)(nif)2] P = 0.0021) (Fig. D). In contrast, HME1 cells showed a slight, non-significant reduction in miR-206/Nrf2 under both tested conditions. This reduction may reflect elevated antioxidant activity, leading to increased Nrf2 levels, which could, in turn, decrease miR-206 expression under the studied conditions. In the HME1 spheroids, we observed a significant elevation of the angiogenesis-promoting miR-133b (cisPt P = 0.0005; [Zn(neo)(nif)2] P = 0.0030) (Figure A), along with a considerable upregulation of the angiogenesis-inhibiting miR-206 (cisPt P = 0.0070; [Zn(neo)(nif)2] P = 0.0187) (Figure B). The levels of the other miRs did not show considerable changes in either treatment group. In the 3D model of the 12Z cell line (Figure C), a significant decrease was observed in the levels of the angiogenesis-promoting miR-23a (cisPt P = 0.0471; [Zn(neo)(nif)2] P = 0.0149) and let-7c (cisPt P < 0.0001; [Zn(neo)(nif)2] P = 0.0001) (Figure D). The expression levels of the remaining target miRs did not show significant changes in either treatment group. Spheroids of A2780 (Figure E) exhibited a significant downregulation of the angiogenesis-promoting miR-133b ([Zn(neo)(nif)2] P = 0.0470), a decrease in the pro-angiogenic let-7c (cisPt P = 0.0019; [Zn(neo)(nif)2] P = 0.0002), and a significant upregulation of the angiogenesis-inhibiting miR-376a ([Zn(neo)(nif)2] P = 0.0006) (Figure F). The levels of the other miRs did not show considerable changes in either treatment group. Gene expression typically predicts the corresponding protein levels; however, these levels may be influenced by post-transcriptional and post-translational modifications, potentially leading to unexpected protein levels. We analyzed the protein levels of the angiogenic proteins VEGF-A and TGF-β1, the inflammatory marker COX2, and the antioxidant marker Nrf2 (both in its total and its phosphorylated (active) form) (Table ).
To evaluate protein levels under the tested conditions, we calculated the VEGF-A/TGF-β1 ratio, the Nrf2 active/COX2 ratio, and the Nrf2 active/Nrf2 ratio. In the control model of HME1 cells, the VEGF-A/TGF-β1 ratio showed a non-significant decrease (Fig. A). In the 12Z cell model, a significant increase was observed following treatment with cisPt (P < 0.0001), while treatment with [Zn(neo)(nif)2] resulted in a non-significant decrease (P = 0.1781). In A2780 cells, a significant increase was observed with cisPt (P = 0.0487), whereas a significant decrease was noted with [Zn(neo)(nif)2] treatment (P = 0.0451). The Nrf2 active/COX2 ratio showed a significant increase only in the A2780 model treated with cisPt (P = 0.0045) (Fig. B), while no considerable changes were observed in the other tested models (HME1 and 12Z). The regulatory action of the Nrf2 protein is exerted only in its phosphorylated form. Therefore, we analyzed the ratio of phosphorylated (active) Nrf2 to its total level in the tested groups (Fig. C). The results revealed a significant decrease in the Nrf2 active/Nrf2 ratio in A2780 cells treated with cisPt (P = 0.0366); no considerable change was observed under the remaining tested conditions, whether treatment with cisPt or [Zn(neo)(nif)2]. The key regulatory factors COX2 and TGF-β1 cooperate in the development of inflammation. The COX2/TGF-β1 ratio showed a significant increase in 12Z cells treated with cisPt (P < 0.0001) (Fig. D) and in A2780 cells treated with [Zn(neo)(nif)2] (P = 0.0432). VEGF-A and Nrf2 are pivotal in regulating angiogenesis and the cellular response to oxidative stress; their reciprocal relationship is illustrated in Fig. E. In the HME1 model, the VEGF-A/Nrf2 ratio significantly decreased under treatment with [Zn(neo)(nif)2] (P = 0.0254). In the 12Z model, this ratio significantly decreased under treatment with both cisPt (P = 0.0017) and [Zn(neo)(nif)2] (P = 0.0123). In the A2780 model, a significant decrease was observed under treatment with [Zn(neo)(nif)2] (P = 0.0225). The final protein ratio analyzed was the TGF-β1/Nrf2 ratio (Fig. F), which reflects the regulation of oxidative stress and inflammation at the cellular level. In the 12Z model, a significant decrease was observed under treatment with [Zn(neo)(nif)2] (P < 0.0001). Similarly, in the A2780 model, a significant decrease was observed under treatment with cisPt (P = 0.0097).
Mitochondrial Ca2+, H2O2 levels, and cytosolic levels of Ca2+
Mitochondrial calcium overload, caused by Ca2+ influx released from the endoplasmic reticulum under stress conditions, stimulates immune responses and ultimately leads to apoptosis. For live-imaging measurements, we selected the epithelial cell lines HME1 and 12Z based on the gene expression results of the angiogenic and apoptotic factors. Excessive mitochondrial calcium accumulation can trigger the opening of the mitochondrial permeability transition pore, resulting in the release of calcium from the mitochondria into the cytosol, serving as an indicator of apoptotic processes in the cell. We quantified basal mitochondrial Ca2+ levels using the genetically encoded mitochondrial Ca2+ biosensor 4mtD3cpv. A significant increase in mitochondrial Ca2+ levels was observed in control HME1 cells in response to treatment with cisPt (P = 0.0069) and [Zn(neo)(nif)2] (P = 0.0001) (Fig. A).
Additionally, a significant increase in the mitochondrial Ca2+ level was detected in endometriotic 12Z cells treated with [Zn(neo)(nif)2] (P < 0.0001), whereas no effect was observed with cisPt treatment (Fig. D). To investigate further, cytosolic free Ca2+ levels were measured using the Ca2+ dye Fura-2. Significant changes were observed in both tested epithelial cell lines (HME1, 12Z). In HME1 cells, basal cytosolic Ca2+ levels significantly changed in response to [Zn(neo)(nif)2] (P < 0.0001) (Fig. B). Similarly, in 12Z cells, cytosolic Ca2+ levels significantly increased in response to cisPt (P < 0.0001) as well as [Zn(neo)(nif)2] (P < 0.0001) (Fig. E). Mitochondrial metabolism is closely associated with Ca2+ and ROS levels. To assess mitochondrial ROS levels, we utilized the genetically encoded mitoHyPer7 biosensor. A significant increase in ROS was observed in 12Z endometriotic cells following treatment with [Zn(neo)(nif)2] (P = 0.0126), whereas the increase in HME1 cells was not statistically significant (Fig. 7C, F). Angiogenesis is a physiological process that facilitates the formation of the primary vascular network necessary for tissue growth and repair. It regulates oocyte maturation, the development of functional corpora lutea, and uterine endometrial growth and decidualization. Disruption of this process due to the constant activation of angiogenic factors can lead to excessive vessel growth, contributing to the development and progression of endometriosis and its potential malignant transition. The complex interplay between the immune system, hormones, microelements, and genetic factors significantly influences the development and progression of endometriosis. Transcription factors such as ZNFs and miRs play a dual role: they can reduce inflammation via immunosuppression, thereby promoting the spread and invasiveness of the condition. Additionally, they can inhibit apoptotic cell death in endometriotic cells and wild-type tumors, such as endometrioid adenocarcinoma, ovarian cancer, or cervical squamous cell carcinoma. The regulatory gene sequences of angiogenic factors can vary depending on the specificity of the target ZNFs. For example, ZNF471 has been shown to regulate the expression of EMT-related markers and transcription factors involved in angiogenesis, cellular migration, and vasculogenic mimicry. Conversely, ZNF24 has been reported to repress VEGF transcription. ZNF3 is known to be highly expressed in colorectal carcinoma cells, where it plays a role in cellular proliferation, migration, and invasion. If [Zn(neo)(nif)2] intercalates into the ZNF3 sequence, it could exert a suppressive effect on target genes, which aligns with the gene expression changes observed in the 12Z and A2780 cell lines. The obtained data indicate that the [Zn(neo)(nif)2] complex may influence gene regulation, as evidenced by its impact on the expression of target genes associated with the promotion or suppression of angiogenesis (e.g., mRNA of ANG1, ANG2, TGF-β1, HIF-1α, COX2, Nrf2, BAX, and CAS3), on the expression of microRNAs (e.g., miR-133b, miR-206, miR-376a, or let-7c), as well as on the expression of proteins related to angiogenesis (e.g., COX2, VEGF-A, TGF-β1, and Nrf2). The molecular conformation of the complex suggests the possibility of intercalation, whereby the aromatic neocuproine ligand may intercalate between DNA base pairs, potentially stabilized by π-π stacking interactions, hydrogen bonding, van der Waals forces, and hydrophobic interactions.
Although providing a definite explanation is challenging, the observed binding specificity towards the ZnF3-7 sequence might involve a preference for specific base pair sequences (e.g., C-G), as suggested by recent computational studies on aromatic organic molecules. Since intercalation of the [Zn(neo)(nif)2] complex induces structural changes in the ZnF3-7 sequence, it may prevent ZnF3 from binding to the major groove, thereby altering gene expression. Recognition of the ZnF3-7 sequence, where zinc-finger proteins bind, plays a crucial role in regulating gene expression, which might be a key aspect of the studied complex's mechanism of action at the cellular level. Significant changes in TGF-β1 expression were observed in the monolayer model of the cell lines used in the experiment (Table ), with increased levels in endometriotic 12Z cells treated with cisPt and [Zn(neo)(nif)2], and decreased levels in endometrioid adenocarcinoma A2780 cells treated with the same compounds. It is well known that TGF-β1 acts as a potent immunosuppressor by regulating the proliferation and survival of immune system cells and inducing cell type-specific apoptosis. Additionally, TGF-β1 is a target of the microRNA let-7c, which also regulates HIF-1α, estrogen receptor α, and several other genes involved in angiogenesis, cell cycle regulation, and signaling pathways. Let-7c can also exhibit oncogenic effects, as it is highly expressed in ovarian cancers with poor prognosis and decreased overall survival. Both VEGF-A and TGF-β1 play crucial roles in angiogenesis but have opposing effects on endothelial cells. We observed a decreasing trend in the let-7c/TGF-β1 ratio in the 12Z and A2780 cell models (Fig. B), which could be attributed to the apoptosis-inducing properties of TGF-β1. Conversely, an increased let-7c/TGF-β1 ratio was observed in HME1 cells. It is well known that let-7c has the ability to inhibit TGF-β1 expression; a decreased level of let-7c may lead to TGF-β1-mediated induction of fibrosis effectors (e.g., collagen I), potentially predicting disease progression. On the other hand, studies have demonstrated that microRNAs of the let-7 family can affect angiogenesis by modulating TGF-β1 signaling. This could reflect an effect similar to that of let-7f, which has been linked to the activation of the anti-angiogenic TGF-β1/ALK5 pathway. Additionally, the observed elevation in the miR-133b/TGF-β1 ratio (Fig. A) in HME1 cells is significant, as miR-133b functions as an oncogene suppressor by regulating TGF-β1 receptors I and II. Interestingly, the miR-133b/TGF-β1 ratio decreased in the 12Z and A2780 models, which could be explained by the fact that TGF-β1 can act as both an oncogenic and a tumor-suppressive agent, depending on the tumor stage and type. TGF-β1 can upregulate COX2 expression, leading to increased production of prostaglandin E2; this, in turn, influences the COX2 pathway and may induce invasiveness in cooperation with oncogenic signals. This phenomenon could explain the increased COX2/TGF-β1 protein ratio (Fig. D). Further research has demonstrated that TGF-β1 can elicit an Nrf2-mediated antioxidant response, contributing to its anti-inflammatory properties. For instance, TGF-β1's ability to induce Nrf2 activity has been associated with protection against vascular wall rupture. On the other hand, Nrf2 has been shown to counteract TGF-β1-mediated growth inhibition, suggesting that Nrf2 may influence the pro-tumorigenic functions of TGF-β1.
We analyzed the decreased TGF-β1/Nrf2 ratio in 12Z and A2780 cells under treatment with cisPt (Fig. F), which could represent its tumorigenic action in cooperation with COX2 and VEGF-A levels. Another significant target, VEGF-A, which protects endothelial cells from apoptosis, was unexpectedly elevated in the control HME1 cell line and showed a non-significant downregulation in the 12Z and A2780 cell lines (Table ). Given that the simultaneous overexpression of VEGF-A and TGF-β1 is associated with poorer cancer prognoses, we analyzed the gene expression ratio of these two markers. The VEGF-A/TGF-β1 ratio (Figs. B, A) decreased only in the 12Z cell line after treatment with [Zn(neo)(nif)2], suggesting a potentially better prognosis. However, it has been reported that TGF-β1 suppresses VEGF-A-mediated angiogenesis in colon cancer metastasis, despite the fact that aberrant TGF-β1 expression is critical in the development of endometriosis, which shares several parallels with tumorigenesis. We observed an increase in the VEGF-A/TGF-β1 ratio in both the A2780 and HME1 models, which may indicate the suppression of VEGF-A-mediated angiogenesis. The reciprocal interaction between VEGF-A and Nrf2 can drive a positive feedback loop that promotes angiogenesis. The decreased VEGF-A/Nrf2 ratio (Fig. E) may indicate a potential reduction in angiogenic signals. The role of ncRNA in cellular, tissue, and systemic metabolic processes is indisputable. MicroRNAs can exhibit both pro-angiogenic (miR-23a, -133b, let-7c) and anti-angiogenic (miR-206, -376a) effects. MicroRNAs known to influence VEGF-A expression, such as miR-206, negatively regulate angiogenesis by directly targeting VEGF-A. Similarly, miR-23a reduces VEGF-A levels but also downregulates Nrf2 and CAT, potentially altering ROS levels. MiR-133b, which plays an oncogenic role in the progression of cervical carcinoma and breast cancer, did not show significant changes in its ratio to VEGF-A expression (Fig. ). The HME1 cell model showed a significant increase under treatment with cisPt, while no significant change was observed with [Zn(neo)(nif)2] treatment. In contrast, the 12Z and A2780 models exhibited a decrease in the ratios, which was more pronounced and statistically significant in 12Z cells. Significant expression changes were observed for miR-206 in HME1 cells, miR-23a in 12Z cells, and miR-133b in HME1, 12Z, and A2780 cells (Figure ). The anti-angiogenic miR-376a inhibits VEGF-A signaling by targeting SIRT1 or neuropilin 1 in various cancer cells. Finally, we analyzed the let-7c/VEGF-A ratio (Fig. E) to further investigate the microRNA effect on VEGF-A expression. This pro-angiogenic microRNA showed a decreased ratio under all tested conditions in 12Z and A2780 cells. Chrishev et al. reported elevated expression of let-7c in ovarian tissue compared to endometrial tissue, suggesting that let-7c may have oncogenic effects with poor prognosis and lower overall survival, which aligns with our observations. The decreased let-7c expression observed under our tested conditions may indicate a better prognosis. Since Ang1 stabilizes blood vessels while Ang2 induces angiogenesis, the elevated Ang2/Ang1 ratio (favoring Ang2) (Fig. A) likely reflects an active angiogenesis phase. We hypothesize that the observed increase in ANG2 expression, alongside a simultaneous decrease in ANG1 under the tested conditions, may serve as an independent predictor of cell death, similar to findings reported by Ong et al.
The ANG2/ANG1 ratio may be a valuable prognostic biomarker of endothelial activation in endometriosis or endometrioid adenocarcinoma, particularly in combination with altered expression of VEGF-A and TGF-β1. Based on the gene expression changes, we hypothesize that alterations in the Nrf2/COX2 ratio may reflect shifts in the regulatory roles of HIF-1α and COX2 in the Nrf2-mediated inflammatory response. The decreased CAS3/BAX ratio (Fig. E, B) suggests enhanced pro-apoptotic stimuli resulting from mitochondrial dysfunction closely linked to endoplasmic reticulum stress, potentially influenced by the studied compounds in HME1 cells. The observed elevation in the CAS3/BAX ratio in 12Z and A2780 cells under treatment with [Zn(neo)(nif)2] may indicate the activation of CAS3 in programmed cell death processes. MiR-206 has been reported to influence HIF-1α and Nrf2 expression in relation to ROS production and accumulation, as it inhibits cell growth even under the high-glucose metabolic conditions typical of cancer cells. We observed a decrease in the miR-206/HIF-1α ratio across all tested cell lines following cisPt treatment. In HME1 cells, treatment with [Zn(neo)(nif)2] resulted in a significant increase in the miR-206/HIF-1α ratio, whereas a decrease was observed in 12Z and A2780 cells. This decrease may indicate increased resistance to apoptosis and could be indicative of disease progression. The miR-206/Nrf2 ratio suggests upregulation of Nrf2 across all tested conditions, which may enhance antioxidant defense, cytoprotection, and resistance to oxidative stress-induced apoptosis. Conversely, a decrease in the miR-206/Nrf2 ratio may indicate the promotion of tumor progression, as Nrf2 can support cancer cell survival under stressful conditions. On the other hand, in oxidative disorders, a lower miR-206/Nrf2 ratio may be protective by reducing oxidative damage (Fig. C). Nrf2 is a crucial regulator of attenuated endothelial miR-206 expression and can drive tumorigenesis through dysregulation of the Krebs cycle or the pentose phosphate pathway. The Nrf2 pathway exhibits dual roles: it can act as a tumor suppressor by reducing ROS levels through its antioxidant function, yet it can also promote tumorigenesis by inducing ROS production and enhancing tumor growth. The precise role of Nrf2 in the studied epithelial cell lines treated with the tested compounds requires further investigation. The findings suggest that the studied [Zn(neo)(nif)2] complex may contribute to mitochondrial calcium overload, resulting in increased ROS production. This mitochondrial Ca2+ accumulation could be associated with the activation of apoptotic genes (BAX, CAS3) and the potential involvement of the mitochondrial permeability transition pore (mPTP). Furthermore, Ca2+ transfer through the mPTP may lead to elevated Ca2+ levels, which, together with increased ROS levels, could play a role in the induction of apoptosis or apoptosis-like cell death.
DNA binding studies
Cell lines and cultivation protocol
Cell transfection and treatment with tested compounds
3D tissue models
Mitochondrial H2O2 measurements
Cytosolic Ca2+ measurements
Mitochondrial Ca2+ measurements
Statistical analysis
The double-stranded oligonucleotides ZnF3-7 (5'-TAGCGCCCCCTGCTGGC-3'/3'-ATCGCGGGGGACGACCG-5') and EBP (5'-ATTGCGCAAT-3'/3'-TAACGCGTTA-5') were prepared by annealing forward and reverse single-stranded oligonucleotide sequences, which were obtained from commercial suppliers (Sigma Aldrich). Competitive fluorescence binding studies were conducted following a conventional procedure.
Ethidium bromide (2.5 μM) was added to the respective oligonucleotides to form the DNA-EB complex. The studied compound was gradually added to this mixture at concentrations of 0 to 5 μM. The fluorescence emission spectra (λex = 520 nm) were recorded after each addition of the complex, and the maximum emission intensity values were used to calculate the binding constants (K_SV) using the standard Stern–Volmer equation (a worked fitting sketch is given below): $$\frac{F_0}{F} = 1 + K_{SV}[Q]$$ where F_0 and F are the emission intensities in the absence and presence of the complex and [Q] is the complex (quencher) concentration. We conducted experiments on three epithelial cell lines: HME1, 12Z, and A2780. The HME1 cell line (ExPASy hTERT-HME1) is an hTERT-immortalized cell line with epithelial morphology, derived from the breast of a 53-year-old female patient undergoing reduction mammoplasty with no history of breast cancer. HME1 cells were used as a model of physiological angiogenesis. The 12Z cell line (a kind donation from Prof. Anna Starzinski-Powitz, Goethe-Universität Frankfurt) is an SV40 virus-immortalized cell line obtained from a 37-year-old female patient undergoing laparoscopy; this cell line expresses markers characteristic of the endometriotic lesions observed in vivo. The A2780 cell line (a kind donation from Dr. Martina Šemeláková, PhD, Pavol Jozef Šafárik University in Košice) is a human ovarian cancer cell line, originally established from an endometrioid adenocarcinoma of an untreated patient. The cell cultures were maintained according to cell line-specific protocols using the appropriate culture media: Roswell Park Memorial Institute (RPMI) 1640 Medium for A2780 cells, Dulbecco's Modified Eagle's Medium (DMEM) for 12Z cells, and Human Mammary Epithelial Cell Growth Medium (MEBM) mixed with Nutrient mixture medium F-12 Ham (1:1) for HME1 cells. All culture media were supplemented with 10% Fetal Bovine Serum (FBS) and 1% Penicillin/Streptomycin. The cells were incubated at 37 °C in a humidified atmosphere containing 5% CO2. All microscopic experiments were performed on 30 mm glass coverslips plated with cells in 6-well plates. Cells were transfected at 50–60% confluency with the organelle-targeted biosensors mitoHyPer7 (1.5 μg/well), mtD1GoCam (1.5 μg/well) and the FRET-based Ca2+ biosensor 4mtD3cpv (1.5 μg/well), using 3 μL of TransFast transfection reagent (Promega, Madison, WI, USA) in 1 mL of serum- and antibiotic-free medium for 8–12 h; the cytosolic Ca2+ indicator Fura-2 acetoxy-methyl-ester (Fura-2AM) (1.5 μg/well) was loaded by incubation, as described under the cytosolic Ca2+ measurements. Following transfection, the medium was replaced with 2 mL of experimental EH-loading buffer (Table ), and measurements were conducted for 2–3 h at room temperature. The tested compounds, cis-platin (cisPt) and [Zn(neo)(nif)2], were used at a final concentration of 10 μM (based on the IC50 values, Table ) in the appropriate complete cultivation medium. The compounds were applied to adherent cells (at 50–60% confluency) or to spheroid cells for 8 h, based on the results of the Cell Viability Assay (Figure ). Our experiments utilized 3D tissue spheroids to study mRNA/miRNA expression, providing a more reliable tissue model for angiogenesis than conventional 2D monolayer in vitro experiments. To form the spheroids, we used U-bottom 96-well plates whose surface was coated with 0.8% LE agarose to create a thin, non-adhesive film. Cells were seeded as a single-cell suspension (5–120 × 10^4 cells/mL, depending on the doubling time of each experimental tissue culture) in 200 μL of complete medium per well of the microtitration plates.
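Returning to the competitive binding protocol above: the K_SV values reported in the Results (e.g., 2.17(2) × 10^5 M^-1 for ZnF3-7) follow from a linear fit of F0/F against the complex concentration. A minimal sketch of such a fit, with invented intensity values used purely for illustration:

```python
# Minimal sketch: estimating the Stern-Volmer constant K_SV from
# fluorescence quenching data, using F0/F = 1 + K_SV * [Q].
# The emission intensities below are invented for illustration only.
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]) * 1e-6  # [complex] in mol/L
f = np.array([100.0, 82.0, 69.5, 60.3, 53.2, 47.6])     # emission maxima
f0 = f[0]                                                # intensity without quencher

# Linear regression of (F0/F - 1) on [Q]; the slope is K_SV.
slope, intercept = np.polyfit(conc, f0 / f - 1.0, 1)
print(f"K_SV ~ {slope:.3g} M^-1")  # on the order of 10^5 M^-1 for these values
```

A larger slope (steeper quenching) corresponds to a higher binding affinity, which is how the text ranks ZnF3-7 above the EBP sequence.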
The morphology of the spheroids for all experimental cell lines under the tested conditions is shown in Figure . Total RNA was extracted from the cell suspension using the RNeasy Mini Kit (Qiagen; Hilden, Germany) with a modified manufacturer's protocol. The isolated nucleic acid was transcribed into cDNA using the ProtoScript First Strand cDNA Synthesis Kit (New England Biolabs; Ipswich, MA, United States) and a thermocycler (Techne TC-3000X). qRT-PCR amplification was performed using SensiMIX II (Bioline Meridian Bioscience; London, England) on the Rotor-Gene Q system (Qiagen; Hilden, Germany) to detect target mRNA expression. For microRNA analysis, the isolated miRNA was processed using the TaqMan™ MicroRNA Reverse Transcription Kit (Applied Biosystems™) with the Techne TC-3000X thermocycler, followed by qRT-PCR amplification using TaqMan™ Universal Master Mix II no UNG (Applied Biosystems™) on the Rotor-Gene Q thermocycler system (Qiagen; Hilden, Germany) to detect target miRNA expression. The primer sequences and TaqMan probes used are listed in the supplementary data (Table ). The obtained data were analyzed using Rotor-Gene Q 2.5.3 Software (Qiagen; Hilden, Germany), with relative mRNA expression normalized to the housekeeping gene β-Actin and relative miRNA expression normalized to Ct40, as described by Gevaert et al.
ELISA
The VEGF-A protein level was analyzed using the Human VEGF ELISA Kit (AB100662), while the TGF-β1 protein level was determined with the Human TGF beta 1 ELISA Kit (AB100647). The COX2 protein level was assessed using the Human COX2 ELISA Kit (AB267646), and the Nrf2 transcription factor was analyzed using the Human Nrf2 ELISA Kit (AB277397). The phosphorylated Nrf2 transcription factor was determined with the Nrf2 Transcription Factor Assay Kit (AB207223). All analyses were conducted on cell suspensions and followed the manufacturer's instructions (Abcam, Cambridge, UK). ELISA plates were read at 450 nm using the SYNERGY HTX multi-mode reader (BioTek Instruments, Winooski, Vermont, USA), and data analysis was performed using Gen5 3.10 Software (BioTek Instruments). Quantification of the prepared samples was carried out using standard curve analysis.
Live-Cell Imaging
We conducted live-cell imaging experiments using the following equipment: a Zeiss array confocal laser scanning microscope (Axio Observer.Z1, Zeiss, Göttingen, Germany) equipped with a 100× objective lens (Plan-Fluor 100×/1.45 Oil, Zeiss, Germany), a motorized filter wheel (CSUX1FW, Yokogawa Electric Corporation, Tokyo, Japan) on the emission side, and an AOTF-based laser merge module for the 405, 445, 473, 488, 514, and 561 nm laser lines (Visitron Systems); the system included a Nipkow-based confocal scanning unit (CSU-X1, Yokogawa Electric Corporation), and data acquisition and fluorescence microscope control were performed using Visiview 4.2.01 (Visitron, Puchheim, Germany). We also used an inverted wide-field microscope, Anglerfish (Observer.A1, Carl Zeiss GmbH, Vienna, Austria), with a 40× oil immersion objective (Plan Apochromat 1.3 NA Oil DIC (UV) VISIR, Carl Zeiss GmbH, Vienna, Austria) and a standard CFP/YFP filter cube. Emission collection was facilitated by a 505dcxr beam-splitter directing light to both sides of the camera (CCD camera, Coolsnap Dyno, Photometrics, Tucson, AZ, USA). Visualization was carried out using the NGFI AnglerFish C-Y7G imager for emission collected with the Anglerfish.
A constant buffer perfusion flow was maintained using the NGFI perfusion system (PS9D, NGFI, Graz, Austria).
Mitochondrial H2O2 measurements
We measured mitochondrial H2O2 levels using the genetically encoded H2O2 sensor mitoHyPer7. The mitoHyPer7 signals were imaged by alternately exciting the cells with a motorized dual filter system equipped with LED 480 nm (excitation filter 480/17 nm) and LED 430 nm (excitation filter 433/24 nm) beam splitters. Emissions were alternately collected using a 535/22 BrightLine HC emission filter, as previously described by Tawfik et al. Cells were initially perfused with HEPES buffer (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid) to record H2O2 production for the first 2 min. Subsequently, [Zn(neo)(nif)2] (10 µM) was added for the following 6 min, and finally the cells were perfused again with HEPES for an additional 2 min. The acquired data were saved as image files during the experiments and analyzed using Fiji software (ImageJ2). Background and photobleaching corrections were performed in Excel, and the data were analyzed further using GraphPad Prism 8.01.
Cytosolic Ca2+ measurements
We measured cytosolic Ca2+ concentrations in cells incubated with the fluorescent cytosolic Ca2+ indicator Fura-2 acetoxy-methyl-ester (Fura-2AM) (TEFLabs, Austin, TX) for 30 min in EH-loading buffer. Cells stained with Fura-2AM were illuminated at 340 nm and 380 nm, with emission captured at 515 nm, as previously described. The measurements were recorded as the F380/F340 ratio using live-acquisition software v2.0.0.12 (Till Photonics) and analyzed using GraphPad Prism 8.01. Background subtraction was performed using a designated background region of interest (ROI), and bleaching correction was applied using an exponential decay fit of the basal fluorescence extrapolated across the entire measurement. The results represent the maximal change (Δmax) in cytosolic Ca2+ levels in response to stimulation of the cells with ATP or histamine (100 µmol/L).
Mitochondrial Ca2+ measurements
Mitochondrial Ca2+ measurements were conducted using the genetically encoded biosensor 4mtD3cpv. The excitation wavelength for 4mtD3cpv was set at 440 nm (440AF21, Omega Optical, Brattleboro, VT, USA), and emissions were captured at 480 and 535 nm (480AF30 and 535AF26, Omega Optical, Brattleboro, VT, USA), as previously described. Data acquisition was performed using NIS-Elements AR software (Nikon, Vienna, Austria), and analysis was carried out using GraphPad Prism 8.01. Measurements were corrected using a background region of interest (ROI), and photobleaching correction was applied using an exponential decay fit. The results represent the maximal change (Δmax) in mitochondrial Ca2+ levels in response to stimulation of the cells with ATP or histamine (100 µmol/L).
Statistical analysis
The experimental qRT-PCR mRNA data were analyzed using GraphPad Prism 8.01 (GraphPad Software, San Diego, CA, USA) and are represented as mean values ± SD of three independent measurements performed in duplicate (one independent measurement was performed in duplicate for miRNA determination, and one independent measurement was performed in triplicate for GPx1 and SOD1, respectively). Cytosolic calcium measurement data were evaluated using GraphPad Prism 8.01 and are expressed as mean values ± SD of three independent measurements for untreated cells: HME1 (n = 46) and 12Z (n = 102); cells treated with 10 μM cis-platin: HME1 (n = 43) and 12Z (n = 88); and cells treated with 10 μM [Zn(neo)(nif)2]: HME1 (n = 50) and 12Z (n = 99).
Mitochondrial calcium measurement data were evaluated using GraphPad Prism 8.01 and are expressed as mean values ± SD of three independent measurements for untreated cells: HME1 (n = 4) and 12Z (n = 14); cells treated with 10 μM cis-platin: HME1 (n = 6) and 12Z (n = 14); and cells treated with 10 μM [Zn(neo)(nif)2]: HME1 (n = 5) and 12Z (n = 14). Mitochondrial H2O2 data were analyzed using GraphPad Prism 8.01 and are represented as mean values ± SD of three independent measurements for untreated cells: HME1 (n = 3) and 12Z (n = 3); and cells treated with 10 μM [Zn(neo)(nif)2]: HME1 (n = 3) and 12Z (n = 4). Statistical analysis was performed using Student’s t-test and nonparametric analysis of variance (ANOVA), followed by Tukey’s post hoc test and Dunnett’s multiple comparison test. Statistically significant results were denoted as follows: P-value < 0.05 (*, significant), P-value < 0.01 (**, highly significant), and P-value < 0.001 (***, strongly significant).
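As a worked illustration of the statistics named above (an unpaired t-test and ANOVA with Tukey's post hoc test), the following sketch runs those tests on invented group values; it does not reproduce the study's data, and Dunnett's test is omitted for brevity.

# Illustrative statistics on invented data: unpaired t-test, one-way ANOVA,
# and Tukey's post hoc test, mirroring the workflow described above.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
untreated = rng.normal(1.00, 0.10, 6)    # hypothetical normalized values
cisplatin = rng.normal(1.45, 0.12, 6)
zn_complex = rng.normal(1.80, 0.15, 6)

t_stat, p_t = stats.ttest_ind(untreated, zn_complex)      # two-group test
f_stat, p_anova = stats.f_oneway(untreated, cisplatin, zn_complex)

values = np.concatenate([untreated, cisplatin, zn_complex])
groups = ["untreated"] * 6 + ["cisplatin"] * 6 + ["Zn(neo)(nif)2"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))      # pairwise post hoc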
When great responsibility comes with limited options: experiences and needs of older community-dwelling adults regarding accessing, understanding, appraising and using health-related information
55f33979-b3c8-47ce-89ec-cd9bf4296e78
11292880
Health Literacy[mh]
Health literacy (HL) refers both to the personal skills needed to access, understand, appraise and use health-related information to make reasoned health-related decisions and to the ability to navigate the healthcare system . HL is therefore considered essential to maintaining and improving quality of life throughout the life course . With advancing age, older adults may require more frequent interactions with health-related information. Consequently, the importance of addressing HL concerning older adults has been emphasised , particularly because of population projections that indicate a global increase in the number of older adults in the future and the need to enhance health promotion for this group . In Europe, an effort has been made to measure HL levels, for example, with the European Health Literacy Survey Questionnaire (HLS-EU-Q), among the general population. Limited HL is associated with social and socioeconomic conditions, particularly lower levels of education, income, low social status and older age . Acknowledging HL as an interaction of individual skills within a social context, it is essential to look beyond the personal level and include the social structures in which people live, that is, to better understand the influence of the situations in which people are required to use their HL skills and capabilities . There is, for example, a heightened focus on the organisational context of HL, the health system’s demands and the complicated information environment in a modern world . However, at the same time, it is essential to recognise the complexity of the social context of HL . This has, for example, been addressed in research focusing on experiences related to health information among socioeconomically disadvantaged adults in Switzerland , among refugees in Sweden and as part of information literacy in everyday life among people aged 47–64 and 57–70 in Australia . Research findings concerning older adults in Iceland echo this complex interaction between the ageing process, HL and both personal and environmental factors. Notably, HL has been connected to the personal factors of age in years, education level, income, resilience and depression and the environmental factors of means of transport and perceived access to healthcare and medical services . These factors seem to play an important role in HL; however, further information is needed to comprehensively understand this dynamic interaction between older adults, HL and their context. In contrast to the prevailing use of quantitative measurements of HL, a qualitative perspective is needed to gain a deeper understanding of the matter. Therefore, this study aimed to explore the experiences and needs of older community-dwelling adults regarding accessing, understanding, appraising and using health-related information. Study design This qualitative study within the social constructivism framework sought to understand the specific contexts in which people live . An explorative design was used to find and create knowledge of the focused and little-studied phenomena . We conducted individual semi-structured interviews to generate qualitative data, get insights into the lives of older adults and establish knowledge .
The research group consisted of three Icelandic researchers (SSG, SAA and AKS), a Swedish researcher (LM) and an Icelandic senior citizen (AS). The four researchers formed an interdisciplinary team of occupational therapists (SSG and LM), a physiotherapist (SAA) and a nurse (AKS), working as professors/researchers (LM, SAA and AKS) and a PhD student (SSG). The group’s expertise, research and lived experience lie within ageing, daily living, gerontology, HL, health promotion, qualitative research and urban/rural settings. An application for ethical approval was sent to the Icelandic National Bioethics Committee. The committee deemed permission not necessary according to Icelandic law on scientific research in health (VSN-21–009 based on VSNb2016060007/03.01). Participants and setting Participants in this study were purposefully selected from 175 participants in a previous quantitative cross-sectional study on HL. That study was based on a stratified random sample from the national register of community-dwelling people 65 years and older in one urban town and two rural areas in Northern Iceland . To get as broad a perspective as possible, the selection criteria for this study were based on the aim of interviewing older people with various backgrounds regarding place of living, age, gender, education, means of transport and distance from services. The selection procedure was conducted in three steps, as shown in Table . In the first step, potential participants were sorted by their participant numbers from the previous quantitative study. They were placed into a matrix list based on five to six determining factors, with a sixth factor being considered for those living in rural areas. Considering the amount of information needed, that some people might not be reached and that some might decline participation, the matrix list included 69 of the 175 previous participants, with many categorised under the same factors. In the second step, the previous participant numbers and the names of potential participants were connected. Information recorded in the Registers Iceland database on a) social security number, b) place of living and c) a registered telephone number accessible through an open website was matched. This information could not be paired for 21 persons, leaving 48 on the list of potential participants. In the third and last step, 20 people on the list were contacted for participation. They all agreed, consented and were subsequently interviewed. All participants, 11 women and nine men, were born and raised in Iceland, except for one individual who, despite not being native, had resided in the country for decades. Their birth years ranged from 1926 to 1952, and the median age was 76.6 years. Seven had an elementary education, eight a secondary or trade school education and five a university degree. The main occupational fields were agriculture, education, trade, healthcare and homemaking. In three interviews, the spouse was present. In one case, the participant had early-stage Alzheimer’s disease, so, in cooperation with the couple, it was decided that the spouse would act as support, memory and voice. In the other two cases, both in rural settings, the spouse of the participant was present in the kitchen, where the interview was conducted, as part of the local culture. The spouses were not direct participants in the interview; however, they added information when, for example, asked to recall a process, names or times.
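The first selection step above (sorting candidates into a matrix by determining factors) can be illustrated with a short sketch; all column names and records below are hypothetical stand-ins, since the study's factor coding and register data are not reproduced here.

# Illustrative sketch of matrix-based purposive selection; hypothetical data.
import pandas as pd

candidates = pd.DataFrame([
    {"id": 1, "setting": "urban", "age_group": "65-74", "gender": "F",
     "education": "elementary", "transport": "car", "distance_km": 2},
    {"id": 2, "setting": "rural", "age_group": "75+", "gender": "M",
     "education": "university", "transport": "none", "distance_km": 38},
    {"id": 3, "setting": "rural", "age_group": "75+", "gender": "M",
     "education": "university", "transport": "none", "distance_km": 41},
    # ... remaining records from the previous study
])

# Five shared factors; rural residents additionally vary on distance from
# services, mirroring the sixth factor mentioned in the text.
factors = ["setting", "age_group", "gender", "education", "transport"]
matrix = candidates.groupby(factors)["id"].apply(list)
print(matrix)   # one cell per combination; sample across cells for variation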
Procedure Potential participants were sent an invitational letter by mail and were subsequently contacted by telephone (by SSG) and invited to participate. Data were collected over one year, from January 2022 to January 2023. SSG conducted all of the interviews in Icelandic at the participants’ chosen place. The interviews were audio-recorded and lasted from 30 to 65 min, with an average of 46 min. The recordings were deleted after transcription. An interview frame designed for this study was used, which included a) opening questions about the length of time living in their current area, main occupation and preferred pseudonyms for confidentiality; b) questions about HL, which asked the participants to describe their experience of the accessibility, clarity and usefulness of health-related information and services; and c) an opportunity to add any information. Data analysis The interviews were transcribed verbatim and analysed using content analysis as described by Graneheim and Lundman and Graneheim, Lindgren and Lundman . The method offers researchers different epistemological positionings with various levels of abstraction and degrees of interpretation, depending on the study aim and data quality. It is, for example, applicable when knowledge is believed to be socially constructed . As reflexivity was considered an essential part of the whole process, the analysis was conducted by the team of all authors. Although SSG and LM did most of the main work because of their pre-understanding of the research area and the method used for analysis, all the authors met in working meetings at each step of the analysis process, as described below. These meetings were used for reflection on the empirical data, the potential influence of preconceptions and the findings emerging from SSG and LM. In addition, reflections from AS, with lived experience of the matter, were sought at each step. Throughout the analysis process, work was carried out in Icelandic and English. Transcriptions were entered into the data management software NVivo 11 for data storage, organisation and coding. However, the team encountered difficulties in sharing information using the software, which resulted in the analysis being conducted manually in a Word document. In the first step, all the authors read the interviews several times to understand the content. The three Icelandic researchers read the material in their native tongue, and the Swedish researcher used an English Google-translated version. The interviews were discussed both as a whole and in specific parts, where meaning units and potential content areas were identified in consideration of the study’s objective. In the second step, meaning units were identified according to the aim of the study, condensed into descriptions close to the text while preserving the core meaning, abstracted and labelled with codes. Further abstraction occurred as subcategories and categories emerged from the condensed content based on patterns or commonalities. Similarities, differences and connections between and within the content were reflected upon and sorted. Constant comparison was used to clarify meanings, comparing data with codes and codes with codes. In the third and last step, further analysis took place when the descriptive content of the preliminary categories was formulated by going back and forth and checking consistency between the categories, their content and the empirical data.
The emerging core meanings were validated by contextualising the meaning units within the individual interviews and the data as a whole. Findings The 20 interviews provided insightful data to answer the purpose of the study, which was to explore the experiences and needs of older community-dwelling adults concerning accessing, understanding, appraising and using health-related information. Based on manifest content, four qualitative categories emerged from the experiences and needs of older community-dwelling adults. Each category is independent, yet interconnected with the others, as shown in Fig. . “Expectations for responsibility” describes the experience that the individual should be responsible for taking care of their own health, including accessing, understanding, appraising and using information and services, as well as showing initiative and keeping needed communications active. “A gap between expectancy and ability/context” includes experiences, while taking this responsibility, of the expectations not aligning with one’s own skills/situation. “Finding one’s own ways” comprises various adapted ways to access, understand, appraise and use information and services due to this misalignment between the expectations for responsibility and the individual’s ability or context. “Bridging the gap” describes experiences of needing shared responsibility and more manageable options to optimise reasoned health-related decisions and navigation through the healthcare system. Significant quotations are provided to illustrate the empirical foundations of the subcategories. Table provides information about the manifest content from the analysis. Expectations for responsibility This category describes participants’ experiences regarding the predominant expectation that each individual should carry the responsibility of taking care of their own health, including accessing, understanding, appraising and using health-related information, as well as showing initiative and keeping needed communications active. The category is divided into two subcategories based on how this expectation is described: directly from the person and indirectly from the information providers. The subcategories were named “Personal expectations” and “Environmental expectations”. Personal expectations This subcategory describes the experience that the person was expected to be responsible for their health and health-related matters—that is, to be their own health manager. The person should know best what they need and therefore be responsible for achieving, understanding, appraising and using information. This expectation was accepted as part of being independent and acknowledging the increase in general knowledge, making people more educated about health matters. By not taking responsibility as one’s own health manager, opportunities for health and welfare information might be lost, and then the person would be the only one to blame. “You get the information you need, you just look for it… so you have nothing to complain about but yourself” (if you have missed information) (Thorunn, 76-year-old woman). Environmental expectations The responsibility for taking care of one’s own health was also experienced as an unspoken expectation from health-related information and service providers, who often only deliver information if requested. In these circumstances, the individual needs to take the initiative to look for the information and services that are needed and relevant on each such occasion. If opportunities to manage one’s own health were lost, this was because of a lack of responsibility of the person rather than the information provider.
“I did not know… usually it is the case that you have to look for information” (Kara, 70-year-old woman). A gap between expectancy and ability/context This category describes participants’ experiences of being unable to live up to the expectation of taking responsibility to manage their health. The category is divided into three subcategories based on descriptions of different, although often interlinked, kinds of gaps between the expectancy and one’s own ability/context, named “Digitalisation gap”, “Personal contact gap” and “Navigation gap”. Digitalisation gap This subcategory describes the experiences of being unable to access and use information as expected, and to navigate within and between the health and welfare systems, because of the increased use of computers and the internet, that is, digitalisation. Although digital development was generally viewed positively, it was expressed that all the changes were happening so fast that many were left unable to keep up. For those needing more than general information or not having all the proper equipment or the ability, digital technologies were creating a significant gap in information and services. “You know, I am back from ancient times. I have no computer and no phone to Google and nothing, so I am completely… so many things that you cannot do unless you have a computer… all the information” (Dora, 96-year-old woman). Personal contact gap With the increased use of digital technology, there was also the experience of a decrease in personal and direct contact. This combination created an even wider gap between the expectations of taking responsibility and one’s own ability/context. This gap consisted of being unable to fully use the formal digital ways to access, understand, appraise and use information while, simultaneously, the conventional and valued forms of person-to-person contact were becoming limited. Being without a key person to contact within the health and welfare systems was described as being lost: not knowing what information to look for, where to look, and what options were current or applied to them. Contact with a key healthcare person was significant in the case of illnesses. Although generally satisfied with hands-on service, with no one knowing the health history and situation of the older person or the possibilities in the service system, there was no way to safely navigate or coordinate the necessary information and actions when needed. “… I need to get someone I trust. I do not want to end up with a new person in every conversation and say the same thing over and over and over again. After the fifth time, you think 100 times over whether to call again… Everyone wants to assist you, but can’t because they do not know you” (Hanna, 80-year-old woman). Navigation gap A gap in navigation while taking responsibility for accessing, understanding, appraising and using health-related information was experienced as a result of the general complexity within and between health-related services, particularly in knowing what information to find and where. This gap in navigation was fuelled by the digitalisation gap and the personal contact gap. Long, complicated and unclear communication channels, disconnection between organisations, and unclear service provision or division between entities, such as the local municipality and the state, often resulted in difficulties finding information or in some information getting lost along the way. These experiences were described as daunting: never being sure of being on the right navigation course, always having to show initiative and relying only on persistence not to give up.
Participants perceived health-related service systems as confusing, describing them as not made for the older service user and as driving them away. “…this is uncomfortable because you sometimes get the impression that the system does not care… and then you think you are somehow alone if something happens. Why isn’t it better? Maybe that is why senior citizens get the impression that you are a bit set aside. It is tiring always to have to push yourself somehow through” (Sigrun, 78-year-old woman). Finding one’s own ways This category describes the experiences of finding one’s own ways to live up to the expectancy of being responsible for accessing, understanding, appraising and using health-related information. It is divided into three subcategories based on descriptions of the different ways used to adapt: “Rely on oneself”, “Rely on spouse, family and friends” and “Use personal relationships”. Rely on oneself This subcategory describes the experience of relying on oneself while managing health-related information. Participants described using knowledge from former work experience in the health and welfare services, or from watching their parents age or even taking care of them, and building on information and services from that time. Having some idea about what services are available and where to start looking for further information was expressed. This includes having enough knowledge to know where to look for information and how the services operate, given that little has changed. “I know the operation (at the former workplace) well enough that I would look for the service, if I needed home care or something like that, I know how to do it” (Nina, 80-year-old woman). Rely on spouse, family and friends Finding one’s own ways based on relying on the spouse, family and friends regarding health-related information was also described. In situations in which the participant could not use computer technology fully or at all, but the spouse could, he or she was valued as essential and even as the reason for being able to continue living at home. Help from grown-up children or acquaintances was also mentioned, and participants considered themselves lucky to have people around to help, stating that this was not the case for everyone. “Our daughter … is extremely good at helping. I do not know what we would do if we lost touch with her” (Fannar, 72-year-old man). Experiences of getting information about available services, and where to turn in need, at gatherings organised by local senior non-governmental organisations (NGOs) were also described. Also, when getting together, friends shared information on where to turn in need along with hands-on experiences. Use personal relationships Some participants described using personal relationships or acquaintances with health and welfare professionals as, at times, necessary to access information or services. This required using informal rather than formal methods when no other means seemed possible. “He (the general practitioner) was always on vacation or busy or not reachable … so I called my son (who is a medical doctor) and said now you have to help me” (Dora, 96-year-old woman). Others described using personal connections regarding health-related information as common, especially in rural areas where “everybody knows everyone”. Considerations of being very lucky to have this kind of relationship and to be able to use this informal way were expressed. Bridging the gap This category describes the experience of what is needed to access, understand, appraise and use health-related information to be more able to take care of one’s own health. It is divided into two subcategories based on descriptions of different needs: “Shared responsibility” and “Manageable options”. Shared responsibility This subcategory describes the experience of needing shared responsibility by being provided with the necessary fundamental health-related information.
Although the participants accepted the expectation of being responsible for obtaining information as part of taking care of their own health (category “Expectations for responsibility”), their experiences also revealed that, to do so, fundamental knowledge of what information and services exist and are current is required. To find information about services, the person first needs to know what opportunities and resources are available. “I really expect this (information) to be handed to me when I reach the age, but not that I have to run after it” (Hanna, 80-year-old woman). Some of the fundamental information on health-related matters was described as being provided by local senior NGOs and was highly valued as such; however, at the same time, it was questioned who should be responsible for providing older adults with this information. Manageable options In addition to needing the provision of fundamental health-related information to bridge the gap, this subcategory describes the experience of what kind of information access is required and in what way it should be accessed. Manageable options refer to an accessible overview of opportunities and resources that are available and current, both locally and nationwide. A clear venue for quality, reliable information is necessary; this was possible during the COVID-19 pandemic, so there is a precedent. Getting valuable and more relevant information is also needed. “Why do you always have to be in such a terrible shape to get information and service? … I think there needs to be a little more about everyday things. If you’re taking care of yourself, advice is needed on the best way to do this” (Nina, 80-year-old woman). Access to information and services must align with diverse abilities/contexts. More options than mainstream digitalisation need to be available to access information and navigate through service systems. In rural areas, experiences of information being delivered more according to the ability and context of people were described; however, this seemed to rest on the personal decisions of the staff rather than an embedded system ideology. Manageable options also include offering in-person support for those who require more introduction, instructions or assistance when accessing, understanding, appraising and using information. The findings of this study among community-dwelling older adults revealed four separate, but interconnected, qualitative categories. The category “Expectations for responsibility” describes the experience that the person, the individual, should be responsible for taking care of their health, including accessing, understanding, appraising and using information and services. However, difficulties in doing so are revealed in the category “A gap between expectancy and ability/context”, which includes experiences of the expectations not being in line with one’s skills/situation while taking that responsibility. The consequence is that information gaps arise. The category “Finding one’s own ways” comprises the various adapted strategies used in response. Although accepting the expectation that the individual should be responsible for taking care of their health, the category “Bridging the gap” describes experiences of needing responsibility to be shared and more manageable options to optimise reasoned health-related decisions and navigation in the healthcare system. The category “Expectations for responsibility” is the base for the categories “A gap between expectancy and ability/context” and “Finding one’s own ways”.
It describes the responsibility that participants experience in accessing, understanding, appraising and using health-related information and services as part of being their own health managers. This view is fuelled by and intertwined with personal expectations and messages from the environment that seem to be part of social norms. This experience echoes, in a way, neoliberal ideology, with its economic and political focus on individualism and autonomy. It includes the idea that people should have the right and responsibility to make their own choices, which inevitably shapes healthcare delivery systems . In the Icelandic context, although the country is generally considered part of the Nordic welfare states, the rise and promotion of neoliberalism has shaped the economy and politics of health and welfare since the late 1970s . Although the findings from this study indicated that the participants accept the expectation of being responsible for their health and value being their own health managers, they also revealed that this expectation was often not in line with their skills/situations. The result was the emergence of information gaps limiting their options to be responsible and make informed health decisions. The category “A gap between expectancy and ability/context” describes three interlinked subcategories: “Digitalisation gap”, “Personal contact gap” and “Navigation gap”. Numerous studies have reported challenges among older adults in participating in or benefiting from the growing digitalisation, known as the “digital divide” or “grey digital divide” . Research findings from Iceland also highlight this information gap. Palsdottir has reported an increased frequency of online health information seeking among people 68 years and older from 2002 to 2012. However, the usefulness of that information, including websites run by the healthcare system or health specialists, did not increase. A study on perceived barriers to health information among people 60 years and older also reveals hindrances in the availability of information and the ability to seek and find it . This divide is considered to marginalise older adults, among other groups, who are most likely to become excluded from the benefits of digital technologies . One solution to the digital transformation that healthcare is undergoing , and in line with the expectation of individual responsibility, would be to modify HL through interventions aimed at strengthening the digital skills or competencies of individuals through education. However, Bittlingmayer and Sahrai drew attention to what might happen if increased education is challenging to manage – for example, in the case of disability. Although older adults are a heterogeneous group, this perspective could be reflected, for instance, in the normal ageing process. Lifelong learning should always be an option; however, how health services meet the complex needs of people as their own health managers needs to be addressed. Another angle regarding these experienced information gaps is that limited HL has generally been related to lower levels of education . In this study, however, the community-dwelling participants’ education level was relatively high, with most having a secondary or university degree. Perhaps this echoes the dynamic and complex interaction between various personal and environmental contextual factors acting and interacting as barriers to or facilitators of HL. Because of the gaps experienced in health-related information, the participants needed to adapt and find other ways to manage.
They did this by, for example, relying on people close to them, like spouses, children, grandchildren and friends, as described in the category “Finding one’s own ways”. Concerning this adaptation, the resilience and resourcefulness of the participants seem to play an important role; it also identifies the importance of social connections or networks. Making reasoned health-related decisions and navigating the healthcare system can, therefore, depend on having someone in one’s life willing and able to help. Findings from this research indicated that, in some cases, this support is provided by healthcare staff, even when not on the job. The importance of relatives as an adapted or alternative strategy while dealing with health information was one of four main themes in a study among socioeconomically disadvantaged adults in Switzerland . This supports the importance of social connection and support for those experiencing health-related information gaps. Although matters of caregiving and the share of informal/unpaid carers, often female family members or friends , are beyond the scope of this research and will not be addressed further, the effects of individualism and healthcare delivery systems’ expectations of responsibility cannot be underestimated. The local senior NGOs in Iceland also seem to play an important role in providing relevant health-related information, for example about rights and available services. However, to participate in the gatherings these organisations provide or to receive most of the information, people must become members and pay an annual fee . In the category “Bridging the gap”, the participants ask for two things to make bridging the experienced information gaps easier, presented in the subcategories “Shared responsibility” and “Manageable options”. Although the findings from this study indicated that the participants accept the expectation of responsibility for their own health, they also revealed a contradiction: namely, without knowing what information and services exist and are current, this expectation can sometimes be hard, or even impossible, to live up to. Shared responsibility in providing older adults with more fundamental health-related information seems a vital preliminary step for them to access, understand, appraise and use information. Access to healthcare is generally considered a multifaceted concept consisting of the interaction between the accessibility of services and the abilities of people . One of the five identified dimensions of accessibility is the approachability of services. This dimension includes making services known and reachable to individuals, along with the necessary individual skills to identify the need for these services . Yet again, the complexity of HL is brought to light, and the question is raised as to where the line between individual and service responsibility is drawn and, more importantly, who decides. Organisational health literacy (OHL) is an evolving concept , especially in the wake of COVID-19, which has transformed healthcare services . It is described as an effort to transform health-related services to make it easier for people to navigate, understand and use information and services to look after their own health, and to address the implementation of policies, practices and systems. This concept underpins the idea that HL does not merely depend on the abilities of individuals .
Neoliberal policies, with their emphasis on economic value, have often been criticised for negatively impacting access to healthcare by not addressing the structural disadvantages experienced by certain population groups . Furthermore, these policies are considered to contribute to the negative viewing of individuals who are not in the workforce, perceiving them as potentially financially burdensome . Also, older adults may experience decreased functioning over time due to the natural ageing process. This decline can affect their ability and capacity to access, understand, appraise and use health-related information. Focusing on individual responsibility can lead to compromised access to and use of information and services. Therefore, the effect of these policies on fuelling ageism, in viewing older adults as a burden, must be considered. Ageism has been estimated to cost societies vast amounts , and in the United Nations action plan Decade of Healthy Ageing 2021–2030 , one of the identified areas for action is connected to changing negative views of and actions towards age and ageing. The findings from this study indicated that, within the category “Bridging the gap”, the participants not only require shared responsibility in the form of being provided with fundamental health-related information. The subcategory “Manageable options” also reflects a request for information that is approachable, acceptable, appropriate and available. For example, this study indicates that older adults have a generally positive view of digital development as a part of the future. This finding is also reflected in a study on technology use for health information based on a randomised sample of older Icelanders . However, it seems to be an issue of design, delivery, instructions and support, bringing us to service user participation and inclusion. One of the identified areas for action in the United Nations action plan Decade of Healthy Ageing 2021–2030 aims at enabling older people to continue to do the things that they value and supports the inclusion of their voices not only as service beneficiaries but also as agents of change. The focus is on the abilities of older people and person-centred integrated care and primary health services. Brach et al. introduced the 10 attributes of a health-literate healthcare organisation (HLHCO). The attributes are based on the OHL concept to deliver person-centred healthcare and tackle system-level factors, enabling people to access, understand, appraise and use health-related information. One of the 10 attributes emphasises the importance of including the voices of consumers in the design process, implementation and evaluation of health information and services . This specific attribute, engagement and support of service users, has been recognised as one of the most prevalent topics of OHL . Furthermore, in a framework for strengthening the health system’s capacity regarding HL, one of the eight suggested action areas focuses on people-centred services based on user engagement and enabling environments . In this study, the participants indicated that they value being their own health managers and take full responsibility for accessing, understanding, appraising and using health-related information, as expected, as part of social norms. However, the lack of options to fulfil this expectation implies that healthcare delivery systems do not always meet the needs of older adults to act on it.
Strengths and limitations This qualitative exploratory study aimed to gather information about the experiences and needs of older community-dwelling adults. One of its strengths is that it gives older adults living at home a platform to be heard. By purposefully selecting potential participants with different backgrounds regarding place of living, age, gender, education, means of transport and distance from services, variation in experiences was sought. The generalisability of the results is limited by the participants being restricted to Northern Iceland and sharing similar cultural backgrounds. It should, however, be kept in mind that close similarities may exist between Iceland and other northern geographical areas of the world where the culture is labelled Western. The possible effects of having a spouse present during three of the 20 interviews must be mentioned. Their presence was considered culturally relevant in rural areas in the sense of greeting visitors at home. The spouses also acted as a support and facilitated communication, such as for one participant with early-stage Alzheimer’s disease. Memory loss is most often a reason for exclusion from research. However, gender roles and the power balance between couples must be considered, as these might have affected the conversations. One interview took place via Zoom. While this may seem at odds with our main findings, older adults’ technological skills vary. In times of often strict confinement and isolation of older people during the COVID-19 pandemic, the wish of this participant to meet on Zoom could be met by preparing the interview setting well. Clear categories emerged based on evident patterns, consisting of direct content with minimal interpretation, remaining close to the original text. In content analysis, the researcher must know the context. Having four interdisciplinary researchers with the stated expertise partake in the data analysis process contributed to the credibility of this research. Although two researchers conducted the primary analysis, regular meetings with all authors at every step of the process were used for reflection on possible preconceptions and on the consistency between the empirical data and the emerging categories and their content. Including a senior citizen with lived experience on the research team further enhanced the credibility of this research. However, the involvement of an older adult in the earlier stages of the research is an aspect for consideration in future studies. Working with data in both Icelandic and English can be both a strength and a limitation: a strength regarding reasonability and accuracy, as a thorough evaluation of the meaning and use of words took place during the translation process, and a limitation in the sense of possibly misrepresenting the participants’ expressions when translating from Icelandic to English, although the fact that three of the four researchers are fluent in both languages should minimise that risk. The participants in this study experienced expectations of being responsible for accessing, understanding, appraising and using health-related information as part of acting as their own health managers. Although they valued and accepted these expectations, limitations in living up to them were revealed because such expectations were often not in line with their skills/situations, despite a relatively high education level. Information gaps therefore arise due to digitalisation, limited personal contact and the general complexity of navigating within and between health-related services.
Therefore, approachable fundamental health-related information, current and quality-checked, and inclusive service opportunities are needed to bridge the resulting gaps. It is necessary to critically address the possible influences of politics regarding views on individual responsibility at a systemic level in matters of health and HL. Such action should analyse if and how those principles shape attitudes, social norms and health services, and confront structural disadvantages experienced by population groups. Access to information and services must be viewed beyond availability and include approachability, acceptability and appropriateness for service users with various abilities and contexts. The findings from this study reflect participants’ experiences of bearing most of the responsibility as their own health managers while simultaneously having limited choices in acting on it. Policymakers are therefore encouraged to develop services that enable older adults to make reasoned decisions about health and navigate healthcare services in an effective way.
Citation bias in otolaryngology systematic reviews
5854e3da-907f-4ec3-a10b-5c8f4ea53e74
7772969
Otolaryngology[mh]
Systematic reviews (SRs) use comprehensive methodologies to summarize a body of evidence on a clinical topic and, when meta-analysis is appropriate, produce a pooled effect estimate for the included primary studies . Well-conducted SRs are preferentially considered by guideline development panels when weighing evidence for recommendations . While many aspects of the SR process may lead to bias, among the most important steps is the systematic search to locate eligible studies, which can lead to sampling or selection bias if the studies retrieved during the search process do not represent the population of available studies . One particular practice—hand-searching reference lists for additional studies—may locate additional studies outside of the systematic search. However, according to the Cochrane Handbook for Systematic Reviews of Interventions, hand-searching reference lists of included studies may lead to the selective inclusion of statistically significant studies with effect sizes similar to other published studies retrieved from database searching . In plain terms, hand-searching reference lists may result in exaggerated SR effect estimates. Consider a hypothetical SR in which a comprehensive database search has been conducted. The SR authors may choose to conduct a supplemental search (e.g., a search that complements a database search) to identify additional studies that are relevant to the SR topic. A popular method of supplemental searching is to scan reference lists of studies that are included in the SR , despite little evidence to support the practice. Scanning reference lists for potentially relevant studies may increase the number of studies included in the SR but is associated with significant methodological concerns. For one, authors are known to cite studies in an unbalanced manner. One primary motivation for citing studies is to convince readers that one's point of view is correct . Moreover, studies with statistically significant results are cited more often than those with nonsignificant or null findings . Ravnskov reported that trials of cholesterol lowering to prevent coronary heart disease were cited six times more often if their results supported lowering cholesterol . Thus, hand-searching references may bias SR summary effects in a unidirectional manner. Vassar et al. found that supplemental search methods such as a hand-search of medical journals are less biased because they are more likely to retrieve a balanced cohort of studies (e.g., a range of effect sizes and directions), although the published literature is likely biased toward positive results and significant effects . However, the Cochrane Handbook recommends hand-searching as a useful adjunct to searching electronic databases because not all trial reports are included in electronic databases or include relevant or easily identifiable search terms in their titles or abstracts . To date, there have been few studies examining the extent of hand-searching reference lists in SRs. To address this gap, the authors investigated a broad sample of SRs from one area of medicine—otolaryngology—and quantified the number of SRs that hand-searched references. We also examined whether additional, less biased types of supplemental searching, such as hand-searching journal issues or trial registries, were conducted.
Moreover, we compared the rates of hand-searching reference lists in SRs that mentioned adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, because PRISMA is associated with higher-quality SRs . Last, we investigated whether different funding sources were associated with increased rates of hand-searching reference lists. We identified SRs and meta-analyses published from January 1, 2008, to December 31, 2017, in the top nine clinical otolaryngology journals based on their H-indexes. This time parameter was chosen to allow an analysis of a ten-year cross-section of SRs, which was deemed sufficient to draw conclusions about the rates of hand-searching. A PubMed search (which includes MEDLINE) was performed by one author using a procedure based on one that was sensitive to identifying SRs and meta-analyses but with modifications to account for recent changes to PubMed indexing. We also included search terms for “meta-regression,” which sometimes appears in the titles of SRs and meta-analyses. The journals included in the PubMed search were: American Journal of Otolaryngology—Head and Neck Medicine and Surgery, Clinical Otolaryngology, Current Opinion in Otolaryngology & Head and Neck Surgery, International Journal of Otolaryngology, JAMA Otolaryngology—Head & Neck Surgery, Journal of the Association for Research in Otolaryngology, Journal of Otolaryngology—Head & Neck Surgery, The Laryngoscope, and Otolaryngology—Head and Neck Surgery. The exact search strategy, used on December 8, 2017, was: (“JAMA Otolaryngol Head Neck Surg” [Journal] OR “Otolaryngol Head Neck Surg” [Journal] OR “J Assoc Res Otolaryngol” [Journal] OR “Clin Otolaryngol” [Journal] OR “Curr Opin Otolaryngol Head Neck Surg” [Journal] OR “Am J Otolaryngol” [Journal] OR “Int J Otolaryngol” [Journal] OR “J Otolaryngol Head Neck Surg” [Journal] OR “Laryngoscope” [Journal]) AND (metaanalyses [Title/Abstract] OR meta-analysis [Title/Abstract] OR “meta analyses” [Title/Abstract] OR metaanalysis [Title/Abstract] OR “systematic review” [Title] OR meta-regression [Title] OR metaregression [Title] OR meta-analysis [Publication Type]) AND (“2008/01/01” [PDAT] : “2017/12/31” [PDAT]) In 2018, PubMed added a new feature that allowed SRs to be searched as a publication type. This was not included in our search because our search predated the update . In addition to our PubMed search, we electronically searched the Cochrane Library using the EBSCOhost platform for Cochrane otolaryngology SRs on December 19, 2017. For this search, we used the same date parameter and filtered the results to only SRs published by the Cochrane Ear Nose and Throat group. Studies retrieved from the database searches were imported into and housed in Rayyan , an online article screening platform designed for systematic reviewers. Two authors independently screened all references for inclusion and exclusion while remaining blinded to each other's responses. Discrepancies were resolved by group discussion, and duplicates were removed. The inclusion criterion was SRs published in the journals that we searched. We defined an SR according to the PRISMA-P definition . The following elements were extracted from each SR by two independent authors who remained blinded to each other's responses: whether reference lists were hand-searched (yes/no), other kinds of supplemental searching (e.g., search of trial registries), mention of adherence to PRISMA guidelines (yes/no), and funding source.
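As an aside on reproducibility, a search strategy like the one reported above can be executed programmatically, for example with Biopython's Entrez wrapper around the NCBI E-utilities. The snippet below abbreviates the journal and term lists for space and uses a placeholder e-mail address, so it is a sketch of the approach rather than the authors' actual procedure.

# Sketch of running an abbreviated version of the reported PubMed query via
# Biopython's Entrez interface; the full strategy lists all nine journals.
from Bio import Entrez

Entrez.email = "reviewer@example.org"   # placeholder; NCBI requires an address

query = (
    '("JAMA Otolaryngol Head Neck Surg"[Journal] OR "Laryngoscope"[Journal]) '
    'AND (meta-analysis[Title/Abstract] OR "systematic review"[Title]) '
    'AND ("2008/01/01"[PDAT] : "2017/12/31"[PDAT])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=1000)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:5])   # hit count and first PMIDs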
Following extraction, these two authors met to review discrepancies and achieve consensus. Stata 15.1 (StataCorp) was used to fit a penalized logistic regression model, rather than maximum likelihood, because some predictor variables had low event rates. Prior to analysis, we conducted regression diagnostics, including the variance inflation factor, to evaluate for multicollinearity among predictors. All variance inflation factors were in satisfactory ranges and indicated no sign of collinearity. Our regression was designed to investigate the association of adherence to PRISMA (yes/no), Cochrane SR status (yes/no), and funding source (industry, government, private, hospital/university, mixed, none) with hand-searching reference lists. The variables included in the model were chosen to answer whether reporting guidelines and more stringent methodological requirements (i.e., Cochrane status and funding source) were associated with rates of hand-searching. Our search yielded 587 articles from PubMed and 39 articles from the Cochrane Database of Systematic Reviews. Of these 626 articles, 554 were included from our initial screen. A total of 15 were excluded (including 2 duplicates), and 539 were included for analysis: 502 from clinical otolaryngology journals and 37 from the Cochrane Library ( ). Of the 539 included SRs, 208 (38.6%) mentioned adherence to PRISMA guidelines. The majority of SRs were either not funded or did not provide a funding disclosure statement (433/539, 80.3%). Of the SRs that mentioned a funding source, the most common source of funding was public entities (e.g., government) (49/106, 46.2%). Overall, 72.4% (390/539) of SRs hand-searched reference lists, including 97.3% (36/37) of Cochrane reviews. For 228 (58.5%) of the SRs that hand-searched reference lists, no other supplemental search (e.g., search of trial registries) was conducted. There were 162 studies (30.1%) that searched a database, conducted hand-searching, and used other supplementary search methods. No SRs listed the exact articles that were retrieved from a hand-search of reference lists. Logistic regression did not reveal any reliable, statistically significant associations between review characteristics and the practice of hand-searching reference lists ( ). Our results indicate that including studies from reference lists is a methodologically accepted and common practice in otolaryngology SRs, including Cochrane SRs. For the majority of SRs in which reference lists were hand-searched, no other supplemental search was conducted. Many SRs did not specify the articles whose reference lists were searched, which might inhibit the reproducibility of their findings. The implication of these findings is that the summary effects of otolaryngology SRs might be biased toward statistically significant findings. Similar findings exist in the field of dermatology . Hand-searching reference lists is a known source of bias for SRs . This form of bias is easily mitigated by adjusting supplemental search strategies. A previous study looking at complex interventions described the time-intensiveness of SR searching . In that study, the database search took 2 weeks and returned only 35% of the articles included in the final SR sample. Comparing the time invested against the number of articles returned by hand-searching references, by which 41% of the included articles were identified, the authors concluded that database searches might yield fewer results while requiring significantly more time investment.
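The model above was fit in Stata; as a hedged illustration of the same workflow in Python, the sketch below computes variance inflation factors and then fits an L2-penalized (ridge) logistic regression on simulated covariates. The ridge penalty is not identical to the penalization used in Stata, and the data are placeholders, so the coefficients are illustrative only.

# VIF diagnostics plus an L2-penalized logistic fit on simulated covariates;
# a stand-in for the Stata analysis, not a reproduction of it.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 539                                   # number of included SRs
X = pd.DataFrame({
    "prisma": rng.integers(0, 2, n),      # mentioned PRISMA adherence
    "cochrane": rng.integers(0, 2, n),    # Cochrane review status
    "funded": rng.integers(0, 2, n),      # any reported funding (simplified)
})
y = rng.integers(0, 2, n)                 # hand-searched reference lists

Xc = np.column_stack([np.ones(n), X.to_numpy()])   # intercept for VIF
vifs = [variance_inflation_factor(Xc, i) for i in range(1, Xc.shape[1])]
print(dict(zip(X.columns, vifs)))         # values near 1 suggest no collinearity

model = LogisticRegression(penalty="l2", C=1.0).fit(X, y)
print(dict(zip(X.columns, model.coef_[0])))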
The authors of that study further stated that hand-searching reference lists was “especially powerful for identifying high quality sources in obscure locations,” which might be true. However, they did not discuss the quality of the articles included from hand-searches of reference lists, nor did they discuss the results of the articles identified from both database sources and hand-searches of reference lists. Given the baseline knowledge that studies are cited most often to reinforce a study's findings and that studies with statistically significant findings are more likely to be cited , it is possible that such authors could lead future readers to introduce citation bias into their SRs, even though they have used numerous search methods—database searching, hand-searching of journals, hand-searching of references, and others—to collect a sample of articles. Thus, we would have preferred to see the authors recommend that readers emulate their methods because those methods are likely to gather a diverse set of articles, with diverse effect sizes, in multiple directions. Given our findings, we recommend a reevaluation of standard search methods in otolaryngology SRs. The predominant search combination was an electronic database search plus a hand-search of included articles' reference lists. Employing robust search strategies can be time-intensive. Moreover, a Cochrane SR investigating the effectiveness of hand-searching references found that all included studies had a high risk of bias, indicating that no robust data existed to support the practice . Despite that, the Cochrane review authors concluded that hand-searching reference lists might be appropriate in specific circumstances, although these circumstances might be difficult to identify. While the Cochrane Handbook mentions the practice of hand-searching references, a Cochrane review questions this practice and instead recommends multiple kinds of supplemental searching . Based on our findings, we encourage systematic reviewers to move away from hand-searching of reference lists because of the potential bias that this creates. However, hand-searching is not necessarily an ineffective method and may be used in concordance with other search methods. Furthermore, we build upon previous work by providing the following recommendations. First, a complete SR search strategy should be established a priori . Second, if supplemental searches are deemed necessary, we recommend that authors carefully weigh the benefits and risks of all possible supplemental search methods (e.g., search of trial registries, hand-search of references, hand-search of journals) . Third, we recommend that, when authors weigh the pros and cons of supplemental search methods, they adhere to robust guidance, like the Cochrane Handbook, rather than experience and popular or known methods . Last, if a supplemental search is conducted, regardless of its type, we recommend that authors disclose which articles were retrieved using these supplemental methods and conduct a sensitivity analysis that removes these articles, to quantitatively demonstrate the influence of articles retrieved from a supplemental search on the summary effect .
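To make the final recommendation concrete, here is a minimal sketch of such a sensitivity analysis under a fixed-effect inverse-variance model; the effect sizes and hand-search flags are invented, and a real analysis should match the meta-analytic model of the original SR.

# Sensitivity analysis: recompute the pooled estimate after removing studies
# located only by hand-searching reference lists; all numbers are invented.
import numpy as np

effects = np.array([0.42, 0.38, 0.55, 0.61, 0.20])    # e.g. log odds ratios
variances = np.array([0.04, 0.05, 0.03, 0.06, 0.08])
from_handsearch = np.array([False, False, True, True, False])

def pooled(eff, var):
    """Fixed-effect inverse-variance pooled estimate and standard error."""
    w = 1.0 / var
    return np.sum(w * eff) / np.sum(w), np.sqrt(1.0 / np.sum(w))

print("all studies:   %.3f (SE %.3f)" % pooled(effects, variances))
print("database only: %.3f (SE %.3f)" % pooled(effects[~from_handsearch],
                                               variances[~from_handsearch]))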
Novel duck reovirus exhibits pathogenicity to specific pathogen-free chickens by the subcutaneous route
22c01183-0fc5-4590-881e-088f5dce40b3
8175558
Histology[mh]
Avian reovirus (ARV), belonging to the genus Orthoreovirus, family Reoviridae, infects chickens, turkeys, ducks, geese and other birds , . The virus exists all over the world, and current ARVs mainly cause three different pathogenic types of disease in China . Chicken viral arthritis and stunting syndrome caused by chicken reovirus (CRV) mainly result in arthritis, tenosynovitis and retarded growth . Muscovy ducks infected by Muscovy duck reovirus (MDRV) mainly show many yellowish-white or white necrotic foci in the liver, sometimes accompanied by bleeding points; the disease is therefore commonly known as flower liver disease , . The spleen necrosis disease of ducks and geese caused by a new type of duck reovirus (NDRV) mainly presents as haemorrhage and necrosis in the liver and spleen – . The diseases caused by the three types of ARV differ considerably in epidemiology, clinical signs and pathological changes, as well as in the genome sequences and serology of the three pathogens . NDRV mainly causes the death of ducklings; no clinical symptoms have been observed in infected adult ducks. The disease first appeared in 2005 in ducks in some southern Chinese provinces, such as Fujian, Guangdong and Zhejiang , . The disease soon spread throughout the main duck-farming areas of China and has become a common and frequently occurring disease. The onset age was generally 5–25 days, especially 7–14 days; the morbidity was 5–35% and the mortality 2–20%. In general, the younger the ducks at onset, the higher the morbidity and mortality , . The virus can cause spleen necrosis and pathological injury of the bursae, which can result in immunosuppression and a tendency toward secondary bacterial infections – . Thus, the disease is much more difficult to control and frequently results in a higher mortality rate among diseased ducks. The disease has caused great economic losses for the Chinese duck industry. NDRV can infect Cherry Valley ducks, shelducks, Muscovy ducks, mule ducks, geese and other waterfowl species . It has been reported that NDRV can infect chicken embryos and cause obvious lesions of the liver, spleen and bursae . However, the pathogenicity of NDRV to chickens has not been reported. In this study, we explored the pathogenicity of the virus to chickens by inoculating NDRV subcutaneously into 3-day-old SPF chickens, which could lay the foundation for better prevention and control of the disease. Detection of the serum neutralizing antibody As seen in Fig. , chickens infected with NDRV developed neutralizing antibodies by 7 dpi (SN antibody titre > 10). From 7 dpi, the neutralizing antibody titres rose continuously, reaching a much higher level at 14 dpi. No positive neutralizing antibody titres were detected in chickens in the control group (SN antibody titre < 10). Clinical symptoms and body weight changes Forty 3-day-old SPF chickens in the experimental group were inoculated with 100 μL of allantoic fluid (10^5.00 ELD50/0.1 mL) of NDRV by the s.c. route. The chickens in the experimental group began to exhibit depression, reluctance to move, introflexion of the claws (Fig. A) and performing of splits (Fig. B) at 3 dpi. Five chickens died from 3 to 5 days after infection, and the other chickens presented with stunting syndrome (Fig. ).
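The inoculum titre quoted above (10^5.00 ELD50/0.1 mL) implies a prior end-point titration in embryonated eggs. The Reed-Muench method is one standard way to compute such a titre; the sketch below uses invented mortality counts, as the paper does not report its titration data or method.

# Reed-Muench ELD50 calculation on hypothetical egg-titration data.
# dilution exponent -> (dead embryos, inoculated embryos)
titration = {-3: (5, 5), -4: (4, 5), -5: (3, 5), -6: (1, 5), -7: (0, 5)}

# Accumulate deaths toward stronger doses and survivors toward weaker doses
cum_dead, cum_alive = {}, {}
s = 0
for d in sorted(titration):                  # -7 ... -3
    s += titration[d][0]
    cum_dead[d] = s
s = 0
for d in sorted(titration, reverse=True):    # -3 ... -7
    s += titration[d][1] - titration[d][0]
    cum_alive[d] = s

rates = {d: cum_dead[d] / (cum_dead[d] + cum_alive[d]) for d in titration}
above = min(d for d in titration if rates[d] >= 0.5)   # weakest dose >= 50%
below = above - 1
prop = (rates[above] - 0.5) / (rates[above] - rates[below])
log10_eld50_dilution = above - prop
print("titre = 10^%.2f ELD50 per inoculated volume" % -log10_eld50_dilution)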
Ten chickens were randomly selected from each group and weighed every four days for 14 days. The body weight of the experimental group was significantly lower than that of the control group. The mean body weight was reduced by 7.73–25.64% relative to the control group (Fig. ). From 10 dpi, body weight gain started to recover, and drinking and eating gradually returned in the experimental group. Chickens in the control group did not show any clinical signs. At 3, 6 and 9 dpi, five chickens from each group were euthanized with carbon dioxide. Gross lesions are summarized in Table . In detail, all infected chickens at 3 and 6 dpi, as well as 4/5 chickens at 9 dpi, showed severe lesions in the liver and spleen. Hepatomegaly and brittleness were seen in the liver, with many yellowish-white focal necroses of variable size on the surface or in the parenchyma (Fig. C). The spleen was swollen and haemorrhagic, with multifocal yellowish-white necroses, up to 3 mm in diameter, on the surface or in the parenchyma (Fig. D). No gross lesions were found in other organs of the experimental group or in any organ of the control group. Microscopic lesions are summarized in Table . At 3 dpi, four out of five (4/5) chickens showed mild interstitial pneumonia with inflammatory cell infiltration and haemorrhage (Fig. A). Moderate interstitial pneumonia was observed in the lungs of 4/5 infected chickens at 6 dpi (Fig. B). Mild congestion and lymphocyte infiltration were observed in 4/5 chickens at 9 dpi (Fig. C). In the control group, the detection of erythrocytes in parabronchi and air capillaries was attributed to the killing and sampling procedure (Fig. D). Livers of all infected chickens at 3 dpi showed severe hepatocyte steatosis and necrosis (Fig. E). At 6 dpi, hepatocyte degeneration, necrosis and inflammatory cell infiltration were more severe, and focal bleeding was clearly observed (Fig. F). At 9 dpi, inflammatory cell infiltration was mild (Fig. G). No lesions were found in livers of the control group (Fig. H). Hearts of 4/5 infected chickens showed moderate microscopic lesions at 3 dpi, and the lesions began to improve at 6 and 9 dpi. At 3 dpi, there was moderate inflammation, and cardiomyocytes showed cytoplasmic granularity (Fig. I). Mild inflammatory cell infiltration was found in 4/5 infected animals at 6 dpi (Fig. J) and in 3/5 chickens at 9 dpi (Fig. K). No microscopic lesions were observed in the hearts of control chickens (Fig. L). No obvious microscopic lesions of the brain were observed in the infected chickens (Fig. M–O) or the control group (Fig. P). In the spleen, all infected chickens showed severe lymphocyte depletion, haemorrhage and necrosis at 3 dpi (Fig. a). At 6 dpi, many splenic lymphocyte nuclei became pyknotic, and severe focal necrosis was observed (Fig. b). Lesions were less pronounced at 9 dpi, with moderate lymphocyte necrosis and haemorrhage (Fig. c). No microscopic lesions of the spleen were observed in the control group (Fig. d). Lesions were not obvious in the duodenum at 3 dpi (Fig. e). Duodenums of four infected chickens all showed detachment of the mucosal epithelium at the tips of the villi at 6 dpi and 9 dpi (Fig. f,g). No microscopic lesions of the duodenum were observed in the control group (Fig. h). The bursa of all infected chickens showed microscopic lesions at 3, 6 and 9 dpi. Lymphocyte depletion was observed at 3 dpi (Fig. i).
Infiltrating heterophil granulocytes were increased in the cortex at 6 dpi (Fig. j), and more heterophil granulocytes were infiltrating at 9 dpi (Fig. k). Microscopic lesions of the bursa were not present in the control group (Fig. l). In the kidney, tubules showed swelling and cytoplasmic granularity at 3 dpi in 4/5 animals (Fig. m). Moderate alterations, including inflammatory cell infiltration, were observed in 4/5 chickens at 6 dpi (Fig. n). At 9 dpi, cytoplasmic granularity was mild in 4/5 animals, with few inflammatory cells observed (Fig. o). No microscopic lesions of the kidney were observed in the control group (Fig. p). Viral antigens in the tissue sections were stained brown in the immunohistochemical examinations at 6 dpi. In the liver, positive immunohistochemical staining signals were mainly distributed in the hepatocyte cytoplasm and nuclei. The nuclei of red blood cells within the hepatic sinusoids were also positively stained (Fig. A). In the brain, positive staining signals were widely distributed in the cytoplasm of neurons, and some nucleoli were positively stained (Fig. C). In the spleen, positive signals were widely distributed in the red pulp, localized to the nuclei of splenocytes (Fig. E). In the kidney, strong positive signals were mainly focused on the cytoplasm and nuclei of the renal tubular epithelial cells; some nuclei were strongly stained, and the glomerular podocyte cytoplasm also showed positive reactions (Fig. G). No obvious positive signals were observed in the control groups (Fig. B,D,F,H). Very slight background staining was observed in the control sections of liver, brain and kidney (Fig. B,D,H). Viral loads were detected in the heart, liver, spleen, lung, kidney, brain, intestines and bursae of infected chickens by a SYBR Premix EX Taq assay. As shown in Fig. , the viral loads in the liver, spleen, lung, kidney and intestines peaked at 3 dpi. In particular, the viral loads in the liver, spleen and lung were much higher than those in the other tissues. The level of viral RNA in the different tissues began to decline from 6 dpi, but the viral loads in the liver and spleen remained high before 6 dpi, consistent with the mortality of chickens infected with NDRV. The level of viral RNA in the bursae peaked at 6 dpi and was maintained at a high level. Viral loads in the brain always remained low. No viral RNA was detected in the control group. NDRV infection has become a common disease in the Chinese duck industry. Although its mortality rate is not high, the disease can cause spleen necrosis and immune suppression, which can develop into serious secondary infections and growth retardation in ducks. More importantly, NDRV can spread both horizontally and vertically, and the offspring of infected breeder ducks can readily transmit the virus and cause disease. NDRV infection is therefore much more difficult to prevent and control in practical production. NDRV can infect different types of ducks and cause disease, but the pathogenicity of NDRV to chickens had not been studied until now. This study focused on the clinical symptoms, pathological changes, viral RNA expression and serum antibodies of infected chickens to further evaluate the pathogenicity of NDRV to chickens.
It has been reported that NDRV can infect 10-day-old chicken embryos by allantoic cavity inoculation; hatching was delayed, and the embryos had obvious necroses in the liver and spleen. In this study, 3-day-old chickens infected with NDRV subcutaneously exhibited body weight loss, introflexion of the claws, a splay-legged 'splits' posture and, in some cases, death. The most typical gross lesions included swelling, brittleness and yellowish-white focal necroses in the liver and spleen. These experimental data were consistent with the symptoms and lesions of ducks infected with NDRV. In previous studies, lymphocyte depletion was observed in most tissues of ducks infected with NDRV. In this study, similar lesions were also found in different chicken organs. For example, inflammatory cell infiltration was severe in the liver, while lymphocyte depletion was obvious and typical in the spleen and bursa. The spleen and bursa are very important immune organs in poultry, and pathological damage to them can lead to immunosuppression. In particular, the bursa plays an important role in inducing B lymphocytes to differentiate and mature, so lymphocyte depletion there can lead to immune dysfunction. In addition to the liver, spleen and bursa, other tissues also showed varying degrees of pathological change, such as lymphocyte infiltration with congestion in the lung, cytoplasmic granularity of cardiomyocytes with lymphocyte infiltration in the heart, and cytoplasmic granularity in the renal tubules. Most studies of the pathogenicity of NDRV have focused mainly on the immune organs of poultry, and other organs have rarely been examined. In this study, however, most of the organs of the chickens developed severe lesions, in agreement with the pathogenicity of NDRV in ducks. It has been reported that brain lesions in ducks are slight; in this study, no lesions were observed in the chicken brains. This suggests that the tissue tropism of NDRV for the brain is very weak and that there are differences between chickens and ducks. In this study, the viral RNA of most tissues reached the highest level at 3 dpi, and the level of viral RNA expression began to decline from 6 dpi. These data suggest that the disease progressed rapidly, consistent with the deaths among diseased chickens being concentrated before 6 dpi. The results were also consistent with the dynamic changes in viral loads in duck tissues. In this study, pathological changes in the spleen and liver were much more serious at 6 dpi, closely related to the viral loads being maintained for a longer time in these tissues. The viral loads of the liver and spleen were much higher than those in other tissues and were maintained for a longer period, consistent with the severe and obvious lesions in these two organs. Compared to other tissues, the viral loads of the bursae peaked at 6 dpi and remained high until 9 dpi. These data indicate that the liver, spleen and bursae of chickens could be target organs of NDRV. The viral loads of the brain always remained low, consistent with the pathological characteristics of the brain. In this study, the chickens produced neutralizing antibodies to NDRV by 7 dpi. This finding indicates that NDRV can infect chickens subcutaneously and induce neutralizing antibodies.
This study indicates that chickens can be infected subcutaneously with a virulent NDRV strain, which can cause disease or even death. However, in practical poultry production, chickens and ducks are rarely raised together, so chickens are not easily exposed to the virus. Even if chickens were exposed to NDRV, it would be very difficult for them to receive such a high viral dose at one time under production conditions. Accordingly, no cases of NDRV infection have been reported in chicken production. However, if frequent exposure to NDRV persists, this could drive viral variation, and chickens could then become susceptible to the virus. Moreover, because reoviruses have multi-segmented RNA genomes, chicken-derived and duck-derived reoviruses are prone to exchange and recombine genome segments. This suggests that the virus may come to infect chickens naturally in poultry production in the future. In actual production, therefore, chickens and ducks should not be reared together, so that chickens avoid exposure to duck-derived viruses such as NDRV and do not adapt to the virus through long-term exposure. Although chickens have so far not been naturally infected with the NDRV strain, there is still a considerable risk of infection. This study provides experimental data for the further prevention and control of NDRV infection in poultry.
Animals and virus
Animal experiments
RNA extraction and reverse transcription
Viral load detection
Histopathology and immunohistochemistry examinations
Analysis of serum antibodies against NDRV
Statistical analysis
Ethics statement
The animal experiments were approved by the Committee on the Ethics of Animals of the Institute of Poultry Science, Shandong Academy of Agricultural Sciences (permit number: 2019005), according to the guidelines of the Review of Welfare and Ethics of Laboratory Animals authorized by the Shandong Municipality Administration Office of Laboratory Animals. The animal experiments were conducted in the Biosafety Level 2 laboratory of the Shandong Academy of Agricultural Sciences, in compliance with the ARRIVE guidelines. Three-day-old SPF chickens and 9-day-old SPF duck embryos were purchased from Shandong Hao Tai Experimental Animal Breeding Company Limited. The novel duck reovirus (NDRV) strain was isolated by our team from a Cherry Valley duck farm in Shandong Province in 2012 and named SD-12 (Accession number: KJ879930). It was inoculated into the allantoic cavities of 9-day-old SPF duck embryos for subculture, and after three passages the virus was used for the challenge in this study. The stock virus titre, determined according to the method of Reed and Muench, was 10^5.00 ELD50/0.1 mL (50% lethal dose for embryos). PCR detection confirmed that no other contaminating pathogens were present in the allantoic fluid. Eighty 3-day-old SPF chickens were raised in negative-pressure isolators and randomly divided into two groups of 40. The experimental group was inoculated subcutaneously with 100 μL of NDRV allantoic fluid (10^5.00 ELD50/0.1 mL); the other group, as the control, was inoculated with 100 μL of PBS. The virus dose was determined by a pre-test. Clinical symptoms in the two groups were observed every day. Ten chickens were randomly selected from each group and weighed every three days for 14 days. Blood from the infected chickens was collected and analysed every day.
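The stock titre quoted above (10^5.00 ELD50/0.1 mL) was determined by the method of Reed and Muench, which is also used later for the 50% endpoints of the serum neutralization test. For readers unfamiliar with that calculation, the following Python sketch shows the standard Reed–Muench proportionate-distance interpolation; the dilution series and embryo counts are hypothetical values chosen for illustration, not data from this study.

```python
def reed_muench_log10_endpoint(dilution_exponents, dead, alive):
    """Estimate a 50% endpoint (e.g. log10 ELD50) by the Reed-Muench method.

    dilution_exponents: positive log10 dilution factors, most concentrated
        first, e.g. [3, 4, 5, 6] for the 10^-3 .. 10^-6 dilutions.
    dead / alive: embryos dead / surviving at each dilution.
    """
    n = len(dilution_exponents)
    # An embryo killed at a given dilution is assumed to die at any more
    # concentrated dose, so deaths accumulate from the most dilute end up;
    cum_dead = [sum(dead[i:]) for i in range(n)]
    # survivors accumulate from the most concentrated end down.
    cum_alive = [sum(alive[:i + 1]) for i in range(n)]
    pct = [100.0 * d / (d + a) for d, a in zip(cum_dead, cum_alive)]

    # Find the pair of adjacent dilutions straddling 50% cumulative
    # mortality and interpolate the proportionate distance between them.
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            prop = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            step = dilution_exponents[i + 1] - dilution_exponents[i]
            return dilution_exponents[i] + prop * step
    raise ValueError("cumulative mortality never crosses 50%")

# Hypothetical titration, 5 embryos per 10-fold dilution (10^-3 .. 10^-6):
exponents = [3, 4, 5, 6]
dead = [5, 4, 2, 0]
alive = [0, 1, 3, 5]
print(f"titre = 10^{reed_muench_log10_endpoint(exponents, dead, alive):.2f} "
      f"ELD50 per inoculated volume")
```

The same interpolation applies to the serum neutralization titres described in the methods, with serum dilutions in place of virus dilutions.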
At 3, 6 and 9 dpi, five chickens from each group were euthanized with carbon dioxide, and their tissues (heart, liver, spleen, lung, kidneys, brain and intestine) were collected. One part of each tissue sample was fixed in 10% neutral buffered formalin solution for histological examination; the other part was stored at −80 °C until use for RNA extraction. The carcasses of the test animals and embryos and the used test materials were disposed of harmlessly. Total RNA of the different frozen tissue samples was extracted using a Total RNA Extraction Kit (Solarbio, Beijing, China). The RNA concentration of the tissue samples was measured with an automatic nucleic acid analyser (Eppendorf, Germany). Complementary DNA (cDNA) was synthesized from 1000 ng of RNA with a PrimeScript RT reagent kit with gDNA Eraser (TaKaRa, DaLian, China) in a 20-µL reverse transcription reaction. The viral loads of the different tissue samples were detected by a SYBR Premix EX Taq (Perfect Real-time) assay . The NDRV S3 gene and the chicken β-actin gene served as the target gene and the reference gene, respectively. Two pairs of primers were designed according to the S3 gene sequence (GenBank No. KJ879932) and the β-actin gene sequence (GenBank No. NM_205518.1). The S3 gene primers were F: 5′-ATGTCGCTGTCACGGGTAA-3′ and R: 5′-TGGTAGGAACCACGCTCAA-3′, amplifying a 196-bp fragment. The β-actin gene primers were F: GTGCTGTGTTCCCATCTATC and R: TTTGCTCTGGGCTTCATC, amplifying a 101-bp fragment. The 25-μL amplification system contained 1.0 μL of forward primer (10 μmol L−1), 1.0 μL of reverse primer (10 μmol L−1), 1 μL of cDNA template, 12.5 μL of 2× SYBR Premix Ex Taq (TaKaRa, DaLian, China) and 9.5 μL of sterilized deionized water. The PCR thermal cycling comprised 95 °C for 45 s, followed by 40 cycles of 94 °C for 10 s, 56 °C for 10 s and 72 °C for 15 s. Each sample was tested in triplicate. Tissue samples were fixed in 10% neutral buffered formalin solution for 72 h, dehydrated, and embedded in paraffin wax, and 4-µm-thick sections were cut. One part of the sections was stained with haematoxylin and eosin (H&E) following standard histopathological protocols, and the pathological changes were observed under a microscope. The other part of the sections, from 6 dpi, was deparaffinized with xylene and hydrated through a graded alcohol series (100–75%) for the immunohistochemical examination. For antigen retrieval, 0.01 M sodium citrate buffer solution (pH 6.0) was heated to 95 °C and the sections were immersed for 10–15 min. After blocking with 5% goat serum albumin buffer for 1 h, the sections were incubated with rabbit serum against NDRV overnight at 4 °C. After three washes with PBS, the sections were incubated with a diluted mouse anti-rabbit HRP-conjugated polyclonal serum for 1 h at 37 °C. Diaminobenzidine was used as the substrate chromogen. After counterstaining with haematoxylin, the sections were sealed with neutral gum and observed under the microscope. Rabbit sera against NDRV were prepared in our own laboratory, using purified and concentrated NDRV as the immunizing antigen.
The rabbit anti-NDRV antisera did not react with the main chicken viruses, such as Newcastle disease virus, avian influenza virus, infectious bronchitis virus, infectious laryngotracheitis virus, infectious bursal disease virus, avian leukosis virus and chicken reovirus, in serum neutralization tests. The antisera also did not react with the main duck viruses, such as duck enteritis virus, duck hepatitis A virus, duck Tembusu virus, duck parvovirus and other common duck viruses, in serum neutralization tests. The antiserum was diluted 1:200 and used as the primary antibody for the immunohistochemical examinations. Serum neutralization testing (SNT) was used to detect the serum antibody titres against NDRV. Serum samples from three randomly selected chickens were collected at 0, 7 and 14 dpi. Before the tests, complement in the serum samples was inactivated at 56 °C for 30 min. SNT was performed with duck embryo fibroblasts (DEF) as previously described , . Serum neutralizing antibody titres were expressed as the reciprocal of the log2 of the highest serum dilution that inhibited 50% of DEF death and were calculated by the method of Reed and Muench . Each sample was tested in triplicate. The experimental data are expressed as means ± standard deviations. Serum antibody titres and body weight data were analysed with Student's two-tailed unpaired t-test. Viral loads in different tissues were evaluated by one-way analysis of variance (ANOVA) with Tukey's post hoc test. Statistical significance was set at P < 0.05 and P < 0.01.
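To make the statistical workflow above concrete (Student's two-tailed unpaired t-test for body weight and antibody titres; one-way ANOVA with Tukey's post hoc test for tissue viral loads), here is a minimal Python sketch using SciPy. All numeric values are placeholders invented for illustration and are not measurements from this study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Placeholder body weights (g) at 14 dpi, 10 chickens per group.
control = rng.normal(loc=120.0, scale=6.0, size=10)
infected = rng.normal(loc=100.0, scale=8.0, size=10)

# Student's two-tailed unpaired t-test, as used for body weight and
# serum neutralizing antibody titres.
t_stat, p_val = stats.ttest_ind(infected, control)
print(f"t = {t_stat:.2f}, P = {p_val:.4g} (significant at P < 0.05: {p_val < 0.05})")

# One-way ANOVA across placeholder viral loads in three tissues
# (e.g. log10 copies), followed by Tukey's pairwise post hoc test.
liver = rng.normal(8.0, 0.5, size=5)
spleen = rng.normal(7.8, 0.5, size=5)
brain = rng.normal(3.0, 0.5, size=5)
f_stat, p_anova = stats.f_oneway(liver, spleen, brain)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4g}")
print(stats.tukey_hsd(liver, spleen, brain))  # requires SciPy >= 1.8
```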
A sociology of precision‐in‐practice: The affective and temporal complexities of everyday clinical care
0095f56f-18b8-4944-9c6f-442f18441f1c
9299761
Internal Medicine[mh]
The idea of precision medicine—of tailoring diagnostic tests and therapeutic interventions to the specific characteristics of individuals to improve patient outcomes—is now a driving imperative in biomedicine (Remon & Dienstmann, ). Like the allied terms ‘personalised medicine’, and to a lesser extent ‘stratified’ and ‘P4’ medicine, ‘precision medicine’ is used in a wide variety of ways, including: to signify cultural and clinical hopes for more efficacious, genetically tailored, treatments; to prefigure transformative therapeutic innovation; to drive investment in new pharmaceuticals; and to conjure a new era (and ethos) of intensified personalisation in clinical care (e.g. Hedgecoe, ; Juengst et al., ; Prainsack, ; Tutton, ; Vogt et al., ). Here, we consider ‘precision medicine’ broadly, paying attention to its rhetorical potency within both the cultural and the clinical imaginaries, its effects on research development and translation agendas, and the emergent complexities as therapies developed under its mantle are implemented in the clinic. We anchor our analysis in the field of oncology, where to date, precision medicine has gained its strongest foothold through the development of targeted and immunotherapies tailored to specific molecular tumour‐markers (Prasad et al., ; Schwartzberg et al., ). While precision medicine has emerged as a new promissory horizon in the management of cancer (Dobosz & Dzieciątkowski, ), it is underpinned by a wide range of heterogeneous, even contradictory practices (Day et al., ). Progress has been uneven (Moscow et al., ), and precision therapies remain unaffordable and/or unavailable to significant parts of the global population (Drake et al., ). The implications of precision medicine in cancer care are only beginning to receive critical sociological analysis (e.g. Bourret et al., ; Chorev, ; Day et al., ; Kerr et al., ). Here, drawing on the experiences of those working at the nexus of therapeutic innovation and clinical practice, we seek to contribute to this emerging sociology of precision‐in‐practice, using cancer care as an illustrative case. We ask: how is precision medicine being realised and experienced, and with what consequences? Our analysis seeks to expand on the idea of precision medicine as offering unbridled hope and future possibility to explore some of its more challenging dimensions, including the affective and temporal complexities that emerge within everyday care as precision medicine is implemented in practice.
Possibility, at a price
Enchantment and acceleration: Precision‐in‐practice
In the rich world, the advent of precision medicine in oncology—specifically the development of targeted and immune therapies guided by individual molecular biomarkers—has revolutionised cancer care, radically improving some patient outcomes. As a result, it has placed oncology at the forefront of the clinical implementation of precision medicine more broadly (Moscow et al., ). For example, the well‐known targeted cancer drug Trastuzumab (Herceptin™) has been credited with improving overall survival by almost 40% for patients with HER2‐positive breast cancer (Perez et al., ). Melanoma and non‐small cell lung cancers have also witnessed marked improvements in patient outcomes due to molecular diagnostics (Chapman et al., ; Cutler, ; Hyman et al., ; Mills & Maitra, ; Ramaswami et al., ; Skoulidis & Heymach, ).
Immune checkpoint inhibitors, selected on the basis of molecular markers, have resulted in increased survival rates over the use of pre‐existing cytotoxic chemotherapy regimens, and the recent identification of the Tumour Mutational Burden (tTMB) biomarker, which may predict a patient's response to immunotherapy drugs regardless of their particular cancer type, further signals the therapeutic promise of precision oncology (see, respectively, Reck et al., ; Marabelle et al., ; Schmid et al., ; Shendure et al., ). Given these dramatic successes, the field of oncology is increasingly focussed on the pursuit of molecular interventions, animated by the affective orientation of intensified hope and expectation that often underpins therapeutic innovation (see Brown & Michael, ; Novas, ; Petersen & Wilkinson, ; Sturdy, ). However, these developments are beset by accelerating costs, as well as by incongruous alignment between targets/mutations identified and efficacious therapeutic solutions. In terms of costs, the United States doubled its spending on precision therapies between 2012 and 2017 (Cutler, ; Prasad et al., ), with the median monthly cost of cancer drugs now sitting at over US$13,000 (Vokinger et al., ). Similar spending increases can be observed across OECD countries, with a recent study showing that public expenditure on cancer treatments has tripled between 2005 and 2018 (Wilking et al., ). The recent (somewhat controversial) decision of the U.S. Food and Drug Administration (FDA) to approve two targeted drugs and immunotherapies across all solid tumours (so‐called ‘pan‐cancer’ or ‘tumour agnostic’ approvals) (Khasraw et al., ; Strickler et al., ) is likely to lead to a further escalation of costs. In Australia, where cancer care is financed through a complex mix of Federal and State funding, private health insurance and out‐of‐pocket funds, and delivered by both the public and private hospital systems, costs are also rising quickly across the board (AIHW, ). The above examples illustrate an emerging tension between the promissory potential of precision medicine, and the costly realities of the varied practices and therapeutics that are being developed and implemented under its mantle. The resulting dilemmas around possibility vs. affordability make precision medicine an important space for sociological analyses, especially as it is increasingly being implemented in practice. An emerging corpus of social science scholarship on precision medicine has increasingly attended to the ethics, biopolitics and (potential) injustices of genomic innovation (e.g. Feiler et al., ; Kerr et al., ; Prainsack, ; Sturdy, ; Sun, ); the changing taxonomies of knowledge and implications of molecular testing (e.g. Bourret & Cambrosio, ; Cambrosio et al., ; Chorev, ; Kuch et al., ); and commercialisation versus contested assessments of patient benefit (Gavan et al., ; Gyawali & Sullivan, ). Few have sought to explore precision as it is subjectively experienced day to day, especially by those working at the frontline of patient care (though see Bergeron et al., ; Crabu, ; Kerr et al., ). Here, we build on and expand this growing corpus of work—drawing on both classical and contemporary social and political theory—to help widen the analytic lens and better capture some of the affective and temporal complexities that are accompanying precision medicine as it is implemented in everyday clinical settings.
The first notion that we argue may assist in better understanding the lived experience of precision medicine in cancer care is enchantment. Jane Bennett uses enchantment to name a sense of wonder, presence and intensity that manifests, in part, as the alteration of chronological time (Bennett, ). Other authors have used the notion of enchantment to refer to something that emerges from a complex net of social relations (see: Bennett, ; Berman, ), including dynamics of hope and (future) possibility, as well as the socio‐material relations of investment in progress‐towards‐cure (see also Novas, ). In our own study, enchantment captures the affective and temporal complexities emergent from the allure of (future) therapeutic innovation, (current) professional ambitions (to cure, to heal) and enduring questions of meaning (especially in terms of care). But as the quotes below show, enchantment also makes visible tensions between the pursuit of cure and the provision of care, and the temporalities that inflect each of these (see Kenny et al., ). Precision oncology and the accompanying rapid proliferation of therapeutic innovation have created an acceleration of potential opportunities, creating the impression that future possibilities are materialising, in real time, if patients can hold on until the future arrives. This focus on future opportunities can distract from enduring difficulties that pervade cancer care, for example from questions around affliction, grief and mortality. The pace of precision innovation has not only led to a reimagined future (i.e. shaped ideas about where the ‘field’ will be in 5–10 years). In wealthy industrialised settings, it has also fundamentally changed how oncology is practised in the here and now. Here, enchantment intersects with social dynamics of acceleration, which have been of enduring concern to a broad range of social theorists (Adams, ; Rosa, ; Wajcman, ; Wajcman & Dodd, ). While the meanings and consequences of acceleration remain contested, sociologists such as Wajcman and Dodd ( ) point out the centrality of technological innovation in the acceleration of social life, which together have served to substantially reconfigure temporal experience—the individual and collective sense of being in time. Our dual emphasis on enchantment and acceleration, we argue, may be useful in understanding both the allure of new discoveries and the sense of the future materialising in the present through precision medicine. Drawing on the perspectives of cancer care professionals, below we focus on the context of precision oncology, offering a closer examination of the emerging affective and temporal complexities of precision medicine within (and beyond) cancer care.
Data collection and sample
Analysis
This paper draws on 8 face‐to‐face focus groups (FG) carried out across two public hospital settings in two different states in Australia from July 2019 to March 2020. The overarching aim was to explore how precision medicine is being understood and experienced by a range of cancer care professionals. After we obtained ethics approval from a hospital human research ethics committee (HREC), and site approval at each location, an invitation was sent via email to relevant stakeholders (working in cancer care) across the two hospitals. In total, 54 individuals were available to participate in 8 focus groups with 4 to 9 participants in each group.
These groups included medical oncologists (FG 1 & 5); nurses and cancer care coordinators (FG 2 & 6); junior doctors (FG 3 & 7); clinical trial coordinators (FG 4 & 6); and clinical trial coordinators and nurses (FG 8). The focus groups were conducted by KK & AB, lasted between 50 and 70 minutes, were audio recorded and fully transcribed. There were 41 female and 13 male participants, all assigned pseudonyms to protect their anonymity. Medical staff included representation from medical oncology, radiation oncology, haematology and palliative care. The study was framed in terms of wanting to explore participants’ understandings, experiences and reflections on precision medicine, and discussion was focussed around the domains of value (how it is assessed); access (how precision medicine is institutionally mediated); interests (what is perceived to influence practice); encounters (in participants’ day‐to‐day work); and cost and benefit (in all their varied manifestations). The methodology for this project draws on the interpretive traditions within qualitative research (e.g. Charmaz, ). This involved taking an in‐depth exploratory approach to data collection, aimed at documenting the subjective and complex experiences of the participants. The aim was to achieve a detailed understanding of the varying positions adhered to, and to locate these within a spectrum of broader underlying beliefs and/or agendas. The approach used was developmental, in that knowledge generated in the early focus groups was challenged, compared with, and built on by later groups. This provided an opportunity to establish initial themes and then search for divergent cases, complicating our observations and retaining the complexity of the data. An initial thematic analysis was conducted independently by KK, AB, AP and BP, who coded the data, wrote notes and subsequently discussed potential themes together as a research team. Data analysis took place concurrently with the qualitative fieldwork over a period of months, with an initial analysis completed after each group to identify themes within the groups. Once all focus groups were completed, KK and AP again reviewed the transcripts to identify and confirm themes that ran across the different groups. Throughout this process, we continually sought to retain the richness of the respondents’ experiences, documenting the full range of perspectives, conflicts and contradictions within the data. The final step involved revisiting the literature and seeking out conceptual tools that could be employed to make sense of the themes we identified from the data.
The collective affects of ‘low‐hanging fruit’, ‘spectacular winners’ and ‘lumpy landscapes’
Anticipation, benefit and participation
‘Moonshots’ and ‘rabbit holes’: Diversion of research and innovation
The broad project of precision medicine has received relatively little analytic attention in terms of its overall public health impact and implications for health systems (though see Ramaswami et al., ). This, we argue, evades critical questions about the pursuit of precision vis‐à‐vis notions of a ‘rational’ investment in care.
As is shown below, some participants perceived a diversion of funding away from ‘routine’ care or ‘smaller scale’ research towards the longer‐term pursuit of precision innovation through ‘moonshot’ and ‘rabbit hole’ initiatives: Leslie: …all the current research funding is being diverted towards Moonshot programs in the US… and that long‐term view is very attractive, but given scarce funding, there’s nothing on actually impacting short term or the majority of patient outcomes, and that concerns me… – FG 5, Medical Oncologists The diversion of resources away from routine cancer care (see: Marquart et al., ) was viewed as emergent from research funding guided by a cultural and clinical preoccupation with precision medicine and by the commercial interests of pharmaceutical companies. Our participants noted how such priorities were often at odds with their own sense of proportionality: Will: …there’s a lot of biology that’s understood, but it’s just been a bit of a neglected area in terms of potential targeted therapies. Focus Group Convener: And why do you think it’s being neglected? Will: It’s just not seen, I guess, from commercial pharmaceutical companies. It’s not as big a market as some of the other types of cancer. So, there’s tumour stream discrimination. There’s also been the impact of immunotherapy, and in some ways, that’s actually diverted a lot of the attention and, I guess, research activity. – FG 5, Medical Oncologists Linda: So, if we’ve committed to putting five patients on a trial, and we put nobody on a trial in six months, they’re on the phone going, ‘Why haven’t you? What’s wrong? Can we help you?’ [later] They have their timelines and agendas, and they’ve got to complete a study within a certain time to make it worthwhile for them. – FG 6, Nurses & Clinical Trial Coordinators This perceived ‘diversion’ of/to participation in trials was accompanied by a perception of higher levels of care (i.e. interpersonally, supportively and therapeutically focussed) within the trial context than in the context of publicly funded standard‐of‐care treatment, due to higher resourcing. This, in turn, increased levels of patient satisfaction from trial participation: Focus Group Convener: Why do we have so much more support in the trial space? Linda: Because we have an allocated staff member to each trial. And so that allocated staff member stays with those patients on that trial. Whereas patients having standard of care perhaps, chemotherapy, immunotherapy, down in the day treatment unit, they’ll get whichever nurse is on for the day, which might be a casual nurse, an [agency] nurse. Very rarely they would get the same nurse each time. Whereas on trial, the patient has the same nurse coordinator with them through the journey, through the trial. Narelle: And you’re also able to offer them a bit more money, funding, I suppose. If they have a side‐effect, for instance, and they need to see endocrine, we can send them to a private endocrinologist and the trial, potentially, would pay for that. Whereas patients here, if they’re in the public hospital and they don’t have health insurance, they have to go to the public clinic. So, I don’t think it makes that much of a difference– Linda: No, it does. Yeah. Narelle: –but patients kind of get that little bit more specialised, kind of– Yeah. […] Tyra: Yeah. So, they know once they’re on the trial they would be really taken care of. So, they have that comfort and trust.
So, that’s a big, big, big thing for us… They know they’d be taken care of, this is a good trial, the drugs are working very well. So, that’s really pushed them [to participate]. – FG 6, Nurses & Clinical Trial Coordinators The better‐resourced trial context thus provided another dimension to the allure of participation in precision innovation, which had the potential to position translational research and standard care as somewhat neglected by comparison, diverting attention (and resources) away from the pursuit of more immediate ‘real world’ patient impacts: Ron: The hype, basically, and the expectations [for precision medicine] is so overinflated that we can’t get funding for grass roots research… because it’s gone to someone who’s actually added genomic sequencing into their study. Because it sounds great. You’re going to learn about the tumour. But what are you going to do? How are you going to impact? Whereas, ‘We’re doing real world research for immediate impact on the patient. Come fund it’. It’s not perceived as interesting or fascinating. Susan: But it’s not sexy is it? […] Leslie: …And I think that if you have that kind of bias in the evaluators, then it shifts funding towards something that is realistically unlikely to alter treatment in the majority of patients. It identifies subgroups, but is it really diverting scarce funding away from other worthy resources that will truly improve patient’s quality of life?... The word on the street is, is pretty much anything that does not have genomic in the title or as a correlative is very unlikely to attract significant interest. And, for me, that’s a concern because, well, we’ve got a scarce pot of money… diverted to other things, and if it’s all diverted to this, there’s no money left to do anything else. And I think that’s not a good thing. And we just sort of agree that this is promising, but what does that mean? Are we really making bad decisions otherwise as a result…? – FG 5, Medical Oncologists Precision medicine's affective pull was palpable across the focus group discussions, with participants repeatedly highlighting how a wide range of cultural, professional and economic influences lead to diversion away from other, also worthy, modes of cancer care. The materialisation of precision medicine in oncology may thus be forging a newly reconfigured landscape in which the relentless pursuit of (precision) cure may be diverting attention and resources away from more ‘mundane’ forms of care. The development of precision medicine in cancer care—specifically the development and implementation of targeted and immune therapies—was viewed as highly transformative by our focus group participants, regardless of the many complexities that also emerged over the course of the discussions. Novel targeted and immune therapies were viewed as a ‘win’ not only for patients, but for cancer care professionals, as well: Linda: That’s the thrill of it. Five years ago, he would’ve been dead in six months. So, he’s now three years down the track, [and doing] really well. So that’s the thrill to have a patient journey that’s just, you know [great]. Wendy’s had contact with him for these three years, every four weeks. And seeing somebody well like that is just very thrilling and rewarding, and the patient’s great. Fran: I’ve got this lovely one. A phase one trial, brand new, first time in humans…one [patient] was 32 [years old] at the time. 
She had a [primary site] with secondaries in her bones and her liver, barely could walk because of bone fractures…She had a little girl who was two. Anyway, we popped her on and she was on the drug… we’re now down to about four years now and she has nothing hot on her PET scan, she’s jogging, her little girl’s starting school. We see her every three months now with a scan, and still she’s doing great guns. And that’s a brand new [molecular] drug we [had] never seen before. – Focus Group (FG) 6, Nurses & Clinical Trial Coordinators The transformative potential of targeted and immune therapies was set in marked contrast to conventional therapies, as having radically changed the prognosis of patients, as well as heralding new possibilities for the field of oncology: Luke: So, I think an excellent example [of the impact of precision] would be the advancements of lung cancer… we’ve gone from… a median survival of less than a year in the metastatic setting with conventional chemo, to now it’s estimated that a newly diagnosed patient with modern targeted therapy… their median survival was probably closer to five years. And that’s with 2019 science, technology, and drugs, and we would anticipate that actually may get better. – FG 1, Medical Oncologists While the anticipation of continued increases in survival across cancer types was a core feature of participants’ understandings of precision medicine, our participants described a much more uneven landscape in practice, intimating some of the potential inequalities (in terms of access and across cancer types) within precision innovation: Georgia: I think that there’s a stall at the moment as well too. I think the science has picked off the low‐hanging fruit in terms of targeted therapies and immunotherapy and we’re at a point at the moment where the next phase is harder to predict and much more difficult to develop… – FG 1, Medical Oncologists Another participant noted: Will: So, I think we’re in this very early stage of personalised medicine and it’s an uneven and lumpy and unfair landscape. But we want to engage with it because we see that biologically, it’s meaningful– Erica: It has to [be]. Will: –and that possibly there will be some patients who will do spectacularly better. Now, what’s sort of complicated it too, is with immunotherapy, of which there are actually very poor ways of predicting who will work, but there are these spectacular winners. – FG 5, Medical Oncologists The disconnect between the animating allure of precision innovation and the more uneven reality of precision‐in‐practice could be a source of difficulty for practising clinicians, both in terms of managing what participants described as ‘patient expectations’, care pathways and potential side effects, including financial toxicities: Luke: I think there is an increasing disconnect between what the public perception is of what we can do and what we can actually do. And that’s probably compounded by the fact that we now have ready access, even if it’s not funded, to super expensive drugs. There’s a smorgasbord of things that you could potentially prescribe for someone that might have little to no benefit. And I do wonder privately, whether there’s a lot more of a push to, ‘Well, what have we got to lose?’ And I suspect that those oncologists don’t necessarily talk to the patients about the financial toxicity that they have to lose.
[…] Nadine: …these aren’t our pathologists in our hospital… often it’s a geneticist, someone in a lab that doesn’t have any clinical background or context, and they give you something and it basically goes from practicing medicine to doing an experiment in your patient, which I find very, very difficult. – FG 1, Medical Oncologists Caring within precision oncology, then, was articulated as juggling the diverse and sometimes contradictory imperatives of research and development, patient expectations, financial considerations and clinical care. There was recognition of the generalised attention to precision medicine and the promissory horizon of therapeutic innovation therein, as well as the general dependence of precision medicine on questions of cost and access, including the economic interests of pharmaceutical corporations. Yet, the pharmaceutical industry was also seen as central to advancing the scientific base of oncology practice, representing an interesting set of circumstances and accompanying tensions. The successes of (some) targeted therapies are here entangled with collective affective attachments to possible cures, thus creating widespread enchantment with precision innovation, but also obscuring the uneven landscape and potential toxicities for patients (and their families), financial and otherwise (see: Marchiano et al., ). For oncology professionals, then, caring with and through precision oncology is still suspended between the uneven present reality of precision‐in‐practice, and the alluring horizon of the promissory future. Another key dynamic articulated by participants in our focus group discussions was how the allure of therapeutic innovation creates a temporal pull towards the future—encouraging survival at all costs, be they affective and/or material. Of course, the affective and temporal pull of survival is not unique to precision medicine, and the question of when to transition from active treatment to palliation, even in the face of therapeutic innovation, has long antecedents in oncology (e.g. Baszanger, ). Nevertheless, as illustrated through participants’ reflections below, precision oncology was experienced as operating simultaneously across a range of timescales, including the anticipated future, where appraisals of the future benefit of (imagined) technological innovation of tomorrow governed practices (and the decisions) of today, intensifying the problematic pull of survival. Yet this presents various challenges for patients and health professionals alike: Deborah: I think it [precision innovation] might be giving them more options where there wasn’t options before. I think– Larissa: And hope. Deborah: Yeah. Yeah… they want more time… Larissa: It is very difficult. Because I think cancer care and cancer treatments and options wouldn’t be where they are today unless people had taken those risks and all of that. But balancing that up, I mean, there are those that you don’t have to be an oncologist to actually know. We all know that the patient’s dying, and we predicted that weeks before their death, but they’re still having treatments, which perhaps may have high cost to them… There’s not the resources to go around and I think it’s making sure that it’s fair and equitable to people who have hope of life or extension of a good quality life for X amount of time.
– FG 2, Nurses & Cancer Care Coordinators The intertemporal dynamics of participation in precision innovation, often through clinical trials research, presented challenging questions of relative benefit across time: Patricia: We have patients, I guess, who are desperate to try anything… our patients… will do almost anything to get on the trial. Larissa: It’s beneficial on multiple levels, right? Patricia: Yeah. Larissa: I mean, you’ve got people who are funding the research who are ultimately going to benefit. Patricia: Yep. Larissa: You’ve got the clinicians who are monitoring the trials, who are actually learning. And you’ve got the patients, who hopefully, if not those particular patients, but if not, then patients in the future who are going to be benefiting medically from that. And then hopefully us all financially. – FG 2, Nurses & Cancer Care Coordinators As detailed across many of the focus group discussions, the push for more time—driven by the dynamics of hope, which have intensified alongside the rise of precision innovation—often became the overriding imperative for patients (and their families), eliminating the space for contemplation of other facets of encountering cancer: ‘coming to terms’ with it, ‘the quiet progression’ of disease and ‘grieving’ the life that may have been. While such tensions between cultivating life and allowing death are common when caring in the context of potential mortality (e.g. Baszanger, ; Broom & Kirby, ; Broom et al., ), the affective pull of accelerated innovation was often articulated as a further distraction from discussion around the finality of mortality: Larissa: As I say, [discussing palliative care is] far easier if it comes from the patient. It’s a very frustrating part of our job because sometimes you know that somebody should be having a better death than what they’re already having. People don’t have, what I call, the quiet progression. Some, I suppose, for want of a better analogy, screech into death. The family haven’t had time to sort of come to terms with things. There’s no quietness around the bedside, there’s no time for grieving beforehand. It’s all go, go, go until maybe a couple of days before. […] Patricia: They [patients] think if they can get more time it’s worth it, rather than thinking what they will actually be like and be able to spend that time they have. But it’s not something everyone thinks about, especially if they’ve not seen it before… They think their Mum’s going to be here for an extra couple of months, and they’ll be fine, and keep going. – FG 2, Nurses & Cancer Care Coordinators The uncertain benefit of participation in precision innovation is offset here by an ethic of participation in the project of oncological progress. This is akin to an intertemporal social contract between the past subjects of biomedical research whose participation made possible the innovations of today, the future subjects who will hopefully benefit from current research, and those living‐with‐cancer, whose participation often incurs a ‘high cost to them[selves]’. In this way, the present subjects of precision innovation are temporally suspended between the legacy of past progress and the anticipation of future developments.
The value of precision oncology for patients, then, often remained promissory, or as something to be realised in the future, but which required participation in the present: Debbie: What I struggle with is that sometimes I think some of these things [precision therapies] may well be the future, but if we don’t, if we just open the doors and let everybody do all these things, you’re never going to get any evidence. Luke: But the ability to do these tests is outpacing the ability to understand, evaluate, interpret, and incorporate that treatment into standard of practice. Debbie: But it’s almost impossible to see that we’ll ever manage to do enough research to be able to understand all of it, because it’s just so vast. – FG 1, Medical Oncologists Caring through precision innovation, then, is lived in explicit reference to the future, striving towards its realisation by managing participation (and care) in the present. Despite lacking comprehensive targets yet, and an ongoing incapacity to translate the latest developments into the clinic and everyday care as per above (‘we’re not quite there yet’ / ‘it hasn’t made such a big impact yet’ / ‘we really don’t know yet’), participation in precision innovation is driven by the sense of its potential efficacy and the hope for future cures. Yet the intertemporal coherence of participation at the cutting edge of precision medicine was articulated as precarious, as the pace of innovation jarred with the immediacy of patients’ hopes for new treatments: Georgia: …something I’ve noticed, because I have a foot in the lab as well and so I understand the researchers, they’re often trying to really publicise their pre‐clinical results. Because in the world of science, there’s a lot of competition about funding. So getting publication, getting your name out there, can bring money. And so, I think probably a lot of us have had the experience where the [Australian newspaper] will publish like, ‘New Hope for Breast Cancer’ and it’ll be some– Luke: Yeah, they’ve killed two cells in the lab. Georgia: Yeah, or something. Debbie: Well, at least a mouse… Georgia: And patients will come clutching this into clinic and say– Luke: Yeah, ‘I want this’. Georgia: And I now understand why this is happening, but I also find that in some levels it can sort of shape, in my view, into irresponsibility. – FG 1, Medical Oncologists The ethics of participation here speak to the ongoing actionability gaps of precision medicine (e.g. Moscow et al., ). While the ability to participate in testing for genetic mutations and the make‐up of tumours is more readily available (though often at exorbitant cost to healthcare systems and patients), the capacity to use this information for treatment was discussed as lagging far behind cultural expectations. In this way, the allure of future possibility was articulated as distracting from the provision of care in the present. The rise of precision medicine has heralded much transformative potential, with spectacular gains in particular areas of cancer care. However, these gains have been achieved within a complex social, political and economic context that has received limited sociological attention, at least insofar as it inflects clinical practice.
This is despite precision therapies drawing an increasing proportion of research and development funding and health system resources (Reed et al., ; Vokinger et al., ) while also encountering various challenges in being integrated into routine care (Filoche et al., ; Olstad & McIntyre, ; Ramaswami et al., ). How precision medicine is being implemented—in cancer care and beyond—and with what consequences is thus in need of sustained critical attention. Here, we have highlighted how the affective pull of innovation represents one aspect of precision‐in‐practice that has not yet been examined. That is, how collective affects—aspirations of progress and hopes for cure analysed here under the rubric of enchantment—can obscure more critical approaches to the institutionalisation of precision medicine in cancer care. At the same time, we have argued that precision medicine has contributed to—or perhaps joined—a temporal acceleration, in which the sheer velocity of therapeutic innovation risks leaving behind important questions of value, cost and benefit (in all their multifarious meanings). This acceleration is evident across scales, from the interpersonal (e.g. scrambling to deal with diagnosis, trials and the proliferation of treatments) and the professional (e.g. ‘keeping up’ with the latest molecular‐driven innovations) to the level of the cultural and clinical imaginary (e.g. where precision medicine is imagined to be a rapidly materialising source of ever‐more‐promising cures). As a result of these affective and temporal complexities of precision‐in‐practice, attention can be diverted from the ultimately contestable nature of what constitutes ‘a good outcome’ in oncology and, in turn, from age‐old problems, for example, of affliction, grief and mortality (e.g. Broom et al., ). Thus, we argue that in addition to ideas of enchantment and acceleration, the idea of distraction may help make sense of the consequences of precision medicine as they are playing out in practice (see: North, ). As Taussig ( ) characterises it, distraction names a distinctly modern apperceptive mode, which contrasts with an earlier, more contemplative, experience of attention. Taussig ( ) uses the term to refer to ‘the type of flitting and barely conscious peripheral‐vision perception unleashed with great vigour by modern life at the crossroads of the city, the capitalist market, and modern technology’ (p. 148). If the ideal‐typical figure of earlier modes of attention was the lone worshiper contemplating the divine, the modern embodiment of distraction comes through mass communication, the business cycle and the perpetual motion of the everyday. Importantly, distraction is not a hostile or even intentional force, but rather is emblematic of the pace and proliferation of competing calls for attention in contemporary societies, especially amidst various forms of acceleration. Here, distraction helps make sense of the multivalent, sometimes discordant meanings of precision medicine. This includes the dissonances between the pervasive affective orientation of hope and possibility inspired by the idea of precision medicine, and the more heterogeneous and complicated realities that emerge in practice. Operating across scales from the individual to the global, distraction speaks to processes and developments that ask us to pay attention to certain things over others, for example attending to the hope inspired by precision medicine over the costs incurred by the implementation of different precision therapies in practice.
Considering the distracting potentiality of precision medicine in this way requires that we ask: if the collective affects of enchantment and acceleration turn our attention towards precision innovation, from what else do they potentially distract? We note here that innovation not only ‘advances’ the field, but also reconfigures it. On the basis of participants’ reflections in the focus group discussions analysed above, it is evident that precision medicine has already yielded considerable gains in terms of novel treatments and patient survival. Alongside these advances, though, we argue it is worth pausing to consider how precision medicine as it is being implemented in cancer care may contribute to uneven innovation and even, more controversially, structural neglect. In contrast to the everyday connotation of neglect as entailing in‐attention, irresponsibility and even deliberate interpersonal harm (see Reader et al., ), here we deploy neglect as emergent from the disjuncture between what we collectively hope precision medicine will be (precision medicine as a promissory horizon) and its more uneven reality as it is being implemented in cancer care. Here, the affective and temporal complexities of precision medicine forge an attentional landscape in which the allure of therapeutic innovation can distract from routine care. This draws our analytic attention to the unintended, emergent voids and omissions that are arising alongside precision medicine, that is, to the paradox of precision innovation—how advances in precision medicine can offer enchanting therapeutic potential while simultaneously creating new vulnerabilities (in terms of access, in terms of care). Such paradoxes demand sustained sociological attention, including at the bedside and in the clinic, as precision medicine moves from promissory horizon to an emerging (albeit uneven) clinical reality.
Do the patient education program and nurse-led telephone follow-up improve treatment adherence in hemodialysis patients? A randomized controlled trial
9848e21e-dbd4-4fc6-9c9d-446c80f27f0a
8028152
Patient Education as Topic[mh]
Dialysis patients usually do not adhere to their therapeutic regimen. The nurse’s educational program alone is not sufficient. Telephone follow-up can improve patients’ adherence to the dialysis treatment. Chronic Kidney Disease (CKD) is a general health risk factor worldwide with a higher prevalence in individuals older than 60 years . CKD refers to a type of kidney disease in which the kidneys lose more than 50% of normal function and is defined as the presence of kidney damage or an estimated Glomerular Filtration Rate (eGFR) below 60 ml/min/1.73 m² persisting for at least 3 months . Patients with CKD experience different stages of the disease (one to five), which are determined based on the eGFR level . Stages 3 to 5 of CKD have considerable negative effects on patients’ activities of daily living, health status, nutrition, and water and electrolyte homeostasis, which can cause uremic syndrome (uremia) and result in death if not treated . End-Stage Renal Disease (ESRD) is the final stage of CKD and is defined as an irreversible decrease in kidney function, such that a regular course of long-term hemodialysis or a kidney transplant is needed to sustain life . ESRD is one of the major public health problems worldwide, and it can cause considerable financial stress for societies and health systems . Based on the National Health and Morbidity Survey reports, the prevalence of CKD has increased from 9.1% in 2011 to 15.5% in 2018. The incidence and prevalence of ESRD have also increased notably over the last 25 years, and the number of ESRD patients is estimated to reach 51,000 in 2020 and 106,000 in 2040 . In Iran, the prevalence and incidence rates of ESRD are about 357 and 57 per million per year, respectively . According to the health statistics, the number of CKD patients in Iran was more than 55,000 in 2016, out of which 27,500 received hemodialysis (HD) and 1600 received peritoneal dialysis (PD) . Further studies have shown considerable growth in the number of CKD patients in Iran, as the number of these patients increases by 15% every year . Currently, HD is the most common method of treatment for ESRD patients . However, patients receiving HD have to deal with several issues and changes in their lives. To avoid cardiovascular complications caused by HD, the patients need to adhere to a special diet . Establishing a successful HD program depends on adherence to four factors: diet, medication use, fluid restrictions, and HD attendance . Adherence to the therapeutic regimen in ESRD patients is a major factor in achieving the preferred therapeutic outcomes. It decreases the hospitalization rate, debilitation, and side-effects such as nutritional disorders, muscle spasm, and blood infection . Failure to adhere to the therapeutic regimen is a major problem in patients with chronic conditions, including those on HD. More than half of HD patients fail to follow their therapeutic regimen . Different factors affect HD patients’ adherence to the therapeutic regimen. These factors include knowledge about the therapeutic regimen, socioeconomic status, health beliefs, attitude towards treatment, and culture. Adherence to dietary recommendations, fluid restrictions, and medication is not easy for patients, and failure to adhere poses serious risks . Given their limited health knowledge, patients have little control over the disease and its complications. 
Therefore, patient education can lead to a higher level of satisfaction, a better quality of life, assurance of care continuity, reduced anxiety, less severe complications, attendance at healthcare programs, client independence in performing daily activities, better provision of healthcare, and a decrease in treatment costs . In Iran, the provision of educational courses about therapeutic regimens for CKD patients is not well performed by medical professionals because of the large number of patients, lack of time, and ward overcrowding . Therefore, quality patient education requires proper health education methods to ensure a patient-centered interaction and fulfillment of patients’ educational needs . Tele-nursing is one of the methods that rely on information technology. Understaffed wards, the increased prevalence of chronic diseases, and population aging worldwide call for proper management to cut medication costs. Long distances from health facilities and changes in health policies have boosted the popularity of home healthcare and tele-nursing. Healthcare can be shifted from a hospital-centered to a community-centered care model and from a care-centered to a patient-centered care model through information technology . Tele-nursing relies on numerous communication tools such as radio, TV, computer, smartphone, and telephone. The key factor in successfully monitoring patients remotely is to apply a tool that is easy for the user to operate. In addition, there should be no need for intensive training on how to use the tool . Tele-nursing refers to applying telecommunication technology in nursing to deliver and improve care services to patients. The nurse-led telephone follow-up is a well-known care intervention in tele-nursing, as this technology (telephone) is now widely available . Patient education and follow-up care play a key role in rehabilitation after hospital discharge. Over the past few years, several studies conducted in Iran on tele-nursing have shown that patient education through booklets alone is not enough to improve treatment adherence, and that there is a crucial need to implement follow-up methods after hospital discharge . Regarding the importance of adherence to the therapeutic regimen and the role of patient education and nurse-led follow-up interventions in patients undergoing HD, the present study was conducted to determine the effects of a patient education program and nurse-led telephone follow-up on treatment adherence in hemodialysis patients. The alternative hypothesis states that the mean score of treatment adherence in the intervention group differs significantly after the patient education program and nurse-led telephone follow-up are conducted. Study design and setting Participants Data collection Intervention Data analysis This is a single-blinded, randomized controlled trial conducted from April 2019 to May 2020 in Taleghani Hospital in Urmia, Iran. In the present study, the target population consisted of HD patients admitted to the dialysis ward of the hospital. Considering a confidence interval of 95% and a power of 80%, based on the study by Zamanzadeh et al. (2017), the minimum sample size was calculated to be 56 using G*Power 3.1 (Erdfelder, Faul, & Buchner, 1996). Accounting for an attrition rate of 20%, the final sample size was set at 66 ( n = 33 per group) . 
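As an aside on the arithmetic, the sketch below shows one common convention for inflating a computed sample size to offset anticipated dropout. It is illustrative only: the helper name and the round-down step are our assumptions, not the study's reported procedure, though multiplying 56 by 1.2 and rounding the per-group size down does reproduce the reported 33 per group.

```python
import math

def inflate_for_attrition(n_base: int, attrition: float, groups: int = 2) -> int:
    """Illustrative attrition adjustment (not the study's exact procedure).

    One convention multiplies the base sample size by (1 + attrition);
    another divides it by (1 - attrition). Either way, the result is
    rounded so that the groups stay equal in size.
    """
    inflated = n_base * (1 + attrition)        # 56 * 1.2 = 67.2
    per_group = math.floor(inflated / groups)  # floor(67.2 / 2) = 33
    return per_group * groups                  # 33 * 2 = 66

print(inflate_for_attrition(56, 0.20))  # -> 66, i.e. n = 33 per group
```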
Inclusion criteria were as follows: (a) willingness to participate in the study, (b) being literate, (c) being sufficiently conscious and oriented to answer the questions, (d) having no history of hearing and vision impairments, (e) having no cognitive disorder, (f) having a personal mobile phone and the ability to use it, (g) using no psychedelic drugs, (h) an exact diagnosis of CKD confirmed by a nephrologist and having a medical record in the dialysis ward, (i) receiving hemodialysis three times a week in sessions of 3 to 4 h, and (j) being in the 18–65 age group. Exclusion criteria consisted of (a) withdrawal from the study at any phase, (b) failure to receive two consecutive messages/calls, (c) patient death, and (d) being transferred to another health facility. Data were collected using a demographic questionnaire, the End-Stage Renal Disease Adherence Questionnaire (ESRD-AQ), and the laboratory results record sheet. The demographic questionnaire consisted of items on age, gender, marital status, education, residency, occupation, dialysis vintage, and the burden of comorbidities. The ESRD-AQ is a self-report tool that consists of 46 items in five sections and was designed to evaluate treatment adherence in the four dimensions of HD attendance, medication use, fluid restrictions, and diet recommendations (see Additional file ). The first section seeks general information on patients’ ESRD and history of renal replacement therapy (5 items), and the remaining four sections inquire about treatment adherence in the four dimensions of HD attendance (14 items), medication use (9 items), fluid restrictions (10 items), and diet recommendations (8 items). Responses to this tool are based on a combination of Likert-scale, multiple-choice, and “yes/no” answer formats. The overall score ranges from 0 to 1200, and a higher score indicates a higher level of treatment adherence . Rafiee et al. (2014) confirmed the reliability of the tool using the Cronbach’s alpha coefficient (α = 0.91). Moreover, the test-retest reliability coefficient was calculated to be 0.85 . Kim et al. (2010) also examined the content validity of the ESRD-AQ by calculating the Content Validity Index (CVI = 0.99) . The laboratory results record sheet includes laboratory values of serum sodium, potassium, calcium, creatinine, phosphate, albumin, iron, Blood Urea Nitrogen (BUN), hemoglobin, the normalized protein catabolic rate (nPCR), and Kt/V (K: dialyzer clearance, t: dialysis time, V: distribution volume of urea). After obtaining approval from the Ethics Committee and Vice-Chancellor for Research of Urmia University of Medical Sciences, the researcher referred to the hospital and obtained permission from the hospital officials. The researcher then gave clear explanations of the study process to the head nurse of the dialysis ward. In this phase, convenience sampling was utilized to recruit the patients. Then, eligible patients were invited to participate in the study, and the researcher provided them with explanations of the study process. All participants were also assured of the confidentiality and anonymity of all personal information received. Next, written informed consent was obtained from all participants or their legally acceptable representatives. Subsequently, the patients were allocated to two groups of control ( n = 33) and intervention ( n = 33) using sealed-envelope randomization. In this randomization method, the researcher used two cards (A, B) to assign the patients to the intervention and control groups randomly. 
The randomization was performed by having the patient pick one of the two cards inside opaque sealed envelopes. Patients who picked card (A) entered the intervention group, and those who picked card (B) entered the control group. To prevent contact between the patients of the two groups, the sampling was conducted based on the HD schedules, as the patients of the control and the intervention groups were sampled on odd and even days, respectively. Before the intervention, the laboratory values were collected from patients’ medical records, and all participants filled in the demographic questionnaire and the ESRD-AQ. Data collection for each participant lasted about 30 min. The researcher continued recruitment and randomization until reaching the target sample size ( n = 66). Sampling was conducted from 9 April to 27 November 2019. Among 80 eligible patients, eight patients declined to participate in the study, three patients did not meet the inclusion criteria, and three patients were transferred to another health facility due to underlying health conditions. The flow diagram of subjects entering the study groups is shown in Fig. . Patients in the intervention group received the patient education program on diet, medication use, and fluid restrictions using a patient education booklet. They were also asked to provide their contact information, i.e., phone number (mobile/landline number). They were then requested to change their phone language to Farsi and informed of how to use the Short Message Service (SMS). A test text message was sent to the participants’ mobile phones to ensure that text messages were delivered to them. The message delivery report was also turned on for all participants. In addition, they were recommended to contact the researcher in the case of any question or problem. Mobile phones and landline telephones were used to perform tele-nursing and the follow-up program. The intervention lasted for 3 months. The researcher contacted the participants twice a week using the telephone. The researcher telephoned the participants at their convenience, and each phone call lasted approximately 20 min. In the case of any problem, the participants were provided with effective solutions by the researcher. The text messages concerned patient education topics on diet, medication use, and fluid restrictions. The participants received a text message every day, so that a total of 90 text messages were sent to them over the period of 3 months. Each message did not exceed 160 characters, and the message was marked when it was delivered to the patient. In the case that more than two messages were not delivered, the participant would be contacted using the landline number to check and receive a new mobile number. If the new mobile number also had a problem with receiving text messages, the participant would be excluded from the study. Participants in the control group received only routine care. Routine care in the dialysis unit includes dialysis treatment and answering patients’ questions during and after the treatment; there was no other educational intervention in the dialysis unit. All the participants completed the ESRD-AQ again immediately, 1 month, and 3 months after the intervention in the hospital. We prepared the same educational content, including diet, medication use, and fluid restrictions, as a self-study booklet and handed it to the participants in the control group after the intervention finished and data were collected. 
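A minimal sketch of the delivery bookkeeping described above, assuming the rules as stated: undelivered messages are counted, more than two undelivered messages trigger a landline contact to obtain a replacement mobile number, and a repeat failure after replacement leads to exclusion. Class, field, and method names are illustrative; the study tracked delivery via the phone's delivery reports rather than software.

```python
from dataclasses import dataclass

@dataclass
class FollowUpRecord:
    """Per-participant bookkeeping for the daily SMS follow-up.

    Names are illustrative, not from the study protocol.
    """
    participant_id: str
    consecutive_undelivered: int = 0
    number_replaced: bool = False
    excluded: bool = False

    def record(self, delivered: bool) -> None:
        if delivered:
            self.consecutive_undelivered = 0
            return
        self.consecutive_undelivered += 1
        if self.consecutive_undelivered > 2:
            if not self.number_replaced:
                # More than two undelivered messages: contact the
                # participant by landline and register a new mobile number.
                self.number_replaced = True
                self.consecutive_undelivered = 0
            else:
                # The replacement number also fails: exclude from the study.
                self.excluded = True

rec = FollowUpRecord("HD-001")
for delivered in [True, False, False, False]:
    rec.record(delivered)
print(rec.number_replaced, rec.excluded)  # True False
```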
The laboratory values were examined and collected again from the medical records. The CONSORT 2010 checklist was used to ensure quality reporting in the present study (see Additional file ). All data obtained from the 66 participants were entered into the analysis. The Kolmogorov–Smirnov test was used to examine the normality of the data distribution and confirmed that the data were normally distributed. A researcher who was blinded to the data conducted the analysis. All data were entered into IBM SPSS Statistics for Windows, version 25.0 (IBM Corp., Armonk, N.Y., USA). Data were analyzed using descriptive and inferential statistics. In descriptive statistics, we used frequency and percentage for analyzing qualitative variables and the mean and standard deviation for analyzing normally distributed quantitative variables. In inferential statistics, we utilized the chi-squared (χ2) test and Fisher’s exact test to assess the homogeneity of the groups. The independent-samples t-test was used to compare the data between the two groups. We also used the paired-samples t-test to compare mean scores within the groups. We used repeated measures ANOVA to study the changes in the mean score of treatment adherence across the four time points of measurement. Analysis of covariance (ANCOVA) was used to adjust for confounding variables. Demographic characteristics Treatment adherence to HD Medication adherence Fluid restrictions Diet recommendations Total treatment adherence Laboratory values The results showed that the overall mean ages of the participants in the control and intervention groups were 30 ± 9.5 and 27 ± 11.5 years, respectively. Moreover, in the control group (57.6%), as in the intervention group (54.5%), the majority of participants were male, and 87.9% and 97% of the participants in the control and intervention groups were married, respectively. In addition, in both groups, 69.7% of the participants lived in the city. In terms of education level, more than half of the participants (54.5%) had a high school diploma or lower in the control group, compared with only 39.4% in the intervention group. In addition, in both groups, the most common dialysis vintage was 2–5 years, and the most frequent comorbidities in the control and intervention groups were hypertension (21.2%) and diabetes (21.2%), respectively (Table ). The results also indicated that there was no statistically significant difference between the two groups in terms of gender, marital status, education, residency, and occupation ( p > .05 ). However, in terms of age, the difference between the two groups was statistically significant ( p = .038 ) (Table ). Based on the ANCOVA results, after adjusting for the effects of age, education level, occupational status, and dialysis vintage, the difference in the scores of treatment adherence and its dimensions remained significant between the two groups. Thus, the increase in the mean score of treatment adherence was attributable to the intervention. The results of the independent-sample t-test showed that there was no statistically significant difference in the mean score of HD attendance between the two groups before the intervention ( P = 0.269). The results of repeated measures ANOVA indicated a significant difference in the mean score of HD attendance across the four time points of measurement. Despite a slight reduction in the mean score of HD attendance from before the intervention to immediately after the intervention, the Bonferroni post-hoc test indicated that the difference was not significant ( p = 0.671) (Table ). Based on the results of the independent-sample t-test, the mean score of medication adherence was not significantly different between the two groups before the intervention ( P = 0.466). 
However, this difference became statistically significant after the intervention. The results of repeated measures ANOVA also indicated a significant difference in medication adherence across the four time points of measurement. Despite a slight decrease in the mean score of medication adherence from before the intervention to immediately after the intervention, the Bonferroni post-hoc test indicated that the difference was not significant ( p = 0.541) (Table ). The independent-sample t-test results also showed that the mean scores of adherence to fluid restrictions were not significantly different between the two groups before the intervention ( P = 0.247). However, the difference was statistically significant immediately, 1 month, and 3 months after the intervention ( P < 0.001). Based on the results of repeated measures ANOVA, there was a significant difference in adherence to fluid restrictions across the four time points of measurement. Despite a slight reduction in the mean score of adherence to fluid restrictions from before the intervention to immediately after the intervention, the Bonferroni post-hoc test revealed that the difference was not significant ( p = 0.643) (Table ). The results of the independent-sample t-test revealed that the mean scores of adherence to diet recommendations were not significantly different between the two groups before the intervention ( P = 0.136). However, in this term, a significant difference was found immediately, 1 month, and 3 months after the intervention ( P < 0.001). In terms of adherence to diet recommendations, the results of repeated measures ANOVA indicated a significant difference across the four time points of measurement. Despite a slight decrease in the mean score of adherence to diet recommendations from before the intervention to immediately after the intervention, the Bonferroni post-hoc test showed that the difference was not significant ( p = 0.431) (Table ). Eventually, there was no significant difference in the mean score of overall treatment adherence between the two groups before the intervention ( P = 0.436); however, the difference was shown to be significant after the intervention. In this regard, repeated measures ANOVA results indicated that there was a significant difference in the mean score of overall treatment adherence across the four time points of measurement in the intervention group ( P < 0.0005) compared to the control group ( P = 0.076) (Table ). Based on the results of the independent-samples t-test, there was a significant difference in the mean scores of laboratory values between the two groups after the intervention, except for the level of serum sodium ( P = 0.130). The paired-samples t-test also showed a significant difference in all laboratory values before and after the intervention in the intervention group, whereas in the control group, the only significant differences were found in the mean scores of serum calcium ( P = 0.005) and iron ( P < 0.001) (Table ). In the present study, no ESRD side-effects were reported. Tele-nursing is becoming a new method of providing nursing care, and it has been increasingly used as an effective approach to chronic disease care. The results of this study indicated that the patient education program and nurse-led telephone follow-up could improve treatment adherence in the four dimensions of HD attendance, medication use, fluid restrictions, and diet recommendations in HD patients. Furthermore, it was found that the intervention program and the telephone follow-up improved the mean values of laboratory parameters, i.e., serum potassium, calcium, creatinine, phosphate, albumin, iron, BUN, hemoglobin, nPCR, and Kt/V. 
Therefore, we need to make changes in terms of patient education strategies and utilize effective methods in this regard for ESRD patients. Overall, different factors affect adherence to the therapeutic regimen in patients undergoing HD. In this regard, factors such as socioeconomic status, health beliefs, patients’ attitudes towards treatment, and cultural differences are notable. Adherence to dietary recommendations, fluid restrictions, and the medication regimen is not easy. On the other hand, neglecting the therapeutic regimen may lead to serious consequences, and a low education level can leave patients lacking knowledge about the disease process and the related therapeutic regimen. In line with the results of our study, Kamrani et al. (2015) showed that patient education and nurse-led telephone follow-up (tele-nursing) could improve treatment adherence in patients with acute coronary syndrome . Alikari et al. (2015) used a counseling intervention and active participation in clinical decision-making to improve treatment adherence. They revealed that patients’ active involvement in the educational program improved their awareness and perception, which led to a higher level of treatment adherence in HD patients . The improvement in treatment adherence results in a better physical condition and treatment process in HD patients. A study by Durose et al. (2004) found that the use of patient education techniques can motivate HD patients to comply with dietary restrictions, which in turn can lead to weight loss in these patients. Patient education about dietary restrictions can reduce the risk of physical complications, improve patients’ quality of life, and increase life expectancy by 20 years or more . Some studies have shown that educational interventions increase patients’ knowledge and alter patients’ attitudes towards the disease, through which patients can adopt a better cooperative attitude towards adherence to the therapeutic regimen . The nurse-led telephone follow-up not only improved the efficiency of patient education but also increased the duration of adherence to the therapeutic regimen. Since patients easily forget medical advice, repetition of the orders and key points through nurse-led telephone follow-up enables them to better memorize the therapeutic regimen. Nurses can examine the patient’s needs and then refer him/her to a health professional, if necessary. Therefore, care services can be provided based on the patients’ needs . In line with our results, previous studies have reported a moderate level of adherence to fluid and dietary restrictions in HD patients , which indicates the educational needs of HD patients. Therefore, nurses can play an important role in improving HD adherence by providing training through new methods, especially tele-nursing. Kreps et al. (2011) showed that motivational messages increased medication adherence in patients with chronic diseases , which is consistent with the results of the present study. Messages sent by the nurse to remind patients of their daily medication intake are important and thus improve adherence to the medication regimen. Karam et al. (2017) showed that there was no statistically significant relationship between treatment adherence and serum phosphorus in patients on HD . Our study also showed that there was no statistical relationship between patient education and nurse-led telephone follow-up (tele-nursing) and the serum sodium level. 
This indicates that the ESRD-AQ may not be an appropriate tool to assess the relationship between treatment adherence and some biochemical indices in patients on HD. Previous studies have indicated that patient education programs with telephone follow-up have positive effects on patients with chronic conditions, in that they improve and alter health behavior and promote cooperative attitudes in these groups of patients. Furthermore, patients who received nurse-led telephone follow-up showed better adherence to therapeutic regimens and had more physical activity and notable changes in their behavior compared to those who only received routine care . The results of the above studies are consistent with the results of our study. Beaver et al. (2009), in a study on patients with breast cancer, concluded that the ineffectiveness of telephone follow-up might be due to the effects of the disease on the patient’s ability to accept his/her condition . Hung et al. (2014) conducted a study on the impact of telephone-delivered nutrition and exercise counseling on nutritional status, body composition, and quality of life in patients undergoing peripheral blood stem cell transplantation. Their results indicated that telephone-delivered counseling alone might not decrease hospital readmissions and that other methods of patient education are needed . The results of the studies by Beaver et al. (2009) and Hung et al. (2014) are inconsistent with the results of our study. In addition to the nurse-led telephone follow-up, we applied patient education booklets to provide further education in this study. It is recommended to provide complete education for patients using educational pamphlets, booklets, videos, and other tools before follow-up. All the points that the patient needs to know, including disease progress, complications, the treatment process, medication, and diet, should be covered. The patient should acquire in-depth knowledge of his/her condition. In this regard, telephone follow-up and home visits are recommended to be used along with patient education. This study highlights the importance of patient education with a nurse-led follow-up program in patients with chronic diseases, especially HD patients. Some patients have poor treatment adherence due to a lack of knowledge and perception of the disease. Accordingly, the patient education program and nurse-led follow-up are recommended to improve patients’ perception and knowledge of their chronic conditions. Moreover, frequent follow-ups can increase treatment adherence and encourage patients to follow medical advice at home, although the necessity of follow-up decreases as patients gain more knowledge of their condition. Besides, the patients should not be left on their own, and regular follow-ups should be a part of home care, as even well-informed patients have periods of neglecting medical advice when they are on long-term medication for their condition . One of the limitations of this study was the short follow-up period, due to funding constraints and the size of the study groups. Another limitation was the small sample size, which could affect the results, as there was only one dialysis center serving Urmia city. Therefore, it is suggested that future studies in this field be conducted with larger sample sizes. The use of a convenience sampling method in this study was also a major limitation because it is associated with a significant risk of selection bias. 
It is recommended that future studies use other sampling methods. Moreover, since this study was not a multi-center trial, the generalizability of the results is limited. Some patients failed to attend the HD sessions as scheduled due to financial difficulties, and this issue could affect treatment adherence in these patients. Differences in the mental and spiritual characteristics, motivations, and personality traits of the participants might have affected their perception and knowledge and, in turn, their treatment adherence. The above limitations were beyond the researchers’ control. Another limitation was that some patients did not answer the first telephone call, so the researchers had to call them again, which made the intervention time-consuming. We concluded that the patient education program and nurse-led telephone follow-up could improve HD adherence and modify health behavior in patients with ESRD by increasing their knowledge of their chronic conditions. The mean score of overall treatment adherence in the intervention group increased to its highest level at the final assessment. This indicates that continuous follow-up improves treatment adherence in ESRD patients. Therefore, the results support the necessity of continuous care and nurse-led follow-up for HD patients. Additional file 1. The list of items related to the four dimensions of ESRD-AQ including hemodialysis attendance, medication use, fluid restrictions, and diet recommendation. Additional file 2. CONSORT 2010 checklist of information to include when reporting a randomised trial*.
Taste preference changes throughout different life stages in male rats
80c87c3c-de3c-4d3c-ab39-90f61b4d1a99
5526549
Physiology[mh]
Humans and animals normally prefer sweet, salty, and umami tastes to sour and bitter tastes. However, taste preferences are easily changed by postnatal factors, including learning, environment, and nutritional status. Furthermore, aging itself may result in alterations in taste preference, as aging is generally accompanied by certain changes in bodily tissues and functions. Evidence suggests that taste sensitivity to sucrose is lower in older people [ – ]. Dietary and energy requirements change throughout the various life stages [ – ], and the basal metabolic rate decreases with age in a near-linear manner . The observed decrease in taste sensitivity with aging could result in older people eating more foods with stronger flavor and, possibly, higher calories; this could contribute to the development of lifestyle-related diseases. A better understanding of age-related changes in taste preference may therefore be important for disease prevention. Aged (90 weeks old) Fischer-344 rats showed significantly lower intake of food and sucrose than those aged 20–35 weeks . Decreases in taste sensitivity were observed in male Sprague-Dawley rats aged 28 months (112 weeks) . These studies suggest the possibility of altered taste thresholds or taste preferences with aging in rodents. The animals in these studies were given taste stimuli in both younger and older age periods as part of the within-subject experimental design. Therefore, it is possible that not only alterations in physiological function but also differences in consumption experience caused the differences in taste preferences across age groups. On the other hand, two reports investigated the differences in taste preference between two separate groups (Sprague-Dawley rats aged 5–12 weeks and 21–22 months; B6C3F1/J mice aged 10 and 18 weeks), in which the older animals were naïve to the taste stimuli until reaching the experimental age . These studies revealed a significant reduction of umami preference in the older rats , and of sucrose preference in the older mice . Thus, it is likely that taste preferences decrease with aging independent of consumption experience. Humans and animals pass through several major stages in their lifetimes, including weaning, reproduction, and old age. Transitions between stages are accompanied by changes in dietary and energy requirements, as well as alterations in hormone secretion. These transitions raise the possibility that age-related changes in taste preferences are stepwise rather than abrupt. In order to elucidate whether aging induces graded shifts in taste preference, we assessed differences in the consumption of taste solutions (sucrose, saccharin, NaCl, HCl, quinine HCl, and monosodium glutamate) among five age-separated groups (juvenile, young adult, adult, middle-aged, and old-aged), which were naïve to the taste stimuli before reaching the experimental age. Although many studies have examined the alteration of taste preferences with aging in humans and animals, the underlying mechanisms remain unclear. Gustatory information is transmitted from the tongue to the central nervous system via the taste nerves, including the chorda tympani nerve, the glossopharyngeal nerve, the superior laryngeal nerve, and the greater superficial petrosal nerve in the oral cavity. Previous studies have shown that taste experience during development influences the function of the chorda tympani nerve [ – ]. 
These developmental changes in the function of the peripheral gustatory system suggest that aging results in altered taste nerve activity during different life stages. Therefore, using electrophysiology, we examined the effect of aging on the responses of the chorda tympani nerve (which transmits gustatory information from the anterior tongue to the brainstem) to taste stimuli. Age-related changes in taste preference using a 48-h two-bottle test Age-related changes in preference for low and high concentrations of taste solutions using a 48-h two-bottle test Electrophysiological measurements of the responses of the chorda tympani nerve to taste solutions Statistical analysis Normalized food intake was calculated by dividing the 24-h food intake by BW (per 100 g) and analyzed using one-way analysis of variance (ANOVA) and post-hoc Tukey HSD tests. The data from the electrophysiological experiments were also analyzed using one-way ANOVA. The previous studies investigating drinking behavior generally used 24-h intake as the behavioral index. To enable a comparison of our results with the prior studies, we calculated the 24-h intake volume as half of the 48-h intake volume. The taste solution preference ratios in the first experiment were calculated by dividing the volume of taste solution ingested by the total intake of DW + taste solution, and analyzed using one-way ANOVA with Tukey's HSD post-hoc tests. The preference ratios for higher concentration taste solutions were calculated by dividing the intake of the higher concentration solution by the total intake of higher + lower concentration solutions, and analyzed using one-way ANOVA with Duncan’s post-hoc tests. We also analyzed whether the preference ratios were significantly different from chance level (0.5) using an independent t-test. The difference between net intake of DW and each taste solution for each age group was analyzed by paired t-test. All statistical analyses were performed using Statistica software (StatSoft, Inc., Tulsa, OK, USA). A P value < 0.05 was considered significant. A 48-h two-bottle test, a standard behavioral test for taste preference, was conducted on 46 male Sprague-Dawley rats (CLEA Japan, Inc., Japan) between 3 and 72 weeks of age and weighing 95–1050 g. Rats generally have a mean lifespan of 2–3 years that includes two critical time-points: the end of weaning and reproduction. Therefore, we divided the rats into five groups: juvenile (3–6 weeks, just after weaning, n = 9), young adult (8–11 weeks, early reproductive phase, n = 8), adult (17–20 weeks, late reproductive phase, n = 9), middle-aged (34–37 weeks, end of reproduction, n = 10), and old-aged (69–72 weeks, n = 10). We did not use rats over 74 weeks of age because they carry a high risk of spontaneous disease. The experience of consuming taste solution is likely to have an effect on subsequent ingestive behaviors. To avoid this, a different set of rats was used for each age group. The juvenile group was purchased at 3 weeks. The young adult and the adult groups were purchased one week before the behavioral experiments. As rats of more than 30 weeks old were not available for purchase, the middle-aged and the old-aged groups were purchased at 30 weeks and raised until the appropriate age for testing in the animal-breeding facilities of the faculty. All rats were allowed food pellets (MF, Oriental Yeast, Osaka, Japan) and distilled water (DW) ad libitum, and handled by the experimenters every day before attaining the appropriate age. 
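As a concrete illustration of the calculations described above, the sketch below computes the normalized food intake and the taste-solution preference ratio, then runs a one-way ANOVA on the ratios across age groups. The intake numbers are invented for illustration, and the analysis here uses SciPy, whereas the study itself used Statistica.

```python
from scipy import stats

def normalized_food_intake(food_g: float, body_weight_g: float) -> float:
    """24-h food intake expressed per 100 g of body weight."""
    return food_g / (body_weight_g / 100.0)

def preference_ratio(taste_ml: float, water_ml: float) -> float:
    """Volume of taste solution ingested divided by total (taste + DW) intake."""
    return taste_ml / (taste_ml + water_ml)

print(normalized_food_intake(25.0, 450.0))  # ~5.56 g per 100 g BW

# Hypothetical 24-h intakes (taste solution ml, DW ml) per rat, by group
juvenile = [preference_ratio(t, w) for t, w in [(30, 5), (28, 6), (32, 4)]]
adult    = [preference_ratio(t, w) for t, w in [(40, 8), (37, 9), (42, 7)]]
old_aged = [preference_ratio(t, w) for t, w in [(20, 18), (22, 20), (19, 17)]]

# One-way ANOVA on the preference ratios across the age groups
f_stat, p_value = stats.f_oneway(juvenile, adult, old_aged)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```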
Animals were housed individually in plastic cages suitable for their body mass: 225 × 338 × 140 mm for rats 3–11 weeks old, and 345 × 403 × 177 mm for rats 17–72 weeks old. Cages were changed once a week. Since environmental changes could alter the animals’ consumption behavior, the taste stimulus was presented after at least 60 hours of acclimation in the new plastic cage. The ambient temperature was maintained at 23°C in a 12:12 h light/dark cycle (lights on between 8:00 and 20:00). All animal care and experimental procedures conformed to the National Institutes of Health “Guide for the Care and Use of Laboratory Animals” and were approved by the Osaka Dental University Animal Care and Use Committee (Permit Number: 12–02045). After the acclimation, all rats were presented with two bottles in their home cages: one containing DW and the other containing a taste solution. Each bottle consisted of a 100-ml plastic syringe (JS-S00S, JMS Co., Ltd, Tokyo, Japan) and a stainless steel spout (TV-25, CLEA, Tokyo, Japan). The rats could freely access both bottles and chow for 48 h. To avoid positional preference, the positions of the bottles were switched 24 h after the start of the presentation. We recorded 48-h fluid consumption by measuring the weight of the bottles. The taste solutions were sucrose (0.3 and 0.5 M), sodium saccharin (saccharin, 5 mM), NaCl (0.1 and 0.3 M), QHCl (0.03 and 0.3 mM), MSG (0.1 M), and HCl (10 and 50 mM). To exclude the possibility of order effects, the taste solutions were presented in pseudorandom order, without grouping similar solutions by concentration. In addition, the presentation order differed among rats. The order was one of the following: 1) 0.3 M sucrose, 0.1 M NaCl, 0.3 mM QHCl, 5 mM saccharin, 50 mM HCl, 0.1 M MSG, 0.03 mM QHCl, 0.5 M sucrose, 0.3 M NaCl and 10 mM HCl; 2) the reverse order of 1); 3) 0.1 M MSG, 50 mM HCl, 5 mM saccharin, 0.3 mM QHCl, 0.1 M NaCl, 0.3 M sucrose, 0.03 mM QHCl, 0.5 M sucrose, 0.3 M NaCl and 10 mM HCl. We spent 4 weeks (e.g., 3–6 weeks of age in the juvenile group) completing the presentation of all 10 taste solutions. It was possible that the differences in the consumption of the taste solutions and water across the life stages were due to changes in taste thresholds. In order to address this question, we investigated the intake of low and high concentrations of taste solutions in the second experiment, which included a new series of 38 male Sprague-Dawley rats (CLEA Japan, Inc., Japan) aged 3–72 weeks and weighing 125–980 g. We divided the rats into five groups as in the first experiment: juvenile (n = 8), young adult (n = 8), adult (n = 7), middle-aged (n = 7), and old-aged (n = 8). The housing conditions were the same as described above. All rats were presented with two bottles containing the same taste solution for 48 h, one at a high concentration and the other at a low concentration. The taste solutions were 0.3 M vs. 0.5 M sucrose, 5 mM vs. 50 mM saccharin, 0.03 mM vs. 0.3 mM QHCl, 0.1 M vs. 0.3 M NaCl, 0.1 M vs. 0.3 M MSG, and 10 mM vs. 50 mM HCl. The taste solutions were presented in pseudorandom order. The order was one of the following: 1) 0.3 and 0.5 M sucrose, 0.1 and 0.3 M NaCl, 5 mM and 50 mM saccharin, 0.1 M and 0.3 M MSG, 0.03 mM and 0.3 mM QHCl, and 10 mM and 50 mM HCl; 2) the reverse order of 1); 3) 5 mM and 50 mM saccharin, 0.1 and 0.3 M NaCl, 0.3 and 0.5 M sucrose, 0.1 M and 0.3 M MSG, 0.03 mM and 0.3 mM QHCl, and 10 mM and 50 mM HCl. 
Rats from this test were subsequently used in the electrophysiological experiments. The rats were anesthetized with an intraperitoneal injection of 60 mg/kg sodium pentobarbital (Somnopentyl ® ; Kyoritsu Seiyaku, Tokyo, Japan). Supplementary injections of 0.3 g/kg urethane were administered as needed to maintain a surgical level of anesthesia. A tracheal cannula was implanted and the animal properly secured within a head holder. The chorda tympani nerve was cut near its entrance into the tympanic bulla and dissected free from the underlying tissues. An indifferent electrode was positioned nearby in the wound. The whole-nerve activity was amplified, displayed on an oscilloscope, and monitored using an audio amplifier. The amplified signals were passed through an integrator with a time constant of 0.3 s and displayed on a strip chart recorder. After confirmation of stable recording, we applied 5 ml of taste solution to the rat’s tongue for 30 s. The rat’s tongue was rinsed with DW after completion of taste stimulation. We measured the entire integrated response during the stimulation as the whole-nerve response. In electrophysiological experiments, endogenous or exogenous factors may produce individual differences in the recording of neural activities. Therefore, we normalized the taste responses by dividing the magnitudes of the responses to each taste stimulus by the response to 0.1 M NH₄Cl, which is generally used as a standard stimulus in electrophysiological recordings of peripheral taste nerves. Age-related changes in taste preference using a 48-h two-bottle test Age-related changes in taste preference for low or high concentrations of taste solutions Electrophysiological experiments with the chorda tympani nerve Rats gradually grow larger from post-weaning until the end of reproduction. Because body size is closely related to nutritional requirements, we assessed differences in BW and food and fluid consumption among the different age groups ( ). Rats in the old-aged group weighed significantly more than rats in the other groups ( P < 0.05). The net food intake of the old-aged group was less than that of the young adult and adult groups ( P < 0.001 for both) but was significantly greater than that of the juvenile group ( P < 0.05). However, the normalized food intake values revealed lower food intake relative to BW in the old-aged rats than in the juvenile rats ( P < 0.001). shows how the preference ratio for taste stimuli differed among age groups. The juvenile and young adult groups exhibited similar preferences for several taste stimuli, each drinking much more of the sweet and umami solutions than DW and avoiding bitter and strongly sour tastes. In contrast, the middle-aged group demonstrated different preferences, with lower preference ratios for sweet and umami tastes and higher preference ratios for bitter tastes compared to the younger three groups. The old-aged group showed lower preference ratios for 0.3 M sucrose and 0.1 M MSG (but not 0.5 M sucrose), and a higher preference ratio for 0.03 mM QHCl, compared to the juvenile group. One-way ANOVA revealed significant main effects of age with regard to the following taste solutions: 0.3 M sucrose (F(4, 35) = 2.91, P < 0.05), 0.5 M sucrose (F(4, 35) = 3.19, P < 0.05), saccharin (F(4, 35) = 3.16, P < 0.02), 0.03 mM QHCl (F(4, 35) = 5.64, P < 0.01), and MSG (F(4, 35) = 3.70, P < 0.05). 
Post-hoc analyses demonstrated a significantly lower preference ratio for 0.3 M sucrose in the old-aged group than in the young adult group ( P < 0.05). The preference for 0.5 M sucrose in the middle-aged group was significantly lower than that in the young adult and old-aged groups ( P < 0.05). The older two groups (middle-aged and old-aged) exhibited significantly lower preference ratios for 0.1 M MSG than the juvenile group ( P < 0.05 for both). The middle-aged group had significantly greater preference ratios for 0.03 mM QHCl compared to the three younger groups (middle-aged vs. juvenile, P < 0.01; vs. young adult and adult, P < 0.05), and the old-aged group also had a greater preference ratio than the juvenile group (old-aged vs. juvenile, P < 0.05). The preference ratios indicate which solution the animals preferred but do not indicate the net intake of solutions. As shown in , the rats exhibited age-dependent decreases in food intake relative to BW. As solution intake volumes typically correlate with pellet intake, it seemed likely that the different pellet consumption behaviors between age groups influenced the age-related differences in preference ratios. Therefore, in , we show the net intake of DW and taste solutions. The juvenile, young adult, and adult groups consumed significantly more of the sucrose, saccharin, and MSG solutions than DW (juvenile, P < 0.001 for all solutions; young adult, P < 0.001 for sucrose and saccharin, P < 0.01 for MSG; adult, P < 0.001 for 0.5 M sucrose and saccharin, P < 0.01 for 0.3 M sucrose, P < 0.05 for MSG). The young adult group also consumed more 0.1 M NaCl than DW ( P < 0.01). The adult group consumed significantly less 0.3 M NaCl than DW. On the other hand, the younger three age groups (juvenile, young adult, and adult) consumed less QHCl and 50 mM HCl than DW (juvenile, P < 0.001 for all solutions; young adult, P < 0.001 for 0.3 mM QHCl and 50 mM HCl, P < 0.01 for 0.03 mM QHCl; adult, P < 0.001 for 0.3 mM QHCl and 50 mM HCl, P < 0.05 for 0.03 mM QHCl). The juvenile group also consumed significantly less 10 mM HCl than DW ( P < 0.01). In contrast to the younger three groups, the middle-aged and old-aged groups did not consume significantly more 0.3 M sucrose and MSG, or less 0.3 mM QHCl, than DW. In this behavioral experiment, the rats were simultaneously presented with taste solutions at low and high concentrations. Even when a taste is appetitive (e.g. sweet, salty, or umami), a taste stimulus that is too strong will not be pleasant. It was hypothesized that older animals would exhibit a greater preference for higher concentrations if aging elevated the taste threshold. Our results demonstrated age-related differences in the preference for higher concentration taste solutions ( ). Only the oldest age group exhibited a much greater preference for the higher concentration solutions than the lower concentration solutions. One-way ANOVA showed a main effect of group for the following taste solutions: sucrose (F (4, 29) = 2.86, P < 0.05), QHCl (F (4, 26) = 4.12, P < 0.05), NaCl (F (4, 27) = 4.38, P < 0.01), and MSG (F (4, 27) = 3.25, P < 0.05). Post-hoc analysis revealed that the old-aged group significantly preferred the higher concentration solutions of sucrose (vs. juvenile, adult, and middle-aged: P < 0.05), QHCl (vs. juvenile, adult, and young adult: P < 0.01, P < 0.01, and P < 0.05, respectively), NaCl (vs. juvenile, young adult, and middle-aged: P < 0.01, P < 0.01, and P < 0.05, respectively), and MSG (vs. 
juvenile, young adult, and middle-aged: P < 0.01, P < 0.05, and P < 0.05, respectively). The preference ratio of the old-aged group did not differ from the young adult group for sucrose, or from the adult group for NaCl and MSG. However, the preference ratios of the young adult and adult groups were approximately 0.5 (chance level). These findings indicate that only the old-aged group preferred the high concentrations of sucrose, QHCl, NaCl, and MSG to the low concentration solutions. Based on previous studies, we used 0.1 M NaCl, 0.1 M MSG, 50 mM saccharin, 0.3 M sucrose, 0.3 mM QHCl, 20 mM QHCl, and 50 mM HCl in the electrophysiological experiments. We used the higher concentration of QHCl because the responses of the chorda tympani nerve to the lower concentration of QHCl have been reported to be very small . presents examples of the chorda tympani nerve gustatory responses, clearly showing how the waveforms differed among the taste stimuli. However, significant group differences are difficult to observe. To compare the magnitude of the responses, we show the normalized response magnitude in . One-way ANOVA revealed no main effects of group for any of the taste stimuli. These results suggest that aging does not affect gustatory processing in the peripheral nervous system. In the present study, we examined the taste preferences and gustatory responses of the chorda tympani nerve in male Sprague-Dawley rats at different life stages. The behavioral experiments revealed that the old-aged group showed significantly lower preference for 0.3 M sucrose than the young adult group, and for 0.1 M MSG than the juvenile group. In contrast, the old-aged group demonstrated significantly higher preference for 0.03 mM QHCl than the juvenile group. The preference ratio for 0.1 M MSG in the middle-aged group was significantly lower than that in the juvenile group. The middle-aged group also displayed significantly higher preference for 0.03 mM QHCl than younger groups (juvenile, young adult and adult groups). When simultaneously presented with different concentrations of the same taste solution, only the old-aged rats drank larger volumes of the higher concentrations of sucrose, QHCl, NaCl, and MSG than the lower concentration solutions. However, the electrophysiological experiments revealed no significant differences between the different age groups with regard to the responses of the chorda tympani nerve, which is one of the peripheral taste nerves. With simultaneous presentation of taste solution and DW, the old-aged group had a preference ratio of approximately 0.5 for 0.3 M sucrose, indicating that these rats drank similar volumes of 0.3 M sucrose and DW ( ). The old-aged group also tended to have a lower preference ratio for saccharin. On the other hand, the old-aged group drank a much greater volume of 0.5 M sucrose than DW. These results suggest that the old-aged group had difficulty discriminating between DW and low-concentration solutions of sucrose or saccharin, indicating an age-related decline in sensitivities for the sweet taste. As the old-aged group had a decreased preference for normally palatable (sweet and umami) taste and greater preference for aversive (bitter) taste compared to the other groups, the results indicate the possibility that the older rats could not detect the taste substances in the fluids. As shown in Figs and , there were clear differences in food and water consumption among the groups, indicating that aging affects water and energy requirements. 
The lower food intake in the middle-aged (34–37 weeks) and old-aged (69–72 weeks) rats suggests an alteration in ingestive behavior by the end of reproduction. Therefore, we adopted another technique in which the rats were simultaneously presented with high and low concentrations of taste solutions. If aging causes deficits in taste detection, the old-aged rats may not be able to discriminate between high and low concentrations. The comparison between low and high concentrations of the same taste solutions revealed that the juvenile, adult, and middle-aged groups drank less 0.5 M sucrose than 0.3 M sucrose. Though sucrose is normally a palatable taste stimulus, animals exhibit a decreased preference for sucrose at high concentrations . However, in the present study, the old-aged rats showed a higher preference ratio for 0.5 M than for 0.3 M sucrose. These results further support the reduced sweet taste sensitivity in the older rats compared to the younger rats. As shown in , the old-aged group had a lower preference ratio for 0.1 M MSG than the younger groups. Furthermore, only the old-aged group preferred the high (0.3 M) concentration of MSG to the low (0.1 M) concentration. Miura et al. reported reduced umami preference in aged Sprague-Dawley rats (21–22 months) compared to a young adult group (5–12 weeks). The umami receptor is a heterodimer of taste receptor type 1 members 1 and 3 (T1R1/T1R3), whereas the sweet receptor comprises taste receptor type 1 members 2 and 3 (T1R2/T1R3) . Although there are similarities in the peripheral transduction, Sprague-Dawley rats can reportedly discriminate between umami taste and some sweet tastes . These results suggest that the old-aged group also had a reduced sensitivity for MSG compared to the other age groups. In addition, it has been suggested that amino acid receptors exist in the duodenum and intestine. Niijima showed an increased discharge rate of the gastric branch of the vagus nerve upon intestinal stimulation with isotonic MSG, but not NaCl, solution. These results suggest that long-term presentation of MSG likely produces postingestive effects. Since aging seems to cause decreased function of the duodenum and intestine, the low MSG preference in the old-aged group might be due to aging-induced changes in postingestive effects. We also found that the old-aged rats preferred higher concentrations of NaCl and MSG to lower ones. If declined postingestive effects alone lowered the taste preferences for NaCl and MSG, the old-aged rats should have avoided the higher concentrations of NaCl and MSG. Therefore, it is likely that aging-induced changes not only in postingestive effects but also in the functions of the gustatory nerves other than the chorda tympani and of the central nervous system result in the altered taste preferences for NaCl and MSG. Animals and humans normally dislike bitter tastes. However, the middle-aged and old-aged groups in our study had higher preference ratios (> 0.5) for the 0.03 mM QHCl solution compared to the other age groups ( ), indicating that the two oldest groups drank substantially more QHCl than DW. The total intake volumes of 0.03 mM QHCl and DW in the older groups did not significantly differ from the intake volumes in the younger three groups ( ), suggesting that the higher preference ratios for the low concentration QHCl solution in the older groups were not due to abnormal fluid consumption. 
When the older groups were presented with the high concentration QHCl solution, they had preference ratios of < 0.5. Our findings suggest that the older rats had a low preference for bitter tastes, as well as for sweet and umami tastes. Even with a reduced preference for bitter tastes, the older rats were still expected to drink less of the high concentration QHCl solution, which is normally an aversive taste stimulus . Surprisingly, the preference ratio for the higher relative to the lower concentration solution of QHCl in the old-aged group was > 0.5 ( ), indicating that rats in the old-aged group preferred the 0.3 mM to the 0.03 mM QHCl solution. On the other hand, although the middle-aged rats preferred 0.03 mM QHCl to DW ( ), they did not show a preference for the higher concentration of QHCl. These results suggest that rats of 34 weeks or older (middle-aged and old-aged rats) have a reduced ability to detect bitterness or to perceive quinine as unpleasant. Moreover, it is assumed that the old-aged rats are unable to detect differences in the concentration of the taste solutions. We expected our results to show age-related changes in the preference for HCl, which is normally an aversive taste stimulus. A previous study reported a tendency for aged (28 months old) Sprague-Dawley rats to have less preference for citric acid than younger rats . However, we found no significant age-related differences in the preference ratios for HCl, both vs. DW ( ) and when two concentrations were presented simultaneously ( ). Sour taste is thought to arise from the action of protons on taste receptor cells, impacting taste transduction , though the precise mechanism is controversial. The HCl concentration used in our study may have been too strong a stimulus to detect age-related differences. A prior study using Fischer 344 rats demonstrated substantial chorda tympani nerve responses to taste stimuli even at 30 months of age . The number of taste buds and taste bud diameter did not correlate with age in rats . These reports support the idea that the altered taste preference in old-aged rats in the present study was not due to changes in the function of the chorda tympani nerve. A study in 18-month-old mice, however, reported a significant reduction in taste bud size, the number of taste cells per bud, the number of taste cells expressing the sweet taste receptor, and the sweet taste-modulating hormone glucagon-like peptide-1 . We have not yet examined whether the functions of other taste nerve components, such as the greater superficial petrosal nerve, superior laryngeal nerve, and glossopharyngeal nerve, are influenced by aging. Therefore, the results of the present study are not sufficient to rule out the possibility that aging induces changes in the function of the peripheral taste system. The present study evaluated taste preference using a 48-h two-bottle test method. The ingestion of taste substances affects subsequent consumption behaviors, referred to as post-ingestive effects, via the gastrointestinal tract [ – ]. The perception of taste substances in the gastrointestinal tract causes changes in the levels of feeding-related hormones, such as insulin and leptin . Gastrointestinal motility [ – ] and levels of feeding-related hormones in the central nervous system are altered with aging. These results might indicate that aging affects systemic physiological functions. Taste and olfactory information are thought to converge in the central nervous system. 
Aging reportedly leads to alterations in the spontaneous function of the central nervous system. It is known that there are separable brain substrates underlying "wanting" (closely related to appetite) and "liking" (closely related to palatability). Another study reported that older rats exhibited decreased "wanting" and "liking" for a sweet reward in an incentive motivation task. The central nervous system is heavily involved in the parameters of "wanting" and "liking" a sweet reward. These data suggest that the age-related behavioral differences observed in the present study may be due to age-related changes in the central nervous system functions involved in ingestive behaviors. Future studies should identify the brain regions and neural circuits involved in the behavioral alterations observed among older rats.

The present study revealed that the preference for sucrose and MSG decreases with age, whereas the preference for QHCl increases. We also found that old-aged rats consumed higher concentrations of sucrose, NaCl, MSG, and QHCl than younger rats, indicating that aging causes changes in taste preference. Although the aging-induced changes in taste preference are likely a result of alterations in the functions of peripheral and central organs, we found no age differences in the electrophysiological responses of the chorda tympani nerve, a peripheral taste nerve. This ruled out the possibility that the differences in taste preference among age groups are a result of altered function of the chorda tympani nerve. Therefore, to clarify the mechanisms underlying age-related changes in taste preference, future studies should investigate the effects of aging on the activities of other peripheral taste nerves, including the glossopharyngeal, greater superficial petrosal, and superior laryngeal nerves, and the function of the brain reward system involved in taste hedonics.

S1 File: The ARRIVE guidelines checklist (PDF). S2 File: Raw data (XLSX).
Dental care for persons with disabilities: discretion on the frontline
f7eb9206-cf28-4e86-986f-3c26a6faf74d
10547384
Dental[mh]
Over the past three decades, several strategies of decentralization and regionalization of healthcare services have gained traction. The implementation of health policies has become contingent upon diverse organizations and a multitude of public-private agreements at the local level. Within this context, discretion, a pivotal concept in the theory of street-level bureaucracy, has been scrutinized across various governing dimensions of decision-making, encompassing ethical values, rules, and normative provisions, as well as professional and organizational aspects that translate into the expected roles of frontline professionals and organizations. The ways in which frontline organizations/professionals provide benefits and sanctions contribute to shaping and circumscribing individuals' lives, either by broadening or restricting opportunities. Enlisted as public utility service providers and public policy implementers, frontline professionals, along with the organizations dependent upon their operational costs, often find themselves caught between the demands of service recipients, who seek greater effectiveness and responsiveness, and the expectations of citizens, who urge heightened efficiency and efficacy from the organizations tasked with provisioning public services. Agents' discretion in the process of implementing a given public policy may vary in scope, contingent upon the degree of policy structure/detail and the comprehensiveness and ambiguity of the rules upheld within the organizations to which they belong. While the exercise of political discretion may weaken or bolster the general interest and legitimacy of a program, administrative discretion is perceived as a spectrum of choices existing within a set of parameters delineated through organizational rules. Frontline organizations/professionals utilize their discretion to form arrangements blending different dimensions aimed at attaining politically and socially desired outcomes within a legitimately defined direction. These arrangements could elucidate disparities in the frontline implementation of public policy. Such arrangements are not solely born from interpersonal interactions but can result from the interplay of a network comprising diverse professionals and frontline organizations operating either in isolation or collaboratively. The Brazilian literature lacks works that incorporate analytical models of health policy implementation and street-level bureaucracy. Some studies delve into the discretionary power of community health agents in implementing primary care policy actions, as well as the role of street-level bureaucrats in executing the policy of water fluoridation at public supply plants in small Brazilian municipalities. The discretion of street-level bureaucracy in the implementation of redes de atenção à saúde (RAS – health care networks) remains largely unexplored, making it opportune to investigate specialized dental care access within the framework of the Care Network for Persons with Disabilities (Rede de Cuidados à Pessoa com Deficiência – RCPD). Access can be defined by the various strategies adopted by organizations to facilitate users' utilization of needed services.
In the first decade of the 21st century, the enactment of the National Policy for Persons with Disabilities (Política Nacional da Pessoa com Deficiência – PNPD) and the National Oral Health Policy (Política Nacional de Saúde Bucal – PNSB) in Brazil propelled the integration of basic dental care structures, with oral health teams working in the primary health care (PHC) network, as well as specialized care, via dental specialty centers (DSC), to provide more complex ambulatory treatments for all in need, including persons with disabilities (PwD). The RCPD, established within the Brazilian Unified Health System (SUS) in 2012, stimulates intergovernmental coordination to ensure the access of PwD to various specialized treatments, via regional agreements and governance involving state/municipal authorities of a specific health region. Safeguarding the right to dental treatment and comprehensive care is no simple task and calls for harmonious coordination among different points of care within the RCPD, wherein the PHC should stand out as the healthcare coordinator. A study encompassing 930 DSCs revealed that 85% of the units were municipally managed and 10% of them did not provide care for PwD, despite it being mandatory. User access was referred by the PHC in just over half of the specialized units. A significant share (42.7%) allowed scheduling through spontaneous demand, but no study has explored this contrasting situation. It is conceivable that the implementation of this public policy hinges on the actions of frontline professionals and organizations. Producing scientific information on the interplay between PHC and specialized services aimed at PwD could enhance the understanding of progress and challenges in the development of integrated healthcare networks. This study aims to depict the role of frontline professionals and organizations concerning the PHC and planning/assessment activities to grasp the exercise of discretion in implementing different means of specialized dental care access in the RCPD across two Brazilian health regions. A case study was conducted in two regions with disparate access to specialized dental care. They were selected based on documentary data and interviews performed in a broader study approved by the Research Ethics Committee of the Faculdade de Saúde Pública of Universidade de São Paulo, protocol no. 3,441,243. Access to specialized care was referral-based in the healthcare region of São José do Rio Preto (Region A), while access was mixed in Salvador (Region B). In Region B, a significant share of users accessed the specialized unit freely, without the need for formal referral and subject to the unit's own rules. The two regions were intentionally chosen due to their similar socioeconomic characteristics and services provided. In a typology conducted by Viana et al., they were classified as regions with high socioeconomic status, as well as a high supply and complexity of health services. Moreover, both regions had specialized rehabilitation centers (SRC), dental specialty centers, municipal and/or philanthropic specialty services with rehabilitation actions for users (such as the Associação de Pais e Amigos dos Excepcionais and the Associação de Assistência à Criança Deficiente, both NGOs), and educational institutions (universities, colleges, and medical and/or multi-professional residency programs). Data collection took place between July and December 2019 and was conducted by trained professionals.
Five specialized dental care units for PwD and four primary care units under municipal management in the two regions agreed to participate in the research. Of the five specialized units, four were municipally managed (one in Region A and three in Region B), and one was state-managed (Region B). Nine frontline professionals, represented in this study by their managers and considered key actors within the organizations, were interviewed. When a unit did not have a manager, the key informant was the dentist responsible for the care of PwD. Structured (questionnaires) and semi-structured (scripts) instruments were used for data collection. In this study, respondents' answers were analyzed regarding the frequency of PwD access to specialized units through the PHC, the prioritization criteria adopted, challenges for the PHC to act as the main gateway to services, PHC initiatives to coordinate care, and the use of monitoring/assessment instruments, their frequency, and the participating actors. Technical reports and minutes of collegial meetings regarding content related to the implementation of the Care Network for Persons with Disabilities at the local-regional level, spanning events from 1988 to 2019, were examined to identify normative provisions and aspects related to PHC organization and planning/assessment activities. To achieve this, decrees, laws, and regulations related to the topic were consulted on the institutional websites of the regions (Municipal and State Secretariats, State Planning and Management Secretariat), together with technical reports produced within the health regions and minutes of meetings of the Comissão Intergestores Bipartite (CIB – Bipartite Intermanagement Committee) and the Comissão Intergestores Regional (CIR – Regional Intermanagement Committee), made publicly available or provided by service technicians. The selection of document excerpts was based on keywords identifying nuclei of meaning related to the following categories: type of access; attributes of PHC related to care coordination; and planning, evaluation, and monitoring. Care coordination refers to the clinical management of cases through the integration of actions and services provided by different units in the care network to meet users' health needs, and it aims to reorient the care model. Planning, evaluation, and monitoring concern specific guiding instruments and indicators for the frontline professionals' work process, as well as the quality of the provided actions. The results are presented according to theme categories and regions. The tables present excerpts from the interviews, grouped into the categories "Access to Specialized Dental Care for PwD," "The PHC as a Care Coordinator," and "Planning, Evaluation, and Monitoring," respectively; a further table provides a synthesis of the results by theme category in Regions A and B. Interviewees were identified as ER1, ER2, and ER3 for Region A, and ER4 to ER9 for Region B. The discussions between different levels of government within the CIB on the management and organization of the RCPD started at the end of 2011 in Region A, while in Region B, they were only added to the agenda in 2013. In Region A, where access to specialized dental care only occurred through formal requests issued by a service unit in the network, it was observed that this referral happened when the PHC dentist lacked the necessary conditions to provide adequate treatment. This was also the case in Region B, where access was both through open demand and referral.
In this mixed type of access, users could enter freely, without the need for a formal request, and according to the specialized unit's own rules. In Region A, the UBS (Unidade Básica de Saúde – primary care unit) was the preferred access point for reaching other specialized rehabilitation services included in the RCPD (such as the SRC and the Ambulatório Médico de Especialidade – AME [Specialty Outpatient Clinics]), being responsible for referring users to specialized services. Access to specialized units located in the main municipality was obligatorily through the PHC. Cases that did not follow this criterion were admitted for guidance and redirected to the PHC. Excerpts from documents pointed out that ensuring RCPD accessibility was a recurring theme in meetings of the Comissão Intersetorial de Atenção à Pessoa com Deficiência (Intersectoral Committee for the Care of Persons with Disabilities) in Region A. This effort involved expanding the availability of health transport and offering assistance for the use of regular buses. In Region B, there were specialized dental care units managed either by the state or by municipalities. It was observed that professionals from the state-affiliated unit did not communicate or interact with dentists working in PHC and specialized units belonging to the municipal network. In practice, that service had an entry point following pre-established rules. On the other hand, the relevance of networking was recognized in Region A as a way to enhance care for PwD and, consequently, the RCPD ( ). The cost of transportation in Region B and the way hearing-impaired PwD were admitted to specialized units in Region A were identified as potential access barriers to services. The main municipality in Region A had a professional interpreter of Libras (Brazilian Sign Language) in place, who acted as a translator during specialized dental consultations through prior scheduling. It was noticeable that UBSs were not the preferred access point for health services for users in Region B. Their role as care coordinators was weakened by the low potential for population coverage and the fragmentation of care, pointing to the significant challenge of integrating the PHC with other specialized services. During CIB meetings, strategies were proposed for the development of the State Plan, such as establishing care pathways, organizing the flow of services, enrolling family health teams with oral health expertise, and expanding the PHC. However, no changes were observed at the frontline. The counter-referral of users back to the PHC by specialized unit professionals in Region B also did not take place ( ). The PHC played a central role in health care across Region A, being primarily responsible for the care of PwD. Professionals participated in matrix support on the subject, providing rehabilitation actions and managing to schedule timely rehabilitation appointments for referred users. The DSC's matrix support agenda with the PHC was planned annually through team meetings. The UBSs that provided oral health care in both regions also cared for PwD. The more complex cases, such as non-collaborative users needing restraint or more individualized intervention, were referred to DSCs. Planning, evaluation, and monitoring of actions were not part of the institutional service routine in Region B. As a consequence, the work process was neither monitored nor guided by any pre-established criteria. Only a few units held regular team meetings and monitored indicators.
It should be noted that the State Plan of the RCPD envisaged the production, follow-up, and monitoring of information, as well as professional qualification, as a management qualification guideline ( ). Planning, evaluation, and monitoring were part of the institutional routine of services in Region A, and some managers (such as the DSC's) developed their own instruments for evaluating and monitoring specific indicators, including user satisfaction, number of restorative dental procedures, proportion of absenteeism, percentage of completed treatments, number of extractions, and number of PwD served in all specialties, among others. The meetings of the RCPD Steering Group also recognized the importance of both action evaluation and monitoring for the consolidation of the RCPD through the situational diagnosis (number of PwD, main health demands, regional plan development), and the role of management in using action monitoring tools and sharing information in meetings with the entire team. This case study described the role of professionals and frontline organizations in understanding the exercise of discretion in implementing different forms of access to specialized dental care in the RCPD across two Brazilian health regions. The findings highlighted distinct characteristics regarding the PHC, planning and evaluation activities, and the access strategy in place. The PHC played a care coordination role in Region A, where access took place through referral and planning/evaluation were part of the institutional routine of services. In Region B, which worked with mixed access, there was only occasional information exchange between PHCs and specialized units, and the care coordination role was not attributed to the PHC teams. There, planning/evaluation activities were not incorporated into the organizational routine, as recommended by the PNSB; each service decided, through its implementers, whether to conduct them or not. As a result, the exercise of discretion in the way user access was regulated in each region was closely related to the PHC's role within the care network and the characteristics of specialized units concerning planning/evaluation activities. The distinct implementation outcomes found in this case study confirmed the idea that effective implementation is usually based on the references implementers embrace to perform their functions, and corroborated the notion that the exercise of discretion encompasses different dimensions aimed at achieving politically and socially desired outcomes in a direction legitimately defined by the relational and institutional environment present in the local-regional context, emphasizing both the administrative and political dimensions. Administrative discretion is understood as the use of strategies to introduce procedural changes. In this case study, it refers to the use of communication channels and shared flows, with the help of protocols and other common instruments, to promote coordination between PHC units and specialized dental services aspiring to achieve increasing levels of care integration. It also includes the decision to create shared planning and evaluation spaces among managers of different programs and services within the same organization, or involving different units of equal or distinct technological density, as the use of discretionary action by public actors calls for the development of governance skills.
The implementation of protocols and common flows throughout Region A's network was a characteristic that assisted in the coordination of primary and specialized level organizations, allowing for the adoption of referred access by specialized dental services. The political dimension concerns the values and references at play in the interaction of actors and the competence to combine them during the exercise of power to make them effective in the achievement of the desired ends. Although we can identify a component related to the individual trajectory of actors, those references are not only produced by individual choices but also engendered by influences derived from relationships established during implementation in specific institutional and relational contexts, in which collaboration among stakeholders can help shorten the path to achieving the intended outcomes. A policy of general interest related to the construction of an integrated healthcare network was underway in Region A and was mentioned by interviewees from both PHCs and specialized units. On the other hand, in Region B, this construction was not evident from the actors' perspective. The material obtained showed that communication and interaction between professionals from the units were not structuring resources for care actions. In this relational environment, units operated in an isolated and fragmented logic of providing basic and specialized services. After the approval of the PNSB guidelines in 2004, over a thousand DSCs were created in Brazil to enable referred access for PwD and the general population to complex care by adhering to the principle of care comprehensiveness. Although the strategy of open demand is generally regarded as negative as a means of accessing specialized services within a care network focused on integrated care delivery, it can be interpreted positively by users who do not resort to the PHC for subsequent consultations. In a relational and institutional context where units do not operate collaboratively or establish reciprocal commitments, and fragmented practices are the rule, resorting to another unit may lead to additional costs, uncertainty, and stress for the user. This gives way to distortions in the implementation of DSCs from the perspective of the public policy analyzed, allowing their operation to respond to a "counter" logic focused on achieving politically and socially desired results related to specific short-term interests, which may lead to a client-focused perspective. It is worth noting that, since the late 1990s in Brazil, the local level has been responsible for providing primary health care services, while the provision of specialized services depends on regional arrangements between the municipal and state levels. Decentralized systems in federal republics, as in countries such as Canada and Australia, can represent a significant advantage by allowing cost containment at the central level and granting greater autonomy and responsibility to local governments in addressing the health needs of the population. However, they are also a challenge because municipalities and regions are subject to jurisdictional variations and to fragmentation in coordination, cooperation, and information sharing. An examination of the performance of community health agents found that their activities varied considerably, even though they performed the same function and were governed by the same policy.
In addition to individual factors, organizational and contextual aspects would also influence the type of activity carried out by the agents in their routine. In this study, despite the services being provided by frontline professionals with backgrounds in the health field and governed by general interests defined by the guidelines of a public policy focused on RCPD implementation, the strategies for accessing specialized dental care were different. Despite similarities in socioeconomic conditions and service availability, the implementation outcomes differed, indicating the significant influence of the relational and institutional environments. In terms of the limitations of this study, it is important to highlight that implementation at the frontline is influenced by multiple, often competing forces in the implementation system. The analysis of this study did not encompass the viewpoint of service users. Further research that integrates the perspectives of both professionals and users could enhance and delve deeper into the insights presented here, aiming to investigate the consequences of discretion that may result in inclusion, equity, exclusion, and inequality. Sociocultural differences, the prevalence of PwD, and the number of users in each region were also not considered. Despite these points, and although the results are derived from a case study, the investigation is innovative in its use of the theory of public policy implementation from a bottom-up perspective. It explores analytical domains that are less investigated concerning the relational and institutional aspects that influence the frontline of the investigated public policy. The implementation of a specialized dental care policy for persons with disabilities is subject to the discretionary power of frontline professionals and organizations. This implies that the relational and institutional environment plays a significant role in the process of implementing public policies within a decentralized and regionalized healthcare system. In such a system, diverse strategies for accessing specialized services are linked to the coordinating role of the PHC and the execution of planning and evaluation activities aimed at constructing an integrated healthcare network.
Impact of a personal learning plan supported by an induction meeting on academic performance in undergraduate Obstetrics and Gynaecology: a cluster randomised controlled trial
ce1c9864-b300-4abc-abee-7649a54a3466
4363344
Gynaecology[mh]
The use of a personal learning plan (PLP) in postgraduate medical education is well established as an integral part of developing and maintaining professional competence. However, the role of PLPs in undergraduate medical education is less clear. The Association for Medical Education in Europe (AMEE) published a comprehensive review of PLPs in medical education and highlighted the potential application within undergraduate medical education. There are a small number of studies that have found a benefit to goal setting by medical students within clinical specialties [ - ]. The use of PLPs within the clinical learning environment is particularly attractive as medical students often struggle to adapt to learning within this less familiar and less structured environment. PLPs may offer an approach to assist students in developing the ‘adult’ learning approaches required within the clinical setting. This study is based on the hypothesis that the creation of a PLP by medical students, supported by an induction education meeting similar to the approach taken in postgraduate training, improves their academic performance. The primary aim of this study was to investigate whether medical students who created a PLP supported by an individual ‘one-to-one’ induction education meeting had an improved academic performance within an undergraduate clinical rotation in Obstetrics and Gynaecology (O&G). There were a number of unique elements to the study in comparison with the existing published studies on PLPs [ - ]. Firstly, the study used a ‘one-to-one’ meeting with a faculty member to support the creation of the PLP. Secondly, in addition to setting specific learning goals, the PLP addressed the learning approaches by medical students to the clinical rotation as a whole. The study was conducted within the Department of Obstetrics and Gynaecology, Trinity College, University of Dublin, Ireland. The undergraduate programme in O&G is completed over 8 weeks in the penultimate year of the 5-year degree course in medicine. There are 4 rotations during the academic year. The programme consists of a combination of clinical and tutorial-based learning activities. The assessment modalities used to determine the overall examination score are an end-of-rotation 11-station objective structured clinical examination (OSCE) (20%) and an end-of-year examination consisting of 50 single best answer questions (SBAs) (20%), 6 modified essay questions (MEQs) (20%) and a long case clinical examination (40%). Students require an overall examination score of 50% to pass and 60% to be awarded a distinction. The study was designed as a 4-group cluster randomised controlled trial (RCT) during the 2012/13 academic year. Cluster randomisation was adopted as individual randomisation within each rotation may have led to contamination between students. The class of 145 students was divided by administrative staff within the medical school into 4 groups, ensuring that the rotations were of similar demographic distribution. Each of the 4 rotations during the year was defined as a separate cluster. Each cluster was randomised to either the intervention group (which received the PLP and induction meeting in addition to the routine introductory presentation) or the control group (which received the routine introductory presentation alone). Simple randomisation was used in a 1:1 allocation ratio using computer random number generation.
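As an aside, the allocation step just described is straightforward to reproduce; the following minimal sketch (our illustration, not the authors' code) randomises the 4 rotation clusters 1:1 using computer-generated random numbers:

import random

# The 4 rotation clusters of the 2012/13 academic year.
rotations = ["Rotation 1 (Sep/Oct)", "Rotation 2 (Nov/Jan)",
             "Rotation 3 (Feb/Mar)", "Rotation 4 (Apr/May)"]

# Shuffle the clusters and assign the first two to the intervention arm,
# giving a 1:1 allocation ratio at the cluster level.
random.shuffle(rotations)
for i, rotation in enumerate(rotations):
    arm = "intervention" if i < 2 else "control"
    print(rotation, "->", arm)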
Rotations 2 (Nov/Jan) and 3 (Feb/Mar) were allocated to the intervention group, and rotations 1 (Sep/Oct) and 4 (Apr/May) were allocated to the control group. Only the academic staff member conducting all of the induction meetings and the student participating in the induction meeting were aware of the student's participation. Institutional Research Ethics Board approval for the study was obtained (TCD Research Ethics Committee, approval October 2012). The study was conducted among the entire class completing their O&G rotation during the 2012/13 academic year. There was no restriction on participants: all rotations during the year and all students within each rotation were eligible for inclusion. Although the sample size was dictated by the class size, a power calculation indicated that this sample size was sufficient to detect a difference of 5 percent in overall examination score between the groups (assuming a power of 0.8 and a significance level of 0.05). An information leaflet on the study was sent by email to students allocated to the intervention group 1 week prior to starting their rotation. Students were asked to complete and submit, on the first day of the rotation, a consent form indicating whether they wished to participate in the study. Following receipt of the consent forms, the supervisor sent each student who consented a time for his/her meeting. The intervention was the creation of a PLP by the medical student, supported through an individual ‘one-to-one’ induction education meeting between the student and an academic staff member during the first 2 weeks of the clinical rotation. The control group received a group presentation lasting approximately 20 minutes on how to optimise their learning experience during the rotation. The teaching programme within each rotation was otherwise the same. The PLP was an 8-page handbook divided into the 6 sections identified as imperative for the construction of a PLP by the AMEE Guide. The PLP was constructed using a variety of open-ended and closed questions. Part 1 (Importance of the O&G Rotation) required students to reflect on the importance of the O&G rotation. Part 2 (Relevance of the O&G Rotation for Future Careers) required students to formulate a specific and practical learning outcome to be achieved that would be beneficial in his/her future career. Part 3 (Academic Targets) required students to define a specific academic target by considering typical self-reported academic performance. Part 4 (Learning Resources) required students to identify their main learning resource by rating a series of resources commonly used during the rotation and selecting one for the rotation. Part 5 (Study Schedule) required students to create a study plan documenting the topics to be covered each week. Part 6 (Learning Activities) required students to identify strategies to maximise their learning experiences in the clinical learning environment. The booklets were piloted in August 2012 among students from the previous academic year. The purpose of the induction meeting was to support the creation of the PLP. The PLP created by the student was reviewed at the meeting (as each student was asked to complete the PLP in advance of the induction meeting). The supervisor explored and clarified each section of the PLP with the student. In addition, the supervisor suggested a range of possible learning strategies for the student to consider.
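Returning briefly to the power calculation mentioned above, a naive version of it can be reproduced as follows (our sketch: the standard deviation of 5.5% is a hypothetical value chosen to be consistent with the score SDs reported in the results, and the calculation ignores the clustered design):

from statsmodels.stats.power import TTestIndPower

assumed_sd = 5.5                       # hypothetical SD of examination scores (%)
difference = 5.0                       # difference to detect (percentage points)
effect_size = difference / assumed_sd  # Cohen's d

# Solve for the required sample size per group at power 0.8 and alpha 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.8)
print(f"required n per group: {n_per_group:.0f}")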
The supervisor adopted a ‘questioning’ style (rather than a ‘didactic’ style) in order to allow the student to create his/her own PLP, consistent with the principles of good supervision technique. The benefits and drawbacks of the various approaches suggested by students were discussed, and students were free to add to or amend the learning plan during the induction meeting. The same staff member (RPD), an experienced medical educator familiar with the course programme, conducted each meeting to ensure consistency. At the end of the induction meeting each student had a completed PLP for the clinical rotation. The primary outcome was the overall examination score obtained by the student. The secondary outcomes were student attendance (defined as a percentage of the total possible attendance at all scheduled clinical and classroom-based activities) and student evaluation of the supported PLP. Data related to student demographics and academic performance were obtained from the departmental records, linked by code to ensure confidentiality. The data relating to student attendance were based on staff signatures in the student logbook. The student evaluation survey was completed using an online survey tool (SurveyMonkey®) approximately 1 month following the rotation. Student demographics were compared using chi-squared tests and Fisher's exact tests. Examination scores and attendance rates between the intervention group and the control group were compared using an unpaired t-test. Students who responded to the survey were compared with non-responders using chi-squared tests. Content analysis was used to identify themes from the free text questions on the student survey. SPSS version 19 was used for statistical analysis. A significance level of 0.05 was used. The response rate for the student evaluation survey was 85% (n = 60/71). There was no significant difference between responders and non-responders in terms of demographic profile, academic performance or attendance. Tables and provide a summary of the responses to the quantitative questions. The majority of students recommended that the best way for staff to support students in developing their learning plans was a ‘one-to-one’ meeting with a supervisor and a follow-up meeting in the rotation (n = 39, 65%). A number of students recommended an interactive discussion amongst a small group of students (n = 10, 16%) or a single ‘one-to-one’ meeting (n = 9, 15%). Only a single student recommended a large group presentation (n = 1, 2%) or no support activity (n = 1, 2%). Student opinion regarding the most appropriate time to introduce these meetings was divided, with 32 students (54%) advising introduction during the clinical years (years 4-5), 23 students (38%) preferring introduction during the pre-clinical years (years 1-3) and 5 students (8%) having no opinion. The main themes from the qualitative analysis of the free text questions are highlighted in Table . Students identified the aspects of the intervention that worked well: use of an induction meeting to create the PLP (Student 11); ‘one-to-one’ nature of the induction meeting (Student 60); provision of the meeting early in the rotation (Student 50); identification of expectations and goals (Student 40); positive nature of the meeting (Student 6).
Students identified the aspects of the intervention that could be improved: provision of an interim or exit meeting (Student 21); incorporation of more advice from lecturers and past students (Student 66); use of a small group format with other students (Student 51). Students identified the main difficulties encountered in adhering to the PLP, including lack of familiarity with the course material (Student 62) and mismatching of classroom-based tutorials and clinical activities (Student 15). A total of 145 students completed the undergraduate programme in O&G during the 2012/13 academic year. There were 72 students allocated to the control group and 73 students allocated to the intervention group (Figure ). There were 2 students allocated to the intervention group who declined to participate, leaving 71 students. A demographic profile of the students is provided in Table . A total of 71 PLPs were completed and 71 meetings were conducted. The total face-to-face meeting time was 37 hours, with a mean duration for each meeting of 31 ± 9 minutes (range, 15 to 60 minutes). There was no significant difference in mean overall examination score between the groups: 56.3 ± 4.8% in the intervention group and 56.7 ± 5.6% in the control group (p = 0.64) (Table ). The mean total attendance rate was 86.7 ± 9.0% in the intervention group and 88.4 ± 8.7% in the control group (Table ). There was no significant difference in overall attendance between the groups (p = 0.25), although attendance at clinical activities was more likely in the intervention group (p = 0.03) and attendance at classroom-based activities more likely in the control group (p = 0.01). The creation of a PLP supported by an individual ‘one-to-one’ induction education meeting was rated highly by students as an approach to enhance their learning experience but did not improve their academic performance or attendance. Students reported that an interim or exit meeting might have helped in the application of the learning plan. There are a number of key questions that arise from the literature for medical educators considering the introduction of PLPs for undergraduate medical students: What should PLPs target? Who should PLPs target? When should PLPs be introduced? What faculty supports are required for PLPs? Are PLPs beneficial? The discussion will consider the current evidence and the findings of this study for each of these key questions. What should PLPs target? The need for PLPs among postgraduate trainees and specialists is intuitive, as they usually self-define learning goals depending on their learning needs and opportunities. In contrast, the need for PLPs among medical students following a structured programme of learning activities to achieve pre-defined learning goals in a specific time frame is less clear. Previous studies have shown that medical students can create specific learning goals within clinical rotations and that PLPs are helpful in assisting students achieve these learning goals [ - ]. However, this study has shown that supported PLPs can be used successfully to plan each student's overall approach to learning within clinical rotations and not only to achieve specific learning goals. The positive student rating of each of the PLP components, particularly the learning resources and study schedule, highlights this finding. Who should PLPs target? PLPs can target all students or specific student groups, e.g. students in difficulty.
The use of remedial teaching programmes for medical students in difficulty is well established. These programmes often involve the identification of specific deficits and the planning of strategies to address these deficits. Therefore, individual educational direction within clinical rotations is often provided for a small number of students at the extremes of academic ability but not for the majority of students. The approach in this study was to offer the PLP and induction meeting to all students. The student survey supported this approach, with 71% (n = 43) responding that it is reasonable to expect students to prepare and submit a learning plan. In addition, student participation was voluntary, and the high participation rate (97%, n = 71) suggests that students of all backgrounds are willing to engage with PLPs. When should PLPs be introduced? The optimal timing for the introduction of PLPs requires consideration: where within the medical course as a whole and within specific clinical rotations? This study suggests that students are divided on whether supported PLPs should be introduced during the pre-clinical years (n = 23, 38%) or the clinical years (n = 32, 54%). However, the small majority in favour of the clinical years likely reflects the well-documented challenges many students encounter adapting to the clinical learning environment. Within the clinical rotation itself, students recommended that the PLP and induction meeting should take place early in the rotation. However, students also acknowledged that some of the difficulties experienced in following the PLPs were due to creating the PLP early in the rotation and the consequent lack of familiarity with the course material. Therefore, the introduction of PLPs during clinical years and early within clinical rotations appears optimal. However, students need to be provided with a clear identification of the knowledge and skills that must be acquired in order to create viable and effective PLPs. What faculty supports are required for PLPs? The AMEE review suggests that PLPs should be created with supervisory support. In contrast to previous PLP studies with variable amounts of supervisory support, a unique feature of this study was the explicit use of an induction meeting to support the creation of the PLP [ - ]. The student survey identified that the ‘one-to-one’ meeting was a critical and welcome element. However, many students also identified that the use of an interim or exit meeting may have enhanced the intervention. Although this is an attractive prospect, educators need to consider the significant time investment required by faculty members to provide this level of individualised educational support. Therefore, the provision of ‘one-to-one’ supervisory support is an important element in the use of PLPs, but this support may need to be ongoing. Are PLPs beneficial? In contrast to previous PLP studies that only evaluated benefit using student surveys, i.e. evaluation of reaction, this study also evaluated benefit using academic performance (as the primary outcome), i.e. evaluation of learning [ - ]. There was no difference in the academic performance between the groups. However, like previous PLP studies, student satisfaction with the intervention was high: students reported that the PLP and the induction meeting enhanced their learning experience (n = 51, 85%), they used their PLPs (n = 42, 70%), and they recommended that a similar intervention should be provided in O&G and other clinical specialties (n = 59, 98%).
The question arises as to why the evidence of benefit from the student evaluation did not translate into an improved academic performance. There were 2 main possible reasons. Firstly, the use of academic performance as a primary outcome may have placed too great an emphasis on an objective outcome and may not have fully captured the benefits accrued. Secondly, the impact may have been limited by the lack of an interim or exit meeting to consolidate the PLP. Therefore, ongoing support may be an approach to produce an enhanced academic performance. The inclusion of the entire class within the study and the high participation rate within the intervention group ensured that the findings are broadly generalisable to other institutions with a similar student demographic and programme design. The use of cluster sampling rather than individual sampling minimised contamination between the intervention and control groups. The blinding of academic staff members minimised observation bias. The use of an RCT with an objective outcome provided a robust assessment of the proposed intervention but may not have reflected the full range of its potential benefit. Given that the study was performed in a single institution within a single discipline, the findings would need to be replicated in other disciplines and institutions. A cost-effectiveness analysis would inform the debate on whether faculty members' time should be diverted to this approach. Inherently, medical educators want their students to consider and plan their learning approaches. Supported PLPs enable medical students and their educators to achieve this. This study shows that supported PLPs are of some benefit but do not, on their own, result in improved academic performance. Further research is required on the optimal strategies that students should adopt, the optimal approach to incorporating PLPs into undergraduate medical programmes, and the amount of support that faculty should provide, particularly in terms of an interim or exit education meeting.
Interleukin-22 Deficiency Contributes to Dextran Sulfate Sodium-Induced Inflammation in Japanese Medaka,
7b7e6848-a041-49bf-b997-1aeeafde97ba
8573258
Anatomy[mh]
Mucosal tissue forms the first line of defense against pathogenic microorganisms. Symbiotic microorganisms colonize the mucus layer, and their mutualistic relationships are vital to host health ( ). Mucins secreted by goblet cells form a thick internal mucus layer ( ). Antimicrobial peptides (AMPs) synthesized by epithelial Paneth cells and keratinocytes kill microbes or inhibit their growth ( ). Physicochemical defenses help maintain mucosal homeostasis, and their dysfunction may induce various autoimmune diseases ( , ). In the mammalian mucosa, interleukin (IL)-22 is a key cytokine that maintains the epithelial barrier. It has attracted the attention of researchers as it is associated with skin inflammation and inflammatory bowel disease (IBD) ( , ). IL-22 was first cloned from IL-9-stimulated murine T cells and characterized as an IL-10-related, T cell-derived inducible factor because it showed high amino acid (aa) sequence homology with IL-10 ( ). Mammalian IL-22 belongs to the IL-10 cytokine family, which includes IL-10, IL-19, IL-20, IL-24, and IL-26. IL-22 is primarily produced by type 3 innate lymphoid, natural killer, T helper type (Th)-1, Th-17, and Th-22 cells as well as by neutrophils ( – ). IL-22 is synthesized and secreted in response to proinflammatory cytokines such as IL-1β, IL-6, TNF-α, and IL-23 ( ). The activation of the aryl hydrocarbon receptor transcription factor promotes IL-22 synthesis in the immunocytes that secrete it ( ). The biologically active form of IL-22 is a monomer; however, non-covalent, non-intertwining dimers and, at high concentrations, tetramers have also been detected ( , ). All IL-10 family members bind to a heterodimeric receptor complex comprising two chains of the class II cytokine receptor family (CRF2) ( ). IL-22 binds to IL-22 receptor alpha 1 (IL-22RA1), which is expressed in epithelial cells and keratinocytes, together with IL-10RB ( , ), and transmits cellular signals via the JAK/STAT, AKT, ERK, SAPK/JNK, and MAPK signaling pathways ( ). STAT3 is a major transcription factor in these cascades ( ). Besides the transmembrane receptor complex, a single-chain secreted (soluble) receptor known as the IL-22 binding protein (IL-22BP; alternatively referred to as IL-22RA2) is also expressed. It is encoded by a gene independent of IL-22RA1 ( ). IL-22BP is secreted by various non-immune cells and tissues and has a stronger affinity for IL-22 than IL-22RA1 ( ). IL-22 binding to IL-22BP prevents the IL-22/IL-22RA1 interaction and competitively inhibits its signaling ( ). Multiple aspects of IL-22 physiology have been reported, including epithelial cell proliferation, tight junction formation, and mucus and AMP biosynthesis ( , , ). In a murine model of IBD with dextran sulfate sodium (DSS)-induced bowel inflammation, IL-22 deficiency delayed healing and increased mortality, and it downregulated genes associated with anti-apoptosis regulation (mcl1, survivin, and bcl2), epithelial cell proliferation (myc, pla2g5, and smo), and AMP production (S100A8, S100A9, Reg3β, and Reg3γ) ( , ). Of these processes, the IL-22-dependent induction of apoptosis has also been reported in recent years in the context of cell death ( ). In contrast, administration of recombinant IL-22 restored the production of mucins (muc-1, muc-3, muc-10, and muc-13) and the goblet cell numbers that had been decreased by DSS-induced inflammation ( ).
Teleost IL-22 has been characterized in several bony fish species and is expressed at high levels in mucosal tissues (the gills and intestines of zebrafish (Danio rerio) ( ); gills of cod (Gadus morhua) and haddock (Melanogrammus aeglefinus) ( ); gills, intestines, and tail fin of rainbow trout (Oncorhynchus mykiss) ( ); intestines and gills of turbot (Scophthalmus maximus) ( ); gills and intestines of golden pompano (Trachinotus ovatus) ( ); gills and skin of yellow catfish (Pelteobagrus fulvidraco) ( ); and gills of mandarin fish (Siniperca chuatsi) ( )). IL-22 was detected in rainbow trout leukocytes and epithelial cells ( ). IL-22 also induces certain AMPs, such as β-defensin and hepcidin, in response to bacterial infection ( , ). However, the roles of teleost IL-22 in the immune response and mucus homeostasis have not yet been clarified. In this study, the cDNA sequences of il22 and its receptors, il22ra1 and il22bp, were characterized in Japanese medaka (Oryzias latipes), and their expression in mucosal tissues was elucidated using qPCR and in situ hybridization (ISH). An IL-22 knockout (KO) medaka line was newly established using CRISPR-Cas9 genome editing, and a DSS-induced inflammation model using these fish was also devised. Comprehensive transcriptomic analyses were performed in the DSS model using IL-22-KO and wild-type (WT) medaka larvae, and intestinal histological differences between them were elucidated with respect to epithelial repair, barrier protection, and changes in goblet cell number and mucus layer thickness after inflammatory damage in teleosts. Healthy Japanese medaka (Oryzias latipes; an inbred Cab line) were maintained in transparent plastic circulating freshwater tanks at 26°C, under a 14-h light/10-h dark cycle. Both adult and larval medaka were used in this study. In the experiments on adult fish, WT fish weighing 200–300 mg at 3–4 months post-hatching (mph) were used for analyzing gene expression with respect to tissue distribution, and WT and IL-22-KO medaka weighing 100–150 mg at 2 mph were used in the DSS experiment. Larval medaka at 14 days post-hatching (dph) were used in gene expression analyses following whole-body DSS exposure, which is widely used to induce an inflammatory state similar to that in inflammatory bowel disease in mammals. All medaka were fed twice daily. The inbred Cab strain of Japanese medaka was used for the il22, il22ra1, and il22bp cDNA sequence determinations and all subsequent experiments. All animal experiments were conducted in accordance with the relevant national and international guidelines, including those stated in the "Act on Welfare and Management of Animals" of the Ministry of the Environment of Japan. Ethics approval from the local Institutional Animal Care and Use Committee (IACUC) was not sought, as the law does not mandate fish protection. The il22, il22ra1, and il22bp cDNA sequences of Hd-rR medaka were identified from the medaka genomic database registered in the Ensembl genome browser ( https://asia.ensembl.org/index.html ).
The loci of il22, il22ra1, and il22bp and their adjacent synteny structures were compared among medaka, other teleosts, and mammals. To determine the il22, il22ra1, and il22bp open reading frame (ORF) sequences in Cab medaka, gene-specific primers were designed ( ). KAPA™ HiFi-HotStart DNA (high-fidelity PCR) polymerase (Kapa Biosystems, Wilmington, MA, USA) was used for PCR amplification. The PCR products were cloned into a pTAC-2 vector (BioDynamics, Kumamoto, Japan). Plasmid DNA from ≥ three independent clones was purified using a Monarch Plasmid Miniprep kit (New England Biolabs, Ipswich, MA, USA). Sequencing was performed in a 3730xl DNA Analyzer (Applied Biosystems, Foster City, CA, USA). The aa sequences deduced from each ORF were used to predict the functional domain structures of Cab medaka IL-22, IL-22RA1, and IL-22BP using the Simple Molecular Architecture Research Tool (SMART v.7.0) ( http://smart.embl-heidelberg.de/smart/set_mode.cgi?NORMAL=1 ). Multiple alignments of the IL-22, IL-22RA1, and IL-22BP aa sequences were performed using the multiple alignment tool ClustalW ( http://www.mbio.ncsu.ebu/BioEdit/bioedit.html ) in BioEdit. Signal peptide sequences were predicted using the SignalP-5.0 Server (http://www.cbs.dtu.dk/services/SignalP/). Protein structure homology modeling was performed using the SWISS-MODEL program ( https://swissmodel.expasy.org ). The predicted complete aa sequences of IL-22, IL-22RA1, and IL-22BP were used to construct phylogenetic trees using the neighbor-joining method in MEGA7.0 ( https://www.megasoftware.net ), with 1,000 bootstrap replicates. Total RNA was extracted from adult WT medaka brain, gills, intestines, kidneys, liver, muscles, skin, and spleen for analyzing the tissue distribution of il22, il22ra1, and il22bp expression (n=5). In WT and IL-22-KO medaka, total RNA was extracted from the whole body of larval medaka (n=7) and the mucosal tissues (from the anterior intestine, posterior intestine, gills, and skin) of adult medaka (n=5). The comparisons between WT and IL-22-KO medaka were performed not only in the naïve state but also in the DSS-stimulated state. For RNA extraction, the RNAiso Plus kit (Takara Bio, Kusatsu, Shiga, Japan) was used according to the manufacturer's instructions. Total RNA quality was assessed using a NanoDrop spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). Total RNA purity was evaluated using the OD260:OD280 ratio, which was confirmed to be > 1.8 for all samples. cDNA was synthesized from 500 ng of extracted total RNA per sample using the ReverTra Ace qPCR RT Master Kit with gDNA remover (Toyobo, Osaka, Japan) according to the manufacturer's instructions. The cDNA samples were prepared as previously described in Section 2.3. Five and seven adult and larval fish were analyzed, respectively. Seven larval fish per group were used in the DSS experiment. For qPCR, gene-specific primers were designed and used to amplify the conserved il22, il22ra1, and il22bp regions. The medaka β-actin (actb) gene was used as the internal control to confirm cDNA quality and quantity. The primer sequences are listed in . qPCR was conducted in triplicate in a 15 μL reaction volume comprising 7.5 μL of Brilliant III Ultra-Fast SYBR® Green QPCR Master Mix (Agilent Technologies, Santa Clara, CA, USA), 1.0 μL of cDNA, 1.5 μL of each forward and reverse primer (5 pmol), and 3.5 μL of distilled water.
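As a side note to the phylogenetic analysis described earlier in this section, the neighbor-joining step performed in MEGA7.0 can be approximated programmatically; the sketch below (ours, with a hypothetical alignment file name and without the bootstrap step) uses Biopython:

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Load a multiple alignment of IL-22 amino acid sequences (hypothetical file).
alignment = AlignIO.read("il22_aa_alignment.aln", "clustal")

# Build a pairwise identity-based distance matrix, then a neighbor-joining tree.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)
Phylo.draw_ascii(tree)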
The qPCR cycling conditions were as follows: 40 cycles of 95°C for 15 s and 60°C for 30 s, run on a CFX Connect™ (Bio-Rad Laboratories, Hercules, CA, USA). A melting curve analysis was performed on the amplified products at the end of each run to confirm amplification specificity. The relative expression ratios were calculated using the comparative threshold cycle (Ct, or 2^-ΔΔCt) method ( ). The Ct values of the target gene and internal control were determined for each sample. The average Ct for triplicate samples was used to calculate the expression levels relative to that of actb. Student's t-test was used when homoscedasticity between group pairs could be assumed. Welch's t-test was used when homoscedasticity between group pairs could not be assumed. ISH was performed on adult medaka (3 mph) gills and intestines and larval medaka (14 dph) intestines to evaluate the localization of il22 and il22bp mRNA. Gene-specific digoxigenin (DIG)-labeled RNA probes were synthesized with gene-specific primers (to amplify the full-length ORF; ) using a DIG RNA labeling kit (SP6/T7; Roche Diagnostics, Basel, Switzerland) according to the manufacturer's instructions. Briefly, tissue samples were fixed overnight in 4% (v/v) paraformaldehyde (PFA)/0.1 M phosphate buffer (PB) at 4°C. The tissue samples were dehydrated, embedded in paraffin (Fujifilm Wako, Osaka, Japan), and cut into 8 μm-thick sections using a microtome (Leica Biosystems, Wetzlar, Germany). After dewaxing and rehydration, the sections were permeabilized with proteinase K (Fujifilm Wako) in diethyl pyrocarbonate (DEPC)-treated phosphate-buffered saline (PBS) (5 μg/mL) at 37°C for 15 min, fixed in 4% (v/v) PFA/PBS for 10 min, and treated twice with DEPC-treated PBS containing glycine (2 mg/mL) for 10 min. The sections were post-fixed with 4% (v/v) PFA/0.1 M PB for 5 min. Prehybridization was performed for 2 h using a probe diluting solution (50% (v/v) formamide, 5× SSC, 5× Denhardt's solution (Fujifilm Wako), and 2 mg/mL RNA (Roche Diagnostics) in DEPC-treated water) after 30 min of incubation in 5× SSC/formamide. The DIG-labeled antisense and sense RNA probes were diluted using the probe-diluting solution (0.5 μg/mL), and hybridization was performed at 55°C for 16 h. DIG was detected using horseradish peroxidase-labeled anti-DIG immunoglobulin G (IgG), and color was developed using nitro-blue tetrazolium chloride and 5-bromo-4-chloro-3′-indolylphosphate p-toluidine salt (NBT/BCIP) solution (Roche Diagnostics). Benchling ( https://www.benchling.com/academic/ ) was used to design a crRNA targeting exon 1 of medaka il22. The crRNA sequence is shown in . The sgRNA was prepared by annealing the crRNA and tracr-RNA (Thermo Fisher Scientific). Approximately 0.5 nL of a solution containing sgRNA (50 ng/μL) and Cas9 protein (400 ng/μL) (Thermo Fisher Scientific) was co-injected with a manipulator (Narishige, Tokyo, Japan) into single-cell-stage medaka embryos. The large eggs of medaka facilitate microinjection during genome editing ( ). One month later, the gene editing efficiency was confirmed from the extracted genomic DNA in a heteroduplex mobility assay (HMA) using a primer set ( ) amplifying the crRNA-targeted and other specific regions. F0 medaka with confirmed mutations were crossed with WT medaka (Cab) to produce F1 heterozygotes. The latter were then crossed with WT Cab medaka to produce F2 heterozygotes. F2 medaka males and females with the same mutation were mated to produce F3 homozygous progeny and/or mutant lines. HMA verified the mutant locus in the F3 medaka genome.
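As a worked illustration of the 2^-ΔΔCt quantification described at the start of this passage (our sketch; all Ct values are hypothetical):

# Hypothetical mean Ct values from triplicate qPCR wells.
ct_target_sample, ct_actb_sample = 28.0, 18.0    # e.g., il22 in a DSS-stimulated fish
ct_target_control, ct_actb_control = 30.0, 18.5  # e.g., il22 in a naive fish

# Normalize the target gene to the actb internal control in each group,
# then compare the stimulated group to the control group.
delta_ct_sample = ct_target_sample - ct_actb_sample      # 10.0
delta_ct_control = ct_target_control - ct_actb_control   # 11.5
delta_delta_ct = delta_ct_sample - delta_ct_control      # -1.5

fold_change = 2 ** (-delta_delta_ct)  # about 2.83-fold higher than control
print(f"relative expression (fold change): {fold_change:.2f}")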
Briefly, F3 medaka were anesthetized with MS-222 (Sigma-Aldrich, St. Louis, MO, USA), and their genomic DNA was extracted from the epidermal mucosa, dissolved in 20 μL of 0.2 mM EDTA (Fujifilm Wako) and 25 mM NaOH (Fujifilm Wako), and incubated at 95°C for 20 min. The samples were then neutralized with an equal volume of 40 mM Tris/HCl (pH 8.0) (Fujifilm Wako). The genomic DNA-containing solution was used as a template, and PCR was performed using KOD FX Neo (Toyobo). The PCR conditions were as follows: 95°C for 3 min; 38 cycles of 98°C for 10 s, 66°C for 5 s, and 68°C for 5 s; and 72°C for 5 min. The PCR products were cloned into the pTAC-2 vector (BioDynamics). Plasmid DNA from ≥ three independent clones was purified using a Monarch Plasmid Miniprep kit (New England Biolabs). Sequencing was performed on a 3730xl DNA Analyzer (Applied Biosystems). Medaka larvae at 14 dph were used for the DSS exposure test. WT and IL-22-KO medaka larvae were obtained by natural spawning and raised until 7 dph at 26°C in freshwater supplemented with methylene blue. The larvae were then transferred to plain freshwater until the experiment commenced. Inflammation was induced with 0.5% (w/v) DSS (40,000 MW; Sigma-Aldrich) following a previously described method ( ). DSS stock solution (10% w/v) was diluted to 0.5% (w/v) in freshwater at 26°C with gentle rocking. Larval medaka (14 dph) were stimulated with 0.5% (w/v) DSS for 24 h and then transferred to breeding water. Samples were collected on day 1 for histological and transcriptomic analyses and again on days 2 and 5 for histological analyses. For RNA-seq analysis, WT and IL-22-KO larval medaka (14 dph) under naïve conditions and after 1-day DSS stimulation were compared. Total RNA was extracted from whole larval Cab Japanese medaka using the RNAiso Plus Kit (Takara Bio) according to the manufacturer’s instructions. Total RNA from each medaka larva was extracted separately and not normalized. RNA was quantified using a NanoDrop spectrophotometer (Thermo Fisher Scientific), and purity was assessed using the OD260:OD280 ratio; a ratio of 1.8 was set as the minimum RNA purity cut-off. To synthesize the cDNA library, total RNA from ten individuals per group was equally pooled and sequenced on a DNBSEQ-G400 instrument (MGI Tech, Shenzhen, China) by DNAFORM (Yokohama, Japan). Processed reads were deposited in the DNA Data Bank of Japan (DDBJ) Sequence Read Archive under Accession No. DRA011594. The collected reads were mapped to the annotated medaka Hd-rR reference genome (release 85; http://www.ensembl.org/index.html ) using the STAR program, and reads per gene were counted with the featureCounts function. Transcriptional expression was estimated as fragments per kilobase of exon length per million reads (FPKM). Transcripts with P < 0.05 were considered significantly differentially expressed. Genes that were significantly differentially expressed in each comparison were subjected to gene enrichment analysis using the Database for Annotation, Visualization, and Integrated Discovery (DAVID) ( ). Gene ontology (GO) terms in the biological process (BP) (GOTERM_BP_FAT), cellular component (CC) (GOTERM_CC_FAT), and molecular function (MF) (GOTERM_MF_FAT) categories as well as Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were selected. Gene interactions and networks were analyzed using the STRING App database of Cytoscape (version 3.8.0) ( ). Larval medaka were anesthetized by soaking in 0.2 mg/mL MS-222 and fixed overnight with Davidson’s fixative at 4°C.
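The FPKM measure mentioned above can be written out in a few lines of Python; the sketch below uses the ORF lengths given in the text for il22 and il22ra1, while the fragment counts and the actb length are invented for illustration.

def fpkm(counts, exon_length_bp):
    """counts: gene -> mapped fragments; exon_length_bp: gene -> exon length (bp)."""
    million_reads = sum(counts.values()) / 1e6
    return {g: counts[g] / (exon_length_bp[g] / 1e3) / million_reads for g in counts}

counts = {"il22": 85, "il22ra1": 0, "actb": 54000}        # invented fragment counts
lengths = {"il22": 570, "il22ra1": 1539, "actb": 1128}    # il22/il22ra1 from the text; actb invented
print(fpkm(counts, lengths))  # il22ra1 = 0 FPKM mirrors its non-detection by RNA-seq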
The samples were dehydrated with an alcohol gradient series and Histo-Clear and embedded in paraffin (Fujifilm Wako). Transverse or parasagittal sections of 5 μm thickness were cut using a microtome (Leica Biosystems) and mounted on PLATINUM PRO slides (Matsunami, Osaka, Japan). Hematoxylin and eosin (H&E) and Alcian blue (AB) staining were performed to identify the phenotypic differences between WT and IL-22-KO medaka as well as to observe DSS-induced inflammation. For H&E staining, the slides were dewaxed using Clear Plus, hydrated with an ethanol/water gradient, and stained with hematoxylin (Fujifilm Wako) and 1% eosin solution for 5 min. Mucin was stained with 3% (w/v) Alcian blue (Sigma-Aldrich) in 3% (v/v) acetic acid at pH 2.5 for 30 min. Histological observations were performed using a BZ-X700 microscope (KEYENCE, Osaka, Japan). In each group, three individual larvae were used for counting goblet cells (n=3); the observed area is shown in , and was confirmed to correspond to a part of the anterior intestine in teleosts. For each individual, ten consecutive 5 μm sections (total thickness, 50 μm) were used for counting. The goblet cells in the anterior intestinal epithelia were manually counted in each section, and ImageJ version 1.53a ( https://imagej.nih.gov/ij/ ) was used to calculate the area size. For detecting apoptotic cells, terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL) staining was performed using the In Situ Cell Death Detection Kit, TMR red (Sigma-Aldrich) according to the manufacturer’s instructions. TUNEL staining is often used to assess the progression of inflammation in mammalian DSS-based inflammation models ( , ). Three consecutive 5 μm sections from the anterior intestine of larval medaka were used for counting positive signals; three larval medaka per group were analyzed individually, and positive TUNEL signals were counted manually in each section. il22, il22ra1, and il22bp cDNA Sequences of Cab Medaka According to data in the Ensembl database, the full-length il22 cDNA sequence of Hd-rR Japanese medaka (Ensembl ID, ENSORLG00000026810) is 573 base pairs (bp) long and consists of a 573 bp ORF encoding a predicted 190 aa protein with an estimated mass of 21.30 kDa. For Cab Japanese medaka, we cloned the 570 bp ORF of il22 cDNA (GenBank accession No. LC528229) encoding a predicted 189 aa protein with a 35 aa signal peptide at the N-terminus ( ). The mature IL-22 peptide contains 154 aa, and its estimated mass is 17.77 kDa. The full-length il22ra1 cDNA sequence of Hd-rR Japanese medaka (ENSORLG00000027190) is 4,144 bp long and contains a 1,539 bp ORF encoding a predicted 512 aa protein with an estimated mass of 56.72 kDa. For inbred Cab Japanese medaka, a 1,539 bp ORF of il22ra1 cDNA (LC528230) was cloned, which encoded a predicted 512 aa protein with a 22 aa signal peptide at the N-terminus ( ). The mature IL-22RA1 peptide contains 490 aa, and its estimated mass is 54.25 kDa.
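A minimal sketch of the normalization implied above (cell counts per measured area, averaged over the ten sections of one larva); all numbers are invented, and in practice the section areas came from ImageJ.

from statistics import mean

counts = [14, 12, 15, 13, 16, 12, 14, 15, 13, 14]          # goblet cells per section (invented)
areas_um2 = [5.1e4, 4.9e4, 5.3e4, 5.0e4, 5.2e4, 4.8e4,
             5.1e4, 5.2e4, 5.0e4, 5.1e4]                   # epithelial area per section (invented)

density = [c / (a / 1e4) for c, a in zip(counts, areas_um2)]  # cells per 10^4 um^2
print(f"{mean(density):.2f} goblet cells per 10^4 um^2 (one larva, 10 sections)")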
The full-length il22bp cDNA sequence of Hd-rR Japanese medaka is 1,717 bp long and comprises a 660 bp ORF encoding a predicted 219 aa protein with an estimated mass of 24.87 kDa (ENSORLG00000019053). For the inbred Cab Japanese medaka, we cloned a 660 bp ORF of the il22bp cDNA (LC528231) encoding a predicted 219 aa protein with a 21 aa signal peptide at the N-terminus ( ). The mature IL-22BP peptide contains 198 aa, and its estimated mass is 22.72 kDa. Multiple sequence alignments revealed substantial conservation among the predicted aa sequences and functional domains of medaka IL-22, IL-22RA1, and IL-22BP and those of other organisms ( ). Japanese medaka Ol_IL-22 contains four cysteine residues, which are conserved in IL-22 from other fish species. Of the four cysteine residues, three are also conserved in mammals. The Ol_IL-22 sequence showed a high identity (66.8%) and similarity (87.8%) with the Chinese perch Sc_IL-22 sequence in GenBank. The six A–F α-helices present in human Hs_IL-22 were also identified in Ol_IL-22 ( ). According to the PSIPRED ( http://bioinf.cs.ucl.ac.uk/psipred/ ) and SWISS-MODEL programs, the sequence features and predicted 3D structure of Ol_IL-22 resembled those of its human ortholog ( ). The deduced Ol_IL-22 sequence contained three cysteine residues conserved among fish and mammals and one cysteine residue conserved only among fish species. The Ol_IL-22RA1 protein has a fibronectin type III (FNIII) domain 1 (24–112 aa), a FNIII domain 2 (121–222 aa), and a transmembrane region (227–248 aa). Ol_IL-22RA1 shared similarity (64.6%) with human Hs_IL-22RA1 and, among sequences from the aligned species, showed the highest similarity (72.2%) with Japanese pufferfish Tr_IL-22RA1. IL-22RA1 3D structure prediction revealed structural similarity among the proteins from medaka, yellow catfish, and humans ( ). Ol_IL-22BP contained the FNIII domain 1 (28–122 aa) and the FNIII domain 2 (128–218 aa). Two FNIII domains and four cysteine residues for disulfide bridge formation were conserved in all aligned IL-22BP proteins. Ol_IL-22BP shared similarity (66.7%) with human Hs_IL-22BP and showed the highest similarity (75.36%) with Atlantic salmon Ss_IL-22BP ( ). The predicted 3D structure of Ol_IL-22BP showed similarity with those of yellow catfish and humans ( ). We aligned the aa sequences of Japanese medaka IL-22 and members of the IL-10 cytokine family with those of other fish and vertebrates. The phylogenetic tree showed that all sequences clustered into three major clades. The first comprised smaller clades for IL-19, IL-20, and IL-24. The second comprised clades for IL-26 and IL-10. The third was a major clade containing a smaller clade for IL-22 that was further divided into clades for fish, mammals, birds, and amphibians. Medaka IL-22 was localized to the fish IL-22 clade, and its nearest relatives were those of mandarin fish and turbot ( ). We also aligned the aa sequences of Japanese medaka IL-22RA1, IL-22BP, and CRF2 with those of other fish and vertebrates ( ). In the phylogenetic tree, all sequences clustered into a clade comprising IL-10R2, IL-20R1, IL-22RA1, and IL-22BP and another comprising IL-10R1 and IL-20R2. Medaka IL-22RA1 was localized to the fish IL-22RA1 clade and was closely related to northern pike and Atlantic salmon IL-22RA1. Medaka IL-22BP was localized to the fish IL-22BP clade and was closely related to mandarin fish IL-22BP. We analyzed the gene order and orientation using BLASTn for the contigs harboring il22, il22ra1, and il22bp ( ).
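The identity and similarity percentages quoted above are typically derived from a pairwise alignment; the short Python sketch below shows one common way to compute them from two pre-aligned (gap-containing) sequences. The toy sequences and the amino acid similarity groups are our assumptions, as different tools use different groupings.

# Commonly used amino acid similarity groups (one convention among several).
SIMILAR = [set("AG"), set("ILVM"), set("FYW"), set("KRH"),
           set("DE"), set("ST"), set("NQ"), set("C"), set("P")]

def identity_similarity(aln1, aln2):
    """Percent identity/similarity over non-gap aligned positions."""
    assert len(aln1) == len(aln2)
    pairs = [(a, b) for a, b in zip(aln1, aln2) if a != "-" and b != "-"]
    ident = sum(a == b for a, b in pairs)
    simil = sum(a == b or any(a in g and b in g for g in SIMILAR) for a, b in pairs)
    return 100 * ident / len(pairs), 100 * simil / len(pairs)

print(identity_similarity("MK-LLCASDE", "MKALLCTSDD"))  # toy aligned fragments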
ifng, mdm1, and cand1 were localized upstream or downstream of il22 on chromosome 23 in Japanese medaka as well as in other fish and vertebrate species ( ). cnr2, pnrc2, and myom3 were localized upstream of il22ra1 on chromosome 16 in Japanese medaka as well as in other fish and vertebrate species ( ). olig3 and ifngr1 were localized upstream of il22bp on chromosome 15 in Japanese medaka as well as in other fish and vertebrate species, except zebrafish ( ). Tissue Distribution of il22, il22ra1, and il22bp mRNA We analyzed the il22, il22ra1, and il22bp expression levels in the brain, gills, intestines, kidneys, liver, muscles, skin, and spleen. qPCR analysis showed the expression of these genes in all sampled tissues ( ). The genes were highly expressed in the healthy medaka gills, intestines, and skin mucosae. il22bp was also abundant in the muscle, liver, and brain ( ). We also investigated temporal changes in larval (1–21 dph) il22 expression and found that il22 was ubiquitously expressed at all developmental stages, with a significant increase in expression at 7 dph ( ). We performed histological staining by ISH on adult medaka (3 mph) gills and intestines and larval medaka (14 dph) intestines. We observed il22 and il22bp expression in the intestinal and gill epithelia of healthy medaka ( ) and the intestinal epithelia of medaka larvae ( ), with no detection in the negative controls using the sense probe ( ). Although we attempted to detect il22ra1 expression, no signal was observed (data not shown). IL-22-KO Medaka Strain Establishment The crRNA target site in exon 1 of il22 was mutated with high efficiency ( ). We injected a mixture of sgRNA and Cas9 protein into embryos and confirmed a 4 bp deletion in the region containing the crRNA target site ( ). In the mutant strain, the IL-22 aa sequence was terminated in the middle of the full-length sequence because of a codon frameshift ( ). Nevertheless, the IL-22-KO (–4) larval and adult strains showed no morphological anomalies ( ). il22, il22ra1, and il22bp Expression in Larval WT and IL-22-KO Medaka Compared with WT, IL-22-KO (–4) medaka showed marked downregulation of il22 (i.e., the mutated il22 transcripts) and il22bp ( ). There was no difference in il22ra1 expression between IL-22-KO and WT medaka ( ). WT and IL-22-KO medaka (14 dph) were treated with 0.5% (w/v) DSS for 24 h and observed for 5 days to confirm whether DSS caused reproducible inflammation. DSS stimulation drastically lowered the relative survival rates; nevertheless, there was no significant difference between WT and IL-22-KO medaka in terms of post-DSS treatment survival ( ). We performed RNA-seq to investigate the effects of IL-22-KO on IL-22 downstream gene expression in response to DSS-induced inflammation. We obtained averages of 208,381,750 (WT), 218,140,591 (IL-22-KO), 210,655,244 (WT_DSS), and 217,765,749 (IL-22-KO_DSS) reads from the synthesized cDNA libraries. After annotation, 23,708 (WT), 23,710 (IL-22-KO), 23,759 (WT_DSS), and 23,804 (IL-22-KO_DSS) genes were detected in each library ( ). The overall gene expression differed between WT and IL-22-KO and between DSS-treated and untreated fish ( ). There were 179 upregulated and 368 downregulated genes in IL-22-KO compared with WT ( ). lists the top 50 differentially expressed genes (DEGs) between WT and IL-22-KO. In protein-protein interaction network analyses using the STRING App in Cytoscape, 170 of the 368 downregulated genes in IL-22-KO formed a cluster with IL-22.
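To illustrate how a 4 bp deletion truncates the protein, the Python/Biopython sketch below translates a toy ORF before and after removing four bases. The sequence is invented (the real il22 target site is in the referenced supplementary table), but the mechanism (the deletion shifts the reading frame and brings a stop codon forward) is the one described above.

from Bio.Seq import Seq

wt = Seq("ATGGAATTAAGCGGATGCTCATAA")      # toy ORF: M-E-L-S-G-C-S-stop
print(wt.translate(to_stop=True))        # -> MELSGCS

mut = wt[:3] + wt[7:]                    # delete 4 bases -> frameshift
mut = mut[: len(mut) // 3 * 3]           # trim the trailing partial codon
print(mut.translate(to_stop=True))       # -> M (premature in-frame stop)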
Of these, rora, il1b, il17a/f1, il22bp, socs3, ptgdsb.1 (lcn), and prf1 were confirmed as the first interactors with IL-22 ( ). GO analysis was performed on the significantly downregulated genes using DAVID. shows the top 10 terms under BP, CC, and MF with the most genes. Under BP, terms related to various types of immunity, cell death (including apoptosis), and cell proliferation/differentiation were annotated ( ). Immune-related genes, including socs3 and rora, and those encoding AMPs (defb and hamp) and cytokines (il1b, il12ba, il17a/f1, and il22bp), are also known to interact with mammalian IL-22. qPCR confirmed that these genes were downregulated in IL-22-KO medaka ( ). H&E staining was performed for WT and IL-22-KO medaka at 1, 2, and 5 days after DSS stimulation to detect intestinal injury and regeneration. We observed erosion in the anterior intestines of WT and IL-22-KO medaka at 1 and 2 days after DSS stimulation ( and ). At 5 days after DSS exposure, the intestinal epithelium was regenerated in the anterior intestine of WT medaka ( and ). In contrast, the anterior intestine, particularly the damaged villi (indicated by black arrows), did not recover in IL-22-KO medaka ( and ). TUNEL staining was performed to assess the degree of apoptosis caused by DSS. Under naïve conditions, the number of positive cells did not differ significantly between WT and IL-22-KO medaka, and both groups showed signals derived from the physiological renewal of the intestinal epithelium at the tops of the villi. Consistent with the H&E-based observation at 5 days after DSS stimulation, TUNEL-positive cells became ubiquitous in the intestinal epithelium, and the number of TUNEL-positive cells per field in IL-22-KO medaka was significantly higher than that in WT medaka ( ). We then subjected the IBD-associated proinflammatory genes il1b, il22, il23r, and tnfa to qPCR to confirm their expression levels. il1b, il22, il23r, and tnfa were significantly upregulated in response to DSS stimulation compared with untreated tissues ( ). Of these genes, il22 was significantly downregulated in IL-22-KO medaka compared with WT medaka ( ). For RNA-seq, we compared relative changes in the gene expression of WT and IL-22-KO medaka treated with DSS. There were 556 significant DEGs between WT and WT_DSS ( P < 0.05). lists the top 50 DEGs between WT and WT_DSS. Of these, 408 were upregulated and 148 were downregulated ( ). Meanwhile, a comparison of WT_DSS and IL-22-KO_DSS showed 340 significant DEGs ( P < 0.05). lists the top 50 DEGs between WT_DSS and IL-22-KO_DSS. In all, 132 genes were upregulated, and 208 genes were downregulated ( ). GO analysis was performed on the DEGs between WT and WT_DSS. The 148 downregulated genes were annotated with terms such as lipid metabolism, defense response, and extracellular matrix organization under BP. The 408 upregulated genes were annotated with terms such as cell death, acute inflammatory response, cell proliferation, angiogenesis, cell growth, cell migration, cell-junction, and Wnt signaling pathway ( ). KEGG pathway enrichment analysis showed that the PI3K-Akt and MAPK signaling pathways were enhanced ( and ). GO analysis was performed on the DEGs obtained by comparing WT_DSS and IL-22-KO_DSS. The 208 downregulated genes were annotated with terms such as response to chemical stimulus, complement activation, cell growth, and angiogenesis ( ). KEGG pathway enrichment analysis showed that the PI3K-Akt and MAPK signaling pathways were inhibited ( and ).
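The DEG bookkeeping above (e.g., 408 up and 148 down between WT and WT_DSS at P < 0.05) amounts to a simple filter on a results table; a pandas sketch with an invented table follows.

import pandas as pd

# Invented differential-expression results (log2 fold change of DSS vs. naive).
res = pd.DataFrame({
    "gene":   ["il1b", "il22", "tnfa", "ddit4l", "geneX"],
    "log2fc": [2.4, 1.9, 1.6, 1.1, -0.2],
    "pvalue": [0.001, 0.004, 0.010, 0.030, 0.700],
})

sig  = res[res["pvalue"] < 0.05]
up   = sig[sig["log2fc"] > 0]
down = sig[sig["log2fc"] < 0]
print(len(up), "upregulated,", len(down), "downregulated")
print(sig.sort_values("pvalue").head(50))  # cf. the "top 50 DEGs" tables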
In contrast, the 132 upregulated genes were annotated with terms related to cell death and cytokine secretion ( ). Based on the RNA-seq results, we selected the PI3K-Akt and MAPK signaling pathway genes ddit4l, fgf19, and hspa5, which were downregulated in DSS-stimulated IL-22-KO medaka to < 50% of the level in DSS-stimulated WT medaka. Gene expression was also quantified using qPCR analysis ( ). The PI3K-Akt signaling pathway gene ddit4l was significantly upregulated in WT_DSS compared with WT, and significantly downregulated in IL-22-KO_DSS compared with WT_DSS ( ). fgf19 was significantly downregulated in IL-22-KO medaka compared with WT medaka ( ). In contrast, qPCR analysis revealed that hspa5 was significantly upregulated in WT_DSS compared with WT, but there was no significant difference in hspa5 expression between IL-22-KO_DSS and WT_DSS ( ). In an additional experiment, DSS stimulation was also performed in adult medaka using an immersion strategy similar to that used in the larval analysis, and genes related to the inflammatory response and mucus production were selected based on the results of the larval experiment and quantified using qPCR. For the qPCR analysis of il22 and inflammation-related genes, the mucosal tissues, including those from the intestines (anterior and posterior), gills, and skin, were selected as target tissues to assess the effects of DSS. In IL-22-KO medaka, il22 transcripts were significantly downregulated in the anterior and posterior intestines compared with WT in both the naïve and DSS-stimulated states. The number of il22 transcripts in the IL-22-KO medaka gills decreased only upon DSS stimulation, and no significant change was observed in the transcripts expressed in the skin. Of the investigated genes, ddit4l was downregulated in the IL-22-KO medaka anterior intestine in both the naïve and DSS-exposed states, and this result corresponded to that of the larval experiment ( ). RNA-seq data analysis revealed that multiple complement genes in WT were upregulated upon DSS stimulation, and a comparison of the FPKM values of WT_DSS and IL-22-KO_DSS also revealed significantly lower expression of several complement and related genes in IL-22-KO medaka. DSS-treated WT medaka showed significant upregulation of the c1qc, c1ql2, c4b, and c6 genes encoding complement factors. In IL-22-KO_DSS, however, c1ql2, c1r, c4b, c5, c6, and c7 were significantly downregulated compared with WT_DSS. KEGG enrichment analysis showed the relative downregulation of the complement cascade in IL-22-KO_DSS ( ). Of the abovementioned genes, the expression of c1qc and c6 was also confirmed by qPCR. The expression of c1qc increased significantly in WT upon DSS stimulation, as observed via RNA-seq ( ). Additionally, c6 expression in IL-22-KO was significantly lower than that in WT in the naïve state ( ). We performed AB staining to detect relative changes in the mucus layer and acidic mucus-producing cells (goblet cells) in response to DSS stimulation. WT and IL-22-KO medaka presented with expanded anterior intestine mucus layers at 1 day after DSS stimulation ( ). However, there was no significant increase in the number of AB-positive cells in WT medaka ( ). Meanwhile, IL-22-KO medaka had significantly fewer AB-positive cells than WT medaka, and the numbers increased significantly after DSS stimulation ( ).
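The "< 50% of the WT_DSS level" criterion used above to shortlist ddit4l, fgf19, and hspa5 is a plain FPKM-ratio filter; a sketch with invented FPKM values follows.

# Invented FPKM values for the two DSS-stimulated groups.
wt_dss = {"ddit4l": 120.0, "fgf19": 30.0, "hspa5": 800.0, "c6": 15.0}
ko_dss = {"ddit4l": 40.0, "fgf19": 9.0, "hspa5": 350.0, "c6": 9.0}

shortlist = [g for g in wt_dss if ko_dss[g] < 0.5 * wt_dss[g]]
for g in shortlist:
    print(f"{g}: IL-22-KO_DSS / WT_DSS = {ko_dss[g] / wt_dss[g]:.2f}")
# With these numbers, ddit4l, fgf19, and hspa5 pass the filter; c6 (0.60) does not.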
In mammalian intestines, FGF-7 (also known as keratinocyte growth factor; KGF) is widely known to contribute to mucus production via goblet cell proliferation ( ). Additionally, type 2 cytokine (IL-4, IL-9, and IL-13) responses, particularly those associated with IL-4 and IL-13, are known to contribute to goblet cell development ( ). In teleosts, il4/13a2 is the counterpart of two paralogous mammalian genes, IL4 and IL13 ( ). In larval medaka subjected to DSS stimulation, qPCR analysis revealed the upregulation of il4/13a2 in WT, but no significant difference was observed in the expression levels between WT and IL-22-KO medaka ( ). fgf7 showed significantly lower expression in larval IL-22-KO medaka than in WT medaka in the naïve state ( ). Mucin 2 (muc2) is a type of secretory mucin with extremely high expression in the teleost gastrointestinal tract ( ). Although the medaka muc2 sequence is submitted as an uncharacterized protein (ENSORLG00000006006) in the Ensembl Genome Browser, orthologous muc2 sequences of multiple teleost species, including medaka (Accession No. XM_023955731), are characterized and registered in the NCBI database ( https://www.ncbi.nlm.nih.gov ). qPCR analysis showed that muc2 expression in larval IL-22-KO medaka was significantly lower than that in WT in the naïve state ( ). However, other types of mucins annotated based on the Ensembl reference showed no significant changes in the RNA-seq results ( ). Changes in fgf7 and muc2 expression in the anterior and posterior intestines of adult medaka upon DSS stimulation were also quantified using qPCR. The expression of fgf7 and muc2 in IL-22-KO intestines was significantly downregulated in the naïve state. After DSS stimulation, the expression of muc2 (in both intestines) and fgf7 (only in the posterior intestine) was significantly lower in IL-22-KO than in WT medaka ( ). IL-22 has been characterized in several teleost species. However, its functions in teleost immunity and mucus homeostasis have not been clarified. In this study, we characterized il22 and its receptors il22ra1 and il22bp in Japanese medaka. For the first time, we established an il22-mutant medaka line using the CRISPR-Cas9 genome editing system. We also developed a DSS-induced inflammation model in medaka and elucidated the roles of teleost IL-22 by comprehensive transcriptomic analyses. The cloned medaka IL-22 comprised six α-helices, which is a typical structure of IL-10 cytokine family members ( ). The crystal structures of human and zebrafish IL-22 showed that both IL-22 proteins have two disulfide bridges. However, the positions of the two bridges did not match between the human (Cys40-Cys132 and Cys89-Cys178) ( ) and zebrafish (Cys117-Cys163 and Cys118-Cys210) proteins ( ); these positions are conserved in mammals and teleosts, respectively. Medaka IL-22 showed a relatively high percentage of sequence similarity (77.3%) with human IL-22, and its domain structure was predicted to be similar to that of human IL-22. As with mammalian IL-22RA1 and IL-22BP, the medaka and other teleost homologs of these receptor genes possess two FNIII repeats, which are commonly conserved in class II cytokine receptors ( ). In medaka IL-22RA1, four cysteines form two disulfide bridges, of which one is common to teleosts and mammals and the other is specific to teleost IL-22RA1 ( ); these residues are also conserved in medaka. To date, no study on the affinity between teleost IL-22 and IL-22RA1 has been reported.
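The medaka muc2 record cited above (Accession No. XM_023955731) can be inspected programmatically; the Biopython sketch below assumes internet access and an e-mail address (required by NCBI Entrez), and is just one way to pull the record.

from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a real address
handle = Entrez.efetch(db="nucleotide", id="XM_023955731",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()
print(record.description)
print(len(record.seq), "bp")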
In medaka IL-22BP, the conservation of the two disulfide bridges common to teleosts and mammals was also predicted. Therefore, characteristic cysteine residues that form disulfide bridges and are conserved in teleost genes are also conserved in the two IL-22 receptors of medaka. Additionally, similar 3D structures of IL-22, IL-22RA1, and IL-22BP, with six α-helices in IL-22 and FNIII domains in IL-22RA1 and IL-22BP, were predicted. Furthermore, the results of the synteny and phylogenetic analyses strongly suggested that the three IL-22-related genes identified in Japanese medaka are orthologous to their mammalian counterparts. In the qPCR analysis, medaka il22, il22ra1, and il22bp were ubiquitously expressed in all tissues, including the brain, gills, intestines, kidney, liver, muscle, skin, and spleen. Of these, mucosal tissues, such as those in the gills, intestines, and skin, showed high expression of il22, il22ra1, and il22bp. The tissue distribution of gene expression was consistent with the results previously reported in other teleost species ( – ). Meanwhile, in mammals, mucosal lymphoid tissues did not show the highest levels of IL22 expression. For example, the highest expression of mouse IL22 was detected in the cerebellum, along with relatively high expression in the colon ( ). Meanwhile, the same transcriptome data showed high expression of IL22RA1 and IL22BP in the small and large intestines of mice ( ). In our histological analysis using ISH, the signals of il22 and il22bp were detected in the epithelium of the intestines and gills. The expression patterns and tissue distributions of il22 and il22bp suggest putative functions of the gene products in mucosal immunity. Meanwhile, even though we attempted to detect il22ra1 expression, no positive signal was observed (data not shown). il22ra1 expression was also not detected via RNA-seq of medaka larvae, although it was detected in the qPCR analysis. In teleosts, the tissue localization of il22ra1 has not been reported. It is known that in mammals, IL22RA1 is broadly expressed in the epithelial cells of mucosal tissues but not in specific cell types such as hematopoietic cells ( ), and the ubiquitous localization of the protein in the intestinal epithelium can be confirmed by immunohistochemistry ( ). Taken together, the low expression level and broad, diffuse expression across tissues may have prevented the detection of medaka il22ra1 via ISH. Transcriptome analyses were performed to compare gene expression between WT and IL-22-KO medaka using RNA-seq, after which several genes were selected based on the RNA-seq results and their expression was quantified using qPCR. In the protein-protein interaction network analysis performed using the STRING App of Cytoscape, the significantly downregulated genes in IL-22-KO medaka under naïve states formed a cluster with IL-22, and among them, rora, il1b, il22bp, il17a/f1, socs3, ptgdsb.1 (lcn), and prf1 were the first-neighbor genes of IL-22. Additionally, in the naïve-state comparison, GO analysis confirmed that the genes downregulated in IL-22-KO medaka were particularly classified under GO terms related to immune response, cell death, and cell proliferation. The cytokine (il1b, il12ba, il17a/f1, and il22bp), AMP (defb and hamp), and apoptosis-related genes (bcl2l15, nupr1, and chac1) corresponded to these GO terms.
Mammalian IL-22 is generally known to induce various AMPs, including S100A7, S100A8, S100A9, β-defensin 2, RegIIIγ, and RegIIIβ ( , – ). In previous studies on IL-22RA1 KO mice, the induction of Bcl2l15, Nupr1, and Chac1 transcripts was suggested to be associated with the IL-22/IL-22RA1 axis via STAT3 activation ( ). Furthermore, recombinant protein-based functional studies on teleost IL-22 revealed that recombinant IL-22 can induce defb and hamp expression in rainbow trout ( ), il1b expression in grass carp ( ), and il22bp and hamp expression in mandarin fish ( ). Thus, the phenotypic characteristics of IL-22-KO medaka showed several similarities with the IL-22 induction abilities reported in previous studies on mammals and teleosts. In this study, we treated medaka with DSS to induce IBD-like injury. The DSS-induced inflammation model presented with idiopathic erosions and ulcers, resembling ulcerative colitis symptoms. H&E staining revealed intestinal epithelial erosion in response to DSS treatment. Upon DSS stimulation by immersion, medaka larvae showed intestinal symptoms and expression changes of multiple genes similar to those previously suggested to be associated with human IBD or IBD experimental models. Both RNA-seq and qPCR analyses in whole larval medaka showed significant upregulation of the inflammatory cytokine genes il1b, tnfa, and il22, and the upregulation of these genes was also quantified in the intestines of adult medaka using qPCR. The expression of IL-22 and these inflammatory cytokines also increases during the development of human IBD. Il23r-deficient mice lack Il22 expression, and IL-23R-mediated IL-22 production is considerably important for improving colitis ( , ). Furthermore, GO analyses revealed that multiple genes related to specific GO terms, including cell proliferation, regulation of cell death, angiogenesis, and the Wnt signaling pathway, among others, were upregulated upon DSS stimulation. Of the genes categorized under these GO terms, the expression of genes such as cd38 and nr4a was reported to be elevated in mammalian intestines upon DSS treatment ( , ). Additionally, DSS-induced injury promotes Wnt signaling for epithelial renewal and regeneration ( ). Generally, in DSS experiments on mice, stimulation is performed by supplying DSS in drinking water ( ). Meanwhile, with the immersion technique used here, inflammatory responses similar to those reported in previous studies on mice were observed with respect to transcript responses and intestinal histology. After DSS immersion, IL-22-KO medaka showed different symptoms and related phenotypes compared with WT medaka. In the histological study, the intestinal tissues of WT medaka recovered 5 days after DSS exposure, whereas the intestinal tissue of IL-22-KO medaka did not heal within this period; this delayed wound healing in IL-22-KO medaka was observed by H&E staining. The number of positive signals in TUNEL staining at 5 days in IL-22-KO was significantly higher than that in WT. In the mammalian intestinal tract, the IL-22/STAT3 axis is known to induce various apoptosis suppressor genes ( – ). However, contradictory effects of IL-22 as an apoptosis accelerator have also been reported in recent years. A recent report showed that colonic IL-22RA1-KO impairs apoptosis and the induction of DNA repair-related genes in response to DSS and azoxymethane treatment and promotes subsequent tumor development ( ).
However, most details of IL-22-induced apoptosis in response to DNA damage in the intestinal tract and its anti-cancer effects are yet to be clarified. In the present study, the consistent results from H&E staining-based histological observation and TUNEL staining indicate the attenuation of wound healing ability in IL-22-KO medaka. Multiple previous studies on IL-22-KO mice have shown that IL-22 acts via STAT3 activation during the protection and wound healing of the intestinal epithelium upon DSS treatment ( , ). Transcriptomic changes that can cause the histological differences between WT and IL-22-KO medaka were confirmed in the RNA-seq analysis. KEGG analysis revealed the downregulation of the PI3K-Akt and MAPK signaling pathways in IL-22-KO medaka. In the DSS-induced colitis mouse model, PI3K-Akt signaling also contributed to wound healing in the intestinal epithelium ( , ). We examined the alteration of mucus production in DSS-treated fish and its relationship with IL-22 cascades. Our results showed an increase in mucus production upon DSS treatment and a significant decrease in goblet cell number in IL-22-KO medaka (compared with WT medaka) in both the naïve and DSS-stimulated states. In our analysis, the downregulation of fgf7 and muc2 in IL-22-KO medaka under the naïve state was confirmed both in the larval whole body and in the adult intestines. In a study on the intestines of IL-22-KO mice, the increase in goblet cell number and muc2 expression was suppressed upon intestinal helminth infection ( ). Additionally, patients with ulcerative colitis showed reduced goblet cell counts and mucus thickness, and these symptoms were also observed in cases of DSS-induced inflammation in mice ( ). muc2 deficiency directly results in the development of colitis in mice ( ). Meanwhile, zebrafish larvae immersed in DSS water in a previous study showed a drastic increase in the apparent mucus layer with no increase in goblet cell number or muc5 expression ( ). In our transcriptome analysis, the expression of muc genes other than muc2 showed no significant changes. The discrepancy between the apparent mucus production and the muc expression levels may be attributed to quantitative changes in glycosylation at the different O-type glycosylation sites present in mucins. In fact, changes in the degree of mucin glycosylation observed in patients with IBD reportedly promote IBD pathogenesis ( , ). This study showed strong interactions between IL-22 and complement components in response to DSS treatment. At present, most details of the IL-22-dependent activation of the complement pathway in patients with IBD or in DSS-induced inflammation models remain unknown. However, IL-22-KO mice showed lower intestinal expression of C3 than WT mice, with increased susceptibility, when infected with Clostridium difficile ( ). Other studies on mice have shown that C1q expression is elevated in the recovery phase after DSS exposure, and C1q-mediated Wnt signaling activation has recently been suggested to be important for tissue repair and mucosal regeneration ( ). In this study, we showed that the complement genes c1qc, c1ql2, c4b, and c6 were significantly upregulated in WT medaka in response to DSS stimulation. Interestingly, complement genes, including c1ql2, c1r, c4b, c5, c6, and c7, were significantly downregulated in IL-22-KO_DSS compared with WT_DSS.
The following two areas will be addressed with high priority in our future studies: 1) the potential direct involvement of IL-22-mediated signals in the induction of complement factors, and 2) the relationship between the increased expression of complement genes and the pathophysiology of DSS-induced enteritis. In conclusion, we established IL-22-KO medaka and compared the phenotypes between WT and KO medaka after characterizing three medaka IL-22-related genes: il22, il22ra1, and il22bp. The phenotypic comparisons were performed based on transcriptomic and histological analyses, and the characteristics in the naïve and DSS-stimulated states were compared. IL-22-KO medaka showed the downregulation of several genes that were previously associated with IL-22-dependent induction in mammals and teleosts. Additionally, IL-22-KO medaka showed delayed wound healing and reduced goblet cell numbers after DSS stimulation. Along with the histological characteristics of IL-22-KO, transcriptome analysis indicated expression changes in specific genes, which may have been a causal factor in the deterioration of homeostasis in the intestinal tract. Our findings showed that the DSS experimental model developed using medaka larvae may be a viable option for basic research on IBD and also suggested the involvement of IL-22-mediated signals in the pathophysiology of enteritis in medaka. Ethical review and approval was not required for the animal study because the ethics committee does not require approval for experiments on fish. Conceptualization: YT, YO, HM, TK, MS, and J-iH. Methodology: YT, YO, HM, TK, NH, MW, and J-iH. Visualization: YT, YO, and J-iH. Investigation and Resources: YT and YO. Data Curation, Validation, and Formal Analysis: YT, TK, NH, MW, and J-iH. Project Administration: YT, MS, and J-iH. Supervision: MS and J-iH. Writing-Original Draft: YT, YO, and J-iH. Writing-Review & Editing: TK, MS, and J-iH. Funding Acquisition: YO, MS, and J-iH. Funding: Grant-in-Aid for Scientific Research (A) and (B) from the Japan Society for the Promotion of Science (JSPS), Japan [Nos. 17H01486 and 17H03863] and a Grant-in-Aid for JSPS Research Fellows from JSPS, Japan [No. 19J14996].
The effect of the COVID-19 pandemic on forensic cases admitted to an emergency department
6e2ebfa1-560c-4e24-9ecf-00ca5d895610
9753859
Forensic Medicine[mh]
Incidents such as traffic accidents, poisonings, suicide attempts, assaults, and gunshot wounds that lead to the mental and physical deterioration and even death of individuals are considered judicial events. Diagnosis and medical intervention in forensic events are mostly performed by emergency services [ – ]. The COVID-19 pandemic has caused many changes in our lives. Along with the health problems caused by the disease, the loss of relatives, restrictions in daily life, and financial difficulties, the risks of loneliness, hopelessness, insomnia, anxiety, anger, suicide, and violence have increased all over the world [ – ]. As a result of the measures taken, a decrease in traffic accidents could be predicted, due to the decrease in pedestrians and vehicles on the streets [ – ]. This study aims to compare the forensic cases admitted to our emergency department before and during the COVID-19 pandemic and thereby to reveal the effect of the COVID-19 pandemic on forensic cases. We believe that these data can shed light on preparing social support programs for the public in similar situations that may occur in the future. This research was planned as a retrospective observational study and performed in the Emergency Department of Fatih Sultan Mehmet Education and Research Hospital. Our hospital is a third-level training and research hospital in Istanbul, with 290,000 patients admitted to our emergency department annually. We defined the pandemic period as 11.03.2020, when the first COVID-19 case was reported in our country, to 01.06.2020, when the normalization process started. We used the same date range 1 year earlier to define the pre-pandemic period (11.03.2019 to 01.06.2019). At our institution, every patient admitted to the emergency department was assessed as a possible forensic case. Forensic cases were first evaluated in the emergency department by emergency medicine specialists. This initial evaluation was based on the patient’s statement, a physical examination, and tests that can be performed under emergency conditions. Cases brought in by the police were also included. We selected the patients who were given a forensic case code from the hospital records. Regarding the inclusion and exclusion criteria, all cases given a forensic case code in the emergency department during the study period were included in the study. Although it was very unlikely, it was determined that those whose records had any missing data would be excluded from the study. Patients’ data were obtained from the hospital information system. Patients were grouped into pre-pandemic period cases and pandemic period cases. The age, gender, and reason for admission (in-vehicle traffic accident, pedestrian traffic accident, suicide attempt, assault, knife wound, firearm injury, work accident, forensic examination before or after police pursuit, fall from height, poisoning, drowning, burns, and electrical accident) were statistically compared between the groups. R Version 2.15.3 (R Core Team 2013) was used for the statistical analysis. Study data were reported as means, standard deviations, frequencies, and percentages. The conformity of the quantitative data to the normal distribution was tested using the Shapiro–Wilk test and graphical examinations. An independent-samples t-test was used for comparisons of normally distributed quantitative variables between the two groups.
Pearson’s chi-square test, Fisher’s exact test, and the Fisher-Freeman-Halton exact test were used to compare the qualitative data. Statistical significance was accepted as p < 0.05. Approval for the study was obtained from the Ministry of Health COVID-19 Scientific Research Platform, Number x-2020-06-04T11_03_30.xml, as well as from the Clinical Research Ethics Committee of Fatih Sultan Mehmet Education and Research Hospital, Number 2020/83. During the period determined for the study, 4296 patients were registered as forensic cases; all of these patients were included in the study, as there were no missing data. Of the total of 4296 patients included in the study, 3011 cases (70.09%) had been admitted during the pre-pandemic period and 1285 (29.91%) during the pandemic period. Only 22.4% ( n : 675) of the patients who were admitted during the pre-pandemic period and 26.1% ( n : 335) of the patients who were admitted during the pandemic period were women. There was a significant difference between the pre-pandemic period and the pandemic period cases in terms of gender ( p = 0.010): the rate of women admitted as forensic cases during the pandemic period was higher than before the pandemic. The mean age of pre-pandemic period admissions was 31.52 ± 12.66 years, while the mean age of pandemic period admissions was 32.13 ± 12.6 years. There was no statistically significant difference between the two groups in terms of age ( p > 0.05). A total of 425 of the patients participating in the study were younger than 18 years of age: 122 during the COVID period and 303 during the pre-COVID period. The cause-related distribution of the patients presenting during the pre-COVID period was examined: 40 patients (13.2%) were admitted due to traffic accidents, 155 patients (51.2%) for forensic examination, 82 patients (27.1%) due to assault, seven patients (2.3%) for suicide, 19 patients (6.3%) due to a work accident, four patients (1.3%) for falling from a height, and two patients (0.7%) for burns. The distribution of patients under the age of 18 at the time of COVID was as follows: seven patients (5.7%) were admitted due to a traffic accident, 80 patients (65.6%) for forensic examination, 28 patients (23%) due to assault, four patients (3.3%) for suicide, and three patients (2.5%) due to a work accident. During the pandemic, while the percentages of suicide attempts, motorcycle traffic accidents (TAs), and assault incidents were higher than in the time before the pandemic, the percentages of in-vehicle TAs and pedestrian TAs were lower (respectively, p = 0.035, p = 0.005, p = 0.001, p = 0.015, p = 0.008). The percentage of women suffering domestic violence at the time of the pandemic was higher compared to the time before the pandemic ( p < 0.001). For males, while the percentage of motorcycle TA events was higher at the time of the pandemic compared to the time before the pandemic, the percentages of forensic examinations and falls from height were lower (respectively, p < 0.001, p = 0.003, p < 0.001). The descriptive data are summarized in Table . The genders of the forensic cases, the pre-pandemic and pandemic distributions, and the p values are summarized in Table . COVID-19 started in China in December 2019 and was declared a pandemic by the World Health Organization (WHO) in March 2020. Due to the pandemic, we are going through a period in which national economies, health systems, and individuals’ physical and mental health are all affected.
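The gender comparison reported above can be reproduced directly from the published counts; the Python/scipy sketch below builds the 2×2 table (women/men by period) and applies Pearson's chi-square test without continuity correction, which gives p ≈ 0.010 as reported.

from scipy.stats import chi2_contingency

#                women  men
pre_pandemic = [675, 3011 - 675]
pandemic     = [335, 1285 - 335]

chi2, p, dof, expected = chi2_contingency([pre_pandemic, pandemic],
                                          correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")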
Restrictions to prevent the spread of the disease have resulted in social isolation. The closure of non-essential businesses has also resulted in financial difficulties. These negative factors may lead to a worldwide increase in loneliness, anxiety, hopelessness, suicidal tendencies, and domestic violence [ – , ]. In addition to these issues, a decrease in traffic accidents was expected as a result of the decrease in the number of pedestrians and vehicles on roads due to the measures taken [ – , ]. Events that require judicial investigation and that cause the deterioration of people’s physical and mental health through external factors, such as traffic accidents, suicide attempts, firearm or penetrating instrument injuries, physical or sexual violence, work accidents, and poisoning, are defined as judicial cases. Forensic cases are usually first admitted to emergency services, and their diagnosis and medical intervention are mostly provided by emergency services [ , , ]. Various studies in our country have reported the mean age of forensic cases as between 27 and 33 years. In our study, the mean age was 31.70 ± 12.64 years, which is similar to that reported in the literature, and there was no difference in the mean age between the pre-pandemic period and the pandemic period. In this study, we found a significant difference in terms of gender for both the pre-pandemic and pandemic periods: men were more likely to be admitted to the emergency department for forensic reasons than women in both groups. We also found that the rate of women’s admissions during the pandemic period was significantly higher than in the pre-pandemic period. In studies conducted before the pandemic, work accidents and traffic accidents were reported more frequently for men in our country, and it was observed that the majority of forensic cases involved men [ – ]. This may be because men are more involved in social and business life. This study indicates a statistically significant increase in the percentage of admissions due to assault on women during the pandemic period compared to the pre-pandemic period. It has been reported that domestic violence increased at different rates in different countries during the pandemic. Domestic violence is generally defined as the physical, emotional, economic, or sexual abuse of the weak by the strong. It can refer to violence between partners, or to violence against a child or elderly person at home [ , , ]. Every member of society can experience domestic violence, but it has been reported in the literature that women are exposed to higher rates of violence compared to men [ , , ]. Stress factors, unemployment, decreased income, decreased social support, and alcohol and substance use are among the factors that cause domestic violence [ , , , – ]. In addition, it has been demonstrated that domestic violence increases during natural or man-made disasters [ , , , ]. In this study, the percentage of admissions due to suicide attempts increased significantly during the pandemic period compared to the pre-pandemic period. When we look at gender, this increase was observed in the female group. Due to the negative effects of the pandemic, there may be an increase in suicide attempts, and similar results have been reported in the literature [ – ]. Buschmann and Tsokos evaluated 11 suicide cases following the restrictions due to the COVID pandemic.
They stated that all the patients had underlying psychiatric conditions, that their COVID tests had been negative, and that they had high levels of anxiety according to the anamnesis taken from the relatives of the patients. Existing psychiatric disorders may worsen with social isolation, and depressive disorders and suicidal tendencies may increase when social support is removed [ – ]. The fact that women have less economic and social support, or limited access to existing support due to the pandemic, together with the increase in domestic violence, may also explain the higher incidence of suicide attempts among women. Since our study was retrospective, we cannot say exactly how many of the women had experienced domestic violence. However, given the social context, it is likely that these assault cases occurred mostly inside the home. Countries implemented different degrees of restriction to prevent the spread of the disease during the pandemic. In our country, schools, non-essential workplaces, and entertainment venues were closed, travel restrictions were imposed, and a curfew was imposed. Due to the effect of these restrictions, the number of pedestrians and vehicles on the road decreased; similar effects were observed globally [ , , , ]. The expected effect of the reduction in traffic was a reduction in traffic- and vehicle-related accidents. In some countries, such as the USA, Australia, England, Spain, and Denmark, it was observed that traffic accidents decreased in line with this expectation [ , , ]. By contrast, Hakkenbrak et al. reported an increased number of traffic-related accidents in the Netherlands; this may be because there was no stay-at-home order or curfew in the Netherlands. On the other hand, Tandon et al. observed that, although traffic decreased in the US state of Virginia, there was no decrease in traffic-related accidents; this may be due to reduced driving safety on empty roads and an increase in alcohol use. In our study, a decrease was observed in the number of trauma cases due to in-vehicle and pedestrian traffic accidents during the pandemic period. However, the increase in motorcycle accidents during the pandemic period was statistically significant ( p = 0.005). These results support the expectation that, with the decrease in traffic, pedestrian and vehicle accidents would also decrease. During the pandemic in our country, the curfew and working from home increased online shopping. Therefore, we believe that the percentage of motorcycle accidents among men increased due to the increase in motorcycle couriers, the majority of whom are men. Yasin et al. reported that, while the number of pedestrian and motor vehicle accidents decreased in the UAE, the number of motorcycle accidents increased. This study reveals that the pandemic also affected the number of patients who were admitted to the emergency department for forensic reasons. According to our results, the percentages of suicide attempts, assault cases among women, and motorcycle accidents among men increased during the COVID-19 pandemic. We believe that fear of the disease, the losses it caused, and the hopelessness and uncertainty about the future that people experienced caused an increase in judicial incidents such as suicide and violence.
Social support programs involving state and non-governmental organizations can be provided to enable people to stay in touch with each other, and such forensic incidents can perhaps be reduced by providing remote access to the health system through applications such as telemedicine. Our study is a retrospective single-center study. Our hospital is located in a large metropolis, and our city hosts a wide variety of people in terms of culture and faith. However, forensic cases can be affected by geographical and cultural factors; the results of this study should therefore be confirmed in multicenter, prospective studies involving regions that differ in geography, culture, and religious beliefs. This study was conducted after the first COVID-19 case was announced in our country; different results may emerge from studies covering longer periods due to fluctuations in people’s moods. Another limitation of the study is that we cannot say exactly how many of the assaulted women had been exposed to domestic violence. In future studies, it would be appropriate to examine children’s and women’s exposure to domestic violence. Since our study is retrospective, overlapping categories, such as traffic accidents with suicidal intent, could not be distinguished; this is an important limitation of our study. In conclusion, the COVID-19 pandemic also affected forensic cases: the rate of suicide attempts increased compared to the pre-pandemic period, domestic violence against women increased, and the rate of motorcycle accidents increased.
Operationalization of the Brief ICF Core Set for Hearing Loss: An ICF-Based e-Intake Tool in Clinical Otology and Audiology Practice
bbe1775d-41f1-4d9f-9e63-ecea082af07e
7722460
Otolaryngology[mh]
The consequences of ear and hearing problems are multifaceted and often go beyond the level of ear and auditory impairments in structures and functions: various aspects of functioning in daily life and general health can be negatively influenced. Examples are restrictions in social relationships, inability to perform work, and depressed mood ( ; ; ; ; ). Promoting, maintaining, and improving overall functioning from a holistic perspective, instead of applying a mere focus on impaired body structures and functions, is increasingly recognized as the primary target and point of departure in audiology, both in clinical practice and research ( ; ; ; ; ; ). To successfully assess the level of functioning of an individual with hearing problems, it is necessary to capture the whole spectrum of a person’s impairments, activity limitations, participation restrictions, and relevant contextual factors ( ). Such a bio-psychosocial perspective would form a good basis for identifying all relevant aspects that should be addressed in the care pathway ( ; ). When applying such a bio-psychosocial approach to general otology or audiology clinical practice, a challenging issue is the lack of a universal definition and of an instrument describing functioning in a standardized way ( ; , ; ; ; ). The International Classification of Functioning, Disability and Health (ICF) of the World Health Organization provides a comprehensive framework to describe functioning. It is based on a bio-psychosocial model of health ( ). According to the ICF, an individual’s level of functioning is the outcome of complex interactions between a health condition, body functions and structures (emotional, cognitive, and physical functions and anatomy), activities (tasks and demands of life), participation (engagement in life situations), and contextual factors. Contextual factors are divided into environmental factors (EF) (e.g., physical, social, and attitudinal elements that can act as a barrier to or facilitator of an individual’s functioning) and personal factors (potentially influencing how a disability is experienced, such as gender, age, habits, lifestyle, and coping styles) ( ). To make the ICF hearing-specific, two ICF Core Sets for Hearing Loss (CSHL) were developed, a brief one and a comprehensive one ( ; ). The CSHL are shortlists of ICF categories (covering body functions, body structures, activities, participation, and EF) that are considered most relevant for describing the functioning of an adult with hearing loss. While the Brief ICF CSHL provides a minimum standard, the Comprehensive ICF CSHL is meant for multiprofessional comprehensive assessment ( ; ). The Core Sets were developed through a WHO-defined process including three phases: a preparatory phase, phase I, and phase II. The preparatory phase consisted of four scientific studies. These were conducted to identify ICF categories that were considered relevant by three different stakeholder groups: (1) researchers: a systematic literature review was performed on outcome measures used in research including adults with hearing loss, and these outcome measures were linked to the ICF categories ( ); (2) experts: an internet-based international expert survey among hearing health professionals was performed ( ); and (3) patients: qualitative focus group interviews with Dutch and South-African adult patients were organized ( ).
The information collected during the preparatory phase was presented at a consensus meeting (phase I), at which consensus was reached on the final set of ICF categories to be included in the CSHL ( ). Phase II is currently ongoing, aiming to validate and test the Core Sets in practice ( ). As mentioned, the Core Sets provide a minimum standard to describe the typical spectrum of problems in functioning. This standard may be extended for any stated purpose, for example according to the needs of a specific setting ( ). In two previous studies, we examined the “overlap” between the content of the ICF CSHL and the intake documents used in oto-audiology practices in the Netherlands and the United States (i.e., the percentage of CSHL categories included in the intake documentation). Both studies showed substantial overlap (50 to 100%), supporting the CSHL’s content validity ( ; ). However, there was also partial “non-overlap”, especially in psychosocial topics, indicating that current intake procedures may not cover all aspects relevant to patients with ear and/or hearing problems (as indicated by the CSHL). In addition, the ICF category sleep function and various personal factors (currently not included in the CSHL) emerged from the intake documents as potentially relevant for functioning. This finding suggests that the CSHL may need to be expanded. While the CSHL cover lists of aspects that would need to be considered to describe functioning, it is not known how this should be done. In other words, the operationalization of the CSHL can take different forms. The aim of the current study was to operationalize the Brief CSHL into a tool to be used as an intake (admission) instrument for patients visiting the oto-audiology department. Given that an individual’s functioning is best assessed from the patient’s perspective (FDA 2009), we chose to operationalize the Brief CSHL into a self-reported diagnostic screening tool, further referred to as the “ICF-based e-intake tool”. The goal of the tool is to screen adults with ear and/or hearing problems (for simplicity, these are further indicated as “ear and hearing problems”) in order to identify the problems and the environmental and personal factors that are relevant to their functioning. This screening will be done prior to treatment and is meant to support the intake procedure and subsequent treatment or intervention. Ultimately, by using the ICF-based e-intake tool in oto-audiology practice, we aim to support and enhance patient-centered care and shared decision-making by: (1) providing an overview of the patient’s responses (i.e., his/her “functioning profile”) to both the clinician and the patient before the intake appointment; (2) discussing the profile during the intake appointment; and (3) providing tailored follow-up actions or treatment opportunities within the tool. Figure illustrates how we envisage that incorporation of the intake tool may support patient-centered care planning for individuals with ear and hearing problems. The objective of this article is to describe the process of developing the self-reported intake tool. The development of a self-reported instrument usually comprises the following six steps: (1) definition and elaboration of the construct intended to be measured, (2) choice of measurement method, (3) selecting and formulating items, (4) choice of response formats, (5) content evaluation, and (6) field-testing ( ). Steps 1–2 have been described above. This study focuses on steps 3–5.
A mixed method design was used and included: the selection of appropriate items from a pool of existing, commonly used patient-reported outcome measures (PROMs), a formal decision-making process, and qualitative content assessments. In addition, the integration of the ICF-based e-intake tool in a computer-based system is described. Selecting and Formulating Items and Choice of Response Formats Content Evaluation Data Analysis Digital Format We explored various options to integrate the intake tool in a digital format as this was the preferred mode of administration. Digital administration enables a rapid provision of the patient’s “functioning profile” to the patient and clinician during the intake procedure. A). Selection of Categories to Be Represented in the ICF-Based e-Intake Tool • B). Formulating Items for the Selected ICF Categories • C). Determining Response Formats • For existing items that were adopted verbatim, the response format was based on the original response categories. For the items formulated by the project group, the ICF qualifiers were used to describe the extent of a problem in a particular domain (i.e., no problem (0); mild problem (1); moderate problem (2); severe problem (3); complete problem (4); ). Phases A–C resulted in a preliminary item list agreed upon within the project team. Additional categories to the Brief CSHL were selected based on our previous study (see ) and based on expertise of clinicians (i.e., experienced audiologist, ENT surgeon, and psychologist). The ICF categories of the Brief CSHL are provided in Appendix 1 in Supplemental Digital Content 1, http://links.lww.com/EANDH/A637 . The method used to formulate items for the Core Set categories involved a formal decision-making and consensus process in the multidisciplinary project team consisting of an ENT surgeon, audiologist, psychologist, and researchers with relevant experience in oto-audiology research. First, a pool of items was created by linking the items from existing questionnaires to the ICF categories of the Brief CSHL and the selected additional categories. Three sources were used to create an item pool: (1) existing ear and hearing questionnaires relevant for the field as shown by the review study by . These were questionnaires available in the Dutch language; (2) additional questionnaires routinely used in Dutch clinical oto-audiology practices; and (3) general functioning questionnaires based on the concepts of the ICF (e.g., WHO Disability Assessment Schedule 2.0 [WHODAS 2.0]; ), World Health Survey [WHS]; ). This item pool was used to select specific items that were considered appropriate to screen the ICF categories. Each member of the project team evaluated and indicated the relevance (yes, no) of each item and provided additional comments to justify their choice (Phase A). Second, the results of Phase A were discussed in various meetings until consensus was reached about operationalization of each ICF category. New items were created in cases where existing items could not be linked to the particular category, or where they were considered unsuitable. For the formulations of particular constructs of these items, we used the official descriptions of the ICF categories as formulated by the WHO (e.g., e3 support and relationships: “people or animals that provide practical physical or emotional support, nurturing, protection, assistance and relationships to other persons, in their home, place of work, school or at play or in other aspects of their daily activities”; ICF 2017). 
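The linking step described above can be made concrete with a small data structure. The sketch below is purely illustrative: the item wordings, reviewer labels, and the all-reviewers consensus rule are our own assumptions for demonstration, not the project's actual records. It shows how candidate items, their source questionnaires, their linked ICF codes, and the Phase A relevance votes can be tracked, with the generic ICF qualifier scale attached as the default response format.

```python
# Illustrative sketch only: item texts, reviewer labels, and the consensus rule
# below are invented and do not reproduce the project's actual records.
from dataclasses import dataclass, field

# Generic ICF qualifier scale used as the default response format
ICF_QUALIFIERS = {0: "no problem", 1: "mild problem", 2: "moderate problem",
                  3: "severe problem", 4: "complete problem"}

@dataclass
class CandidateItem:
    text: str                                  # item wording (verbatim or newly formulated)
    source: str                                # originating questionnaire, or "project team"
    icf_codes: list                            # linked ICF categories (b/d/e codes)
    votes: dict = field(default_factory=dict)  # reviewer -> rated relevant? (Phase A)

    def consensus(self):
        """Keep an item only if every reviewer rated it relevant."""
        return bool(self.votes) and all(self.votes.values())

pool = [
    CandidateItem("How much difficulty do you have with sleeping?",
                  "project team", ["b134"]),
    CandidateItem("Can you follow a conversation in a noisy environment?",
                  "AIADH", ["b230", "d350"]),
]

for item in pool:
    item.votes = {"ENT": True, "audiologist": True, "psychologist": True}
    print(item.icf_codes, "consensus:", item.consensus(),
          "| response scale:", list(ICF_QUALIFIERS.values()))
```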
For all items, rules were drawn up to secure uniform formulations (e.g., regarding the recall period and the experienced degree of difficulty). The aim of this part was to test whether the item list was judged relevant (all items should be relevant for the construct of interest within a specific population and context of use), comprehensive (no key aspects of the construct should be missing), and comprehensible (the items should be understood by patients as intended) ( ). The preliminary item list was therefore administered to a panel of relevant stakeholder representatives. After that, it was piloted in a group of patients. D). Expert Survey • E). Patient Pilot Study • The modified item list was tested in a small sample of patients who were randomly selected from the VUmc patient pool. These were new patients who had their first appointment scheduled. Patients were recruited at Amsterdam UMC, location VUmc in Amsterdam, The Netherlands. Patients were included who visited the outpatient clinic of the VUmc for an ear and/or hearing problem for the first time, were 18 years or older, and who spoke Dutch. A maximum variation strategy ( ) was applied to select participants, with regard to patients’ ear/hearing problem(s), gender, and age. This was done to create a heterogeneous group of patients, covering the full spectrum of oto-audiology diseases/complaints, with an equal gender distribution and a wide age range. Recruitment of patients took place via the secretary of the department, who sent an information letter 2 weeks prior to the scheduled intake visit by email. When a patient indicated to be willing to participate, L. v. L. explained the study in more detail and scheduled the study interview. Recruitment of new patients ceased when variation was achieved. Patients were interviewed directly prior to their appointment with the audiologist or ENT surgeon. They were therefore asked to arrive half an hour earlier. All patients were interviewed at the outpatient clinic of VUmc. Prior to the interview, written informed consent was obtained. The intake tool was administered in a digital format. Interviews were held in Dutch. The aim of the pilot study was to study the relevance, comprehensibility, and comprehensiveness of the intake tool. This was done based on the “three-step test” interview (TSTI) ( ). The TSTI combines observational and interviewing techniques to identify how items are interpreted and whether problems occur during completion of the item list. The TSTI comprises three consecutive steps: concurrent thinking aloud, retrospective interview, and a structured interview using an interview guide. – During the first step, the interviewer observed the patients as they were completing the item list. Patients were asked and encouraged to verbalize their thoughts while doing so. The interviewer used prompts to encourage the patient to verbalize his/her thoughts. The patient’s comments and interviewer’s observations were written down by the interviewer. The time needed to complete the item list was also noted by the interviewer. – During the second step, patients were interviewed regarding their response behavior and comments made during the first step. – During the third step, a brief structured interview about the comprehensibility and comprehensiveness of the item list was conducted. The format of the intake tool and how the patient preferred to view the results of the completed item list was also discussed. 
In addition, patients were invited to share any additional comments about the intake tool. The interviewer prompts and the interview guide are shown in Appendix 3 in Supplemental Digital Content 3, http://links.lww.com/EANDH/A657 . The digital item list was pre-tested by colleagues, who needed around 15 minutes to complete the list. Hence, it was decided to reserve a time slot of 30 minutes for completion of the item list (step 1; 15 minutes) and the interview (steps 2 and 3; 15 minutes) to minimize patient burden. In one case, the intake consult was postponed somewhat (with the consent of the patient and the clinician) so that sufficient time would be available for the interview. No repeat interviews were carried out. An expert survey was conducted among Dutch representatives of all relevant stakeholder groups, that is, patients, audiologists, ENT surgeons, a general practitioner, and a clinimetrician/methodologist. The selection of experts was based on a convenience sampling method ( ) and recruitment took place through the contacts of the project team members via email. When an expert indicated that they were willing to participate, L. v. L. explained the study in more detail via email or telephone and sent the expert survey via email. Consent was implied by the expert's email agreement to participate, after which the survey was sent. The representatives were asked to score each item on its relevance and comprehensibility. In addition, the item list was rated on comprehensiveness and on the order in which the domains and associated items were queried. At the end of the survey, respondents were able to provide additional comments. See Appendix 2 in Supplemental Digital Content 2, http://links.lww.com/EANDH/A656 for the survey questions. In addition to the expert survey, the main developer of the ICF CSHL (Dr. Granberg) was consulted for feedback on the item list. This was done by using survey questions via email. Specific attention was requested for the operationalization of the hearing-related categories. This was done because the descriptions of ICF categories relating to hearing, listening, and communication are unclear and overlapping (as previously pointed out by the developers; ). For the data collected in the expert survey, results and comments were summarized by L. v. L. and discussed within the project group. Items were modified based on consensus in the project group. All patients were interviewed by a researcher who was trained and experienced in qualitative research methods (L. v. L.) (see Appendix 4 in Supplemental Digital Content 4, http://links.lww.com/EANDH/A658 for the researcher’s characteristics, which have been reported according to the COREQ criteria; ). All patient interviews were audio-recorded and transcribed verbatim. Qualitative content analysis was used ( ) to analyze the data. Coding was performed at the item level across the three steps of the interview (except for comments made in step 3, which concerned the item list as a whole and the layout of the intake tool). Comments and problems were labeled based on content and subsequently grouped into categories. Transcription and coding were performed by L. v. L., under supervision of M. P. and S. K. Transcripts were not returned to participants for comment or correction. This study was approved by the Medical Ethics Committee of the VU University Medical Centre, Amsterdam, The Netherlands (reference number 2013-067). 
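As an illustration of the item-level coding just described, the following minimal sketch (with invented codes, not actual transcript data) tallies labeled comments per item and surfaces each item's dominant problem category.

```python
# Invented example data: (item_id, problem_category) codes as they might be
# assigned to transcript fragments during the item-level content analysis.
from collections import Counter, defaultdict

coded_comments = [
    (12, "response options"), (12, "difficult formulation"),
    (27, "situation dependent"), (27, "response options"),
    (27, "response options"), (40, "difficult formulation"),
]

per_item = defaultdict(Counter)
for item_id, category in coded_comments:
    per_item[item_id][category] += 1

# Items dominated by response-option problems are candidates for a revised
# response scale (as was later done for the EF items).
for item_id, counts in sorted(per_item.items()):
    main, n = counts.most_common(1)[0]
    print(f"item {item_id}: {dict(counts)} -> main problem: {main} ({n}x)")
```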
Selecting and Formulating Items and Choice of Response Formats General Information (Personal Factors) General Body Functions Ear and Hearing Structures and Functions Activities and Participation and Environmental Factors Mastery and Coping Behavior (Personal Factors) Content Evaluation D). Expert Survey • E). Patient Pilot Study • Forty-seven patients were invited, and 11 patients participated in the TSTI (response rate 23%). Table shows their characteristics. The categorization according to the International Classification of Diseases version 2010 ( ICD-10 ) – chapter VIII, “Diseases of the ear and mastoid process”: diseases of the external ear, diseases of the middle ear, diseases of the inner ear, and other diseases – shows that the broad range of ear and hearing problems that can generally be encountered in the oto-audiology practice was represented in this group of participants. The mean time to complete the item list was 16 min (range: 9–24 min). Steps 1 and 2: Thinking aloud and retrospective interview Problems With Response Options Difficult Formulations Response Would Be Dependent on Specific Situation Instructions Were not Read It was observed that patients consistently did not read the instructions at the beginning of each domain or subset of items. Step 3: Structured Interview The data collected in step 3 showed that all patients thought that the intake tool was relevant. Regarding the content of the item list, patients stated that the items were relevant to them and comprehensible (except for the items on EF). Regarding the comprehensiveness of the item list, some patients indicated that more detail on some specific complaints would be desirable, but they did not miss any key concepts. They also agreed on the general nature of the intake tool and mentioned that further specification may not be feasible. Regarding the layout of the item list, it was mentioned twice that the font size should be somewhat bigger. Patients found it difficult to comment on presentation of the (future) functioning profile because they found it hard to envisage what this would look like. Most patients regarded the option to save or print the completed form as convenient. Regarding the layout of the intake tool, a simple format and a small number of questions per screen were preferred. A). Identification of Categories to Be Represented in the ICF-Based e-Intake Tool • Operationalization and Response Format • The ICF categories were divided into the following domains: (1) general information, including reason for visit, sociodemographic and medical background-related items; (2) general body functions; (3) ear and hearing structures and functions; (4) activities and participation (A&P); (5) EF; and (6) mastery and coping. The sections below describe how the ICF categories of each domain were operationalized. A total of 39 categories were chosen to be covered in the intake tool, including the 27 categories from the original Brief CSHL and 12 additional categories. Additional categories were added based on our previous research. These categories were as follows: – Sleep functions (i.e., b134) and Personal Factors. Our previous study showed that sleep functions and personal factors are important for patients with ear and hearing problems, but that these categories are not part of the Core Set ( ). Literature substantiates the relevance of these categories for this patient group ( ; ; ; ; ; ), and therefore the project team decided to include them in the intake tool. 
Additional categories added based on clinical expertise within the team were as follows: – The subcategories of the ICF categories (i.e., third-level) b230 “hearing function” and b240 “sensations associated with hearing and vestibular functions” (i.e., b2301–b2304 and b2400–b2405). The project team decided to include these categories as the Brief CSHL includes only second-level categories ( ). Hearing impairment and ear complaints are at the core of ear and hearing care, and therefore more detailed information on hearing functions and ear functions was regarded relevant; and – The ICF categories b250 “taste function” and b255 “smell function”. These were included because in the field of otology these are considered important indicators of nerve damage to the auditory organ. Please note that Personal Factors are not yet classified within the ICF. However, a list of examples is available from the ICF, and these include demographics, other health conditions (HCs), coping styles, social background, education and profession, past life events, overall behavior patterns, and other factors playing a role in disability ( ). In addition to demographics, other HCs, social background, and education and profession, the other personal factors operationalized were mastery and coping behaviors in communication situations. These constructs were selected because with our intake tool we aimed for (1) a global view of personal factors indicating how people deal with setbacks such as diseases (including hearing impairment/ear problems) (i.e., mastery), and (2) a specific view of personal factors indicating how the patient currently deals with his/her ear and hearing problems (i.e., coping behaviors in communication). Mastery is the extent to which a person perceives one’s life as being under one’s own control in contrast to being fatalistically ruled ( ). It is considered a relevant psychosocial resource when coping with stressful life events. For example, a higher sense of mastery is associated with better psychosocial adjustment to hearing impairment in older adults ( ). Regarding coping behaviors, evidence shows that applying maladaptive (as compared to adaptive) coping behaviors can lead to higher levels of hearing disability and subsequent psychosocial problems in people with hearing impairment (e.g., ). B-C). In a previous qualitative study, patients indicated that they would like to start the intake tool by reporting the reason for their visit to the outpatient clinic. This way, the focus of the visit would be clear to the professional ( ). Therefore, the category “reason for visit” was included as the first item. For the operationalization of demographics, other HCs, social background, and education and profession-related factors, items were based on similar items used in large national cohort studies (i.e., LASA, see ; and NL-SH, see ). For the operationalization of body functions, items were based on the content and wording of the Speech Spatial and Qualities Questionnaire items ( ), items used in a large national cohort study (LASA, ), WHODAS 2.0, WHS, and WHO’s official descriptions of ICF categories. Items were formulated as “How much difficulty do you have … [with sleeping]”. The response format was based on the ICF qualifier to specify the degree of difficulty. For the operationalization of the body functions category “temperament and personality functions”, the construct self-esteem was selected. 
This was done on the one hand because it is known that a poor hearing status can negatively affect self-esteem (e.g., ; ). And on the other hand, the level of confidence/self-esteem can influence the management of hearing loss, for instance through applying certain coping strategies ( ; ). Moreover, it is known that involvement from the social environment can positively address incurred hearing losses and lead to important benefits including higher self-esteem ( ). Lastly, hearing loss management through taking up hearing aids could negatively influence one’s confidence levels (stigma), while it could also improve self-esteem (because communication is improved). “Emotional functions” was operationalized through the constructs feelings of loneliness, depressive complaints, and anxiety complaints. These constructs are known to be commonly affected by ear and hearing problems (e.g., ; ; ; ). For the operationalization of the ICF categories on ear structures, a figure was made in which the patient could indicate where he/she thinks his/her ear and hearing problem is located. Also the response option “I don’t know” was added. It was decided that it would be relevant to know how well the patient would be able to indicate the location of the hearing or ear problem, to discuss this during the intake and to be able to correct perceptions. For the operationalization of the hearing, listening, and communication ICF categories (i.e., b230, d115, d310, d350 and d360), the project group agreed to use the validated, 28-item version of the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; ). The AIADH is being used widely in the Dutch hearing aid dispensing practice. The AIADH assesses self-reported disabilities and handicap in everyday hearing. The AIADH includes five hearing domains (subscales): auditory localization, intelligibility in noise, intelligibility in quiet, detection of sounds, and distinction of sounds. For each of the five subscales, we selected the most discriminating item based on Item Response Theory (see ). For instance, for the subscale “auditory localization”, the item “Can you hear from what corner of a lecture room someone is asking a question during a meeting?” was chosen, because this item had the highest discriminative ability to indicate auditory disability. In addition to selecting the items with the highest discriminatory power, the items on “conversations over the telephone” and “conversations in quiet” were selected to ensure coverage of all ICF categories in the intake tool. The original four-point response scale was used, “never, sometimes, often, always”. For the operationalization of ear problems, wording was based on clinical expertise, and the operationalization ran parallel to, and was influenced by, the development of the Otology Questionnaire Amsterdam (see ). The ICF qualifier system, by which the severity of the complaint can be graded, was used as response scale. For the operationalization of ICF categories in the A&P and EF domains, formulation was based on the wording of WHODAS 2.0 and WHS items and WHO’s official descriptions of ICF categories. Items in the A&P domain were formulated as “How much difficulty do you have in … [participating in community activities]”. Items in the EF domain were formulated as “To what extent do you feel supported/hindered in your daily functioning by … [your healthcare providers]”. 
The ICF qualifier system, to specify the degree of difficulty (for the A&P domain) and the degree of perceived support and degree of impediment (for the EF domain), was used as the response scale. The construct of mastery was operationalized using an abbreviated five-item version of the Pearlin Mastery Scale ( ). The scale measures the extent to which an individual regards his/her life chances as being under their personal control rather than being fatalistically ruled. The original five-point Likert scale, ranging from “strongly disagree” to “strongly agree”, was used as the response scale. For the operationalization of coping behavior relating to hearing impairment, items of the subscales “communication strategies” and “personal adjustment” (including embarrassment and acceptance of the ear and hearing problem) of the Communication Profile for the Hearing Impaired (CPHI) were chosen. CPHI items with the highest discriminating power were included (as reported in ). In addition, the response option “not applicable” was included to allow patients to indicate whether an item applied to them or not. All invited experts responded positively to the invitation and completed the expert survey. In total, the preliminary item list was assessed by 10 stakeholders: four patient representatives from Dutch patient organizations, two audiologists (one from a secondary center and one from an academic center), two (resident) ENT surgeons (one from a secondary hospital and one from an academic hospital), a general practitioner, and a clinimetrician/methodologist. With regard to the relevance of the items, most experts rated all items as relevant, but clinicians indicated that items in the A&P domain in general should be reworded. These questions would be more relevant when explicitly asking how the patient’s ear and/or hearing problems influence functioning in daily life. With regard to comprehensibility, items were generally well understood, but some suggestions for a better formulation of items or response categories were made. With regard to the comprehensiveness of the total item list, no important domains were considered to be missing. One of the patient representatives indicated the need for the opportunity to further explain his/her given pre-defined answers (open space). The order of the item list was considered adequate. Steps 1 and 2 showed that every patient encountered problems with at least one of the items of the intake tool. All patients filled in every item. Three categories of comments/problems were identified: (1) problems with response options; (2) difficulty with formulations; (3) response to the item would depend on the specific situation. These categories are discussed below. One respondent mentioned she found it difficult to choose between the response categories indicating the degree of difficulty experienced. “Then I think ‘maybe it is not so bad [the ear problem]’, for example compared to others. I find it very difficult to say such a thing about yourself”. Two respondents indicated that they had problems with the item about localization of the ear/hearing problem. They did not know how to answer this question. Almost every patient encountered problems with answering the EF items. Problems related to the fact that each category was questioned twice, that is, first to what degree the category acted as a barrier to the person’s functioning and then to what degree the category acted as a facilitator of the person’s functioning. 
Patients suggested that only one item per category should be asked, formulated either as a barrier or as a facilitating factor. In addition, the item about access to care was not well understood. One respondent reported having problems with the item about which chronic diseases are experienced “at this moment”. The respondent indicated that he had had problems in the past, but he “did not suffer from it at this moment”, and therefore did not know how to answer this item. Another respondent thought the item on feelings of loneliness was difficult to understand. Some patients indicated that the answer to items “depended on the situation”, but could always answer the question after some consideration. For example, regarding the item about difficulties when attending education, one respondent reported that the answer to this question would depend on whether the education material was provided orally or in written form. Another example was the items on coping behavior (personal factors). It was reported that whether or not one was able to cope well would depend on the specific (social) situation. One respondent suggested including the option to provide comments on the items, to be able to better explain the chosen response category. Based on the responses of the experts, changes were made to the instructions of the items covering A&P and EF so that these would specifically address factors in relation to the patient’s ear and hearing problems. The description was adjusted to “The following questions are about the influence of your ear and/or hearing problem on your daily activities” (A&P) and “The following questions address the influence of different EF on your daily functioning. With regard to your ear and/or hearing problem, indicate to what extent these provide support for your daily functioning” (EF). In addition, some items were modified to improve the wording. Based on the problems patients encountered while answering the EF items, these items and response categories were adapted. From the literature it is known that positive items are generally preferred. Therefore, only the items about the facilitating effect of each factor were retained. In addition, the item about access to care was simplified. Items adopted from existing questionnaires were retained despite the (few) identified problems. In line with patients’ suggestions, the instructions were written in a bold font style and were copied on every new page (in case of a page break). The table with the final item list is available in Supplemental Digital Content 1, http://links.lww.com/EANDH/A637 . The online portal “KLIK” was chosen to implement the intake tool. KLIK provides an online environment to administer PROMs digitally. The use of KLIK is as follows. Prior to the intake visit, patients are asked to register on the online portal ( www.hetklikt.nu ). After completion of the questionnaire, the patient’s outcomes are digitally presented and converted into a “functioning profile”. A three-color traffic light system was chosen to indicate in which area(s) further detailed examination(s), action(s), and/or intervention(s) are needed. Figure provides an example of such a functioning profile. Because the cutoff points can only be determined after sufficient data collection, the traffic light system could not be utilized for the first version of the tool. The functioning profile can be saved as PDF and/or printed. 
This way, it could be used by patients in preparing for and during the intake appointment. Moreover, the PDF format allows it to be added to the patient’s medical file such that it is visible to clinicians. This study aimed to operationalize the recently developed ICF Brief CSHL ( ) into a self-reported diagnostic screening tool for patients with ear and hearing problems visiting the audiology or ENT outpatient clinic. This study is part of Phase II of the WHO’s Core Set development process ( ). The ICF-based e-intake tool assesses the functioning of an individual with ear and hearing problems and includes the assessment of potentially influencing environmental and personal factors. The current version of the intake tool covers 39 ICF categories. It comprises 62 items and takes approximately 16 min to complete. Content validity is the most important measurement property of a self-reported instrument ( ). The results of the current study provide preliminary evidence supporting the content validity of the tool as an instrument to screen for ear and hearing problems relating to functioning and for the environmental and personal factors that may interact with these problems. Furthermore, overall, the intake tool was perceived to be relevant and to have a logical and clear structure, as indicated by the stakeholder representatives and the patients who participated in the pilot study. The tool was integrated into a digital, web-based patient system called KLIK. The integration of the intake tool into such a system will facilitate its use by clinicians ( ). It offers options to create routing pathways by presenting additional items based on a patient’s response to a previous item. Also, a summary of the patient’s answers can be generated in a graphical functioning profile. KLIK has been adopted and implemented for self-reported questionnaires in different settings and in different hospitals across the Netherlands, in both child and adult care ( ). The feasibility and user-friendliness of our intake tool for oto-audiology patients will need to be further evaluated to optimize its intended use in otology and audiology practice. Clinical Implications Operationalization of Other ICF Core Sets International Perspective Patient-Centered Care • A Tool for Clinical Oto-Audiology Practice • The intake tool was developed with the ultimate aim of improving patient-centered care in oto-audiology practice. It is important to recognize that the intake tool in itself will not directly produce patient-centered care ( ). Rather, the functioning profile resulting from the intake tool may act as a facilitator of patient-centered care. It is considered a starting point of the intake process, enhancing communication between the clinician and the patient about the experienced challenges in functioning, clarifying priorities for care, and fostering equal partnership in determining treatment (e.g., ). It is important to emphasize that the goal of the intake tool is not to replace the intake appointment but to serve as an aid to facilitate the intake conversation. Several studies have addressed the impact of self-reported instruments on the (intake) appointment with the clinician. Reviews provide evidence of improved patient-clinician communication, better identification of psychosocial problems, and better guidance in clinical decisions made in response to patient-reported symptoms ( ; ; ; ; ; ). 
However, whether the intake tool will indeed facilitate patient-centered care will partly depend on its successful implementation. That will imply changes in practice for both patients and clinicians in order to accommodate the collection and the feedback of the patient-reported information. Changing practices is known to be challenging ( ; ; ). In parallel studies, we identified the perceived barriers to and enablers of using the intake tool ( ) and used this information for the development of an implementation intervention ( ). With our intake tool, we opted for an integrated and uniform approach to collect functioning information in the initial contact, independent of the specific oto/audiology discipline the patient encounters first. Information about a person’s functioning documented during the intake should facilitate a proficient and interconnected collaboration between the team members during the care process, that is, by using the standardized intake tool in both disciplines. Over the past few years, operationalization of ICF Core Sets for use in clinical practice occurred in other domains. Examples are the Brief Core Set Questionnaire of Breast Cancer for Screening in cancer care ( ), the Work rehabilitation Questionnaire for vocational rehabilitation ( ), a health index for patients with ankylosing spondylitis ( ), the Neuromuscular disease impact profile for neuromuscular diseases ( ), and the ICF CS-based questionnaire for non-traumatic spinal cord injury ( ). In contrast to our diagnostic screening tool, these instruments were developed to measure the effect of treatments or interventions on functioning. None of these cover contextual factors. We chose to create a tool that can provide a quick, standardized screen of ear and hearing–related functioning. It highlights aspects that need further examination and/or actions. It is known that having only one to two items to measure a construct generally yields insufficient reliability for evaluative purposes ( ). Including more items per construct was discussed within the project team, but this would yield too lengthy a questionnaire and therefore would result in an unacceptable patient burden. The current version of the tool is not suitable for the measurement of the effectiveness of treatments. For that purpose, it would have to be expanded to provide a more detailed assessment of sub-constructs of functioning. It could be combined with validated symptom-specific questionnaires. For example, to measure improvement in self-perceived disability and handicap in everyday hearing, the full version of the AIADH could be incorporated. Similarly, to measure the effect of treatment or interventions on the patient’s coping behavior, the full CPHI could be added. Also, other PROMs not part of the intake tool may be used; an example is the Dizziness Handicap Inventory to measure dizziness. Such multi-item scales would be suitable for follow-up measurements as they have better sensitivity and responsiveness than one- or two-item scales. Development of an ICF-based instrument that can be used to evaluate treatment effectiveness was beyond the scope of the current project. This project is part of Phase II of the WHO-defined process to develop ICF Core Sets. Other than the described purpose of the intake tool, it can be used to serve additional objectives. 
It can be used to (1) promote and guide further development of Core Sets for use in clinical practice, research, and education in the field of Audiology, (2) develop strategies for the implementation of the ICF Core Sets for HL in clinical practice, and (3) encourage international collaboration and alignment in these processes. Similar activities to operationalize the Brief Core Set into a self-assessment instrument are ongoing in the United States ( ) and in Sweden (“ICF-core sets for hearing loss; validation and operationalization of Brief ICF-Core set for hearing loss into a self-assessment instrument”). The experience gained in our study, in combination with the other initiatives, is of importance to achieving the WHO’s goals with the Core Sets. Operationalization Content Assessment Generalizability We chose to operationalize the ICF category “emotional functions” into feelings of loneliness (item 14, Supplemental Digital Content 1, http://links.lww.com/EANDH/A637 ), sorrow, sadness, depressive complaints (item 15, Supplemental Digital Content 1, http://links.lww.com/EANDH/A637 ), and feelings of worry and anxiety (item 16, Supplemental Digital Content 1, http://links.lww.com/EANDH/A637 ). With regard to psychological personality traits in the component personal factors, “mastery” and “coping behavior” were selected. This was done based on the literature showing that these provide a representative picture of a patient’s personality/intrinsic factors potentially influencing how someone lives with ear and hearing problems. Nevertheless, the choice to include only these two categories may seem arbitrary, and other categories could have been considered. An example is frustration, which is a well-known consequence of hearing impairment (e.g., ; ). Another consideration concerns existing difficulties with regard to the conceptualization and categorization of personal factors ( ; ). For example, the psychological assets in the personal factors component (e.g., emotional reactions) seem to overlap with the categories of mental functions of the BF component. This was also the case in the current study. We tried to adhere to the descriptions of the ICF categories, but the choice to operationalize embarrassment as a personal factor rather than an emotional reaction (see items 53 and 55, SCD 1) may therefore be regarded as somewhat arbitrary. Another possible shortcoming of the operationalization process may be that consensus was based on the expertise of a small group of experts from one hospital setting. Consequently, choices were made based on preferences within this setting and thus may not apply in other (hospital) settings. However, we validated our choices as much as possible by testing the draft item list in a broader expert group and in a heterogeneous sample of patients. Different response formats were selected for the different domains in our intake tool. Previous research showed that mixed response scales may be confusing for respondents (e.g., ). Moreover, it is known from the literature that the patient’s self-reported data should be easy for the clinician to interpret in order to facilitate implementation ( ). Mixed response scales may hamper that. However, neither the experts nor the patients included in the content assessment reported important problems with the response scales (except for the domain of EF, which was adapted accordingly). 
With regard to clinician burden and ease of using the intake tool, our other study, in which we identified the barriers to and enablers of using the intake tool, indicated that clinicians indeed preferred a simple overview of easy-to-interpret results ( ). At this point in the development process, such an overview has not yet been developed or reviewed by clinicians. This will be addressed during the next steps of the development and testing of the tool (see further under “Future directions”). With regard to the data of the patient pilot study, bias could have occurred because the interviewer was also part of the project team. However, the aim of the pilot study was to ensure that the questionnaire content would match the target group, so the interviewer was motivated to know all the critical points in order to be able to improve the content of the item list. Therefore, we do not expect this was a negative factor. A limiting factor was the use of closed-ended questions in the interview guide, which may have limited the respondents’ answers and more detailed explanations of their experiences with the item list. Another possible limitation is that the tool was developed in Dutch, and decisions were made based on the Dutch health care system. Instruments must fit into the health care system where they are to be applied (ISOQOL 2011). The current version of the intake tool is intended for use in the Dutch otology and audiology system, which – for now – limits its use to Dutch-speaking patients. Its application and generalizability to other countries and care systems would need to be addressed in future work. It may be argued that this study was limited in the sense that the consensus meeting on the selection and initial formulation of the items did not include patient representatives. As already mentioned in the Introduction, the development of the ICF CSHL did include patients’ participation in various stages of the Core Sets’ development and consensus process. The patient perspective on functioning with hearing loss was carefully mapped in a qualitative focus group study ( ). The current study did include the patients’ voice in the pilot study, and a wide range of ear/hearing problems was included. Nonetheless, this concerned only a limited absolute number of highly motivated patients, who thus may not be representative of the average patient. The suitability and use of the intake tool for all patient groups will need further evaluation in a large-scale field-test study. In addition, to make the clinician’s and patient’s use of the intake tool as efficient as possible, the ease of reviewing and interpreting the patient’s responses will need to be addressed. For clinicians, a system that has been shown to be easy to use is the traffic light system. It is also easy to read (it provides a graphical summary format) and can deliver concrete actions to take. Such a traffic light system was successfully applied in pediatric cancer care ( ). However, applying it requires relevant cutoffs for each item and/or the underlying domains. Moreover, a follow-up decision tree is needed to guide clinicians on their actions (e.g., treatment options, referral to another health care professional) (see also ). A field-test study and the input of and consensus among clinicians will be needed to determine meaningful cutoffs. This is essential for clinicians’ motivation to use the tool (e.g., ). 
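To make the traffic-light idea concrete, the following minimal sketch maps ICF-qualifier item scores (0–4) to colors. The cutoffs and domain scores are placeholders only; as noted above, meaningful cutoffs still have to be derived from field-test data and clinician consensus.

```python
def traffic_light(score, amber_cutoff=1, red_cutoff=3):
    """Map an ICF-qualifier item score (0-4) to a traffic-light color.

    The cutoffs are illustrative placeholders, not validated values.
    """
    if score >= red_cutoff:
        return "red"    # severe/complete problem: action or referral indicated
    if score >= amber_cutoff:
        return "amber"  # mild/moderate problem: discuss during the intake
    return "green"      # no problem reported

# Hypothetical responses for a few domains of a functioning profile
profile = {"sleep (b134)": 1, "hearing in noise (b230/d350)": 3, "community life": 0}
for domain, score in profile.items():
    print(f"{domain}: score {score} -> {traffic_light(score)}")
```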
The current study describes the development of an ICF-based e-intake tool to be used by patients and clinicians to assess functioning in individual adults with ear and hearing problems. Based on stakeholders’ responses, item instructions for A&P and EF were adapted and explicitly related to patients’ ear and hearing problems. Patients’ responses resulted in changes to the EF items. Overall, the intake tool was perceived to be relevant and to have a logical and clear structure. In addition, the tool showed sufficient content validity. The findings of the current study cover important developmental steps taken toward creating an intake tool that facilitates individualized clinical otology and audiology services using a biopsychosocial perspective. We gratefully acknowledge the stakeholders for participating in the content evaluation studies: Chantal Emaus, Willem Dekker, Marcel Maré, Henk van Rees, Jiska van Stralen, Caroline Terwee, Karen van den Toren, Marein van der Torn, Niek Versfeld, Susanne van Wijk. We are thankful for Dr. Granberg’s input on the development of hearing-related items. We thank the secretary and clinician assistants of the Department of Otolaryngology-Head and Neck Surgery of Amsterdam UMC, location VUmc, who assisted in the recruitment of patients.
Elevated A2F bisect
8bd14590-2086-4c6d-9736-91560f1a0c72
11922979
Surgical Procedures, Operative[mh]
Metabolic dysfunction-associated steatotic liver disease (MASLD), a hepatic manifestation of obesity, diabetes mellitus, and dyslipidemia in the absence of significant alcohol consumption, is a prominent contributor to both liver-related morbidity and mortality, and has a significant impact on public health. The prevalence of MASLD is estimated to be around 30% worldwide [ – ]. MASLD is classified as either a progressive form (metabolic dysfunction-associated steatohepatitis (MASH)) or a non-progressive form (metabolic dysfunction-associated steatotic liver (MASL)). Since liver fibrosis is the most important prognostic factor for MASLD , accurate diagnosis of progression is crucial; however, the gold standard for accurately assessing liver fibrosis is liver biopsy, which is an invasive procedure that is both painful and associated with various complications . Moreover, sampling error can lead to a false diagnosis, and histologic examination of the biopsy must be conducted by a specialized hepatologist to avoid intra- and inter-observer errors . Although ultrasound, computed tomography, and magnetic resonance imaging are non-invasive diagnostic procedures, it is difficult to distinguish the progression of liver fibrosis in MASLD using imaging procedures alone . Therefore, a less invasive and sensitive biomarker that reflects the progression of liver fibrosis is highly desirable. The surface of mammalian cells is coated with a dense layer of glycocalyx comprising glycoproteins and glycolipids. Protein glycosylation, one of the most common post-translational modifications, plays an important role in many biological processes, including cell differentiation, cell adhesion, intermolecular interactions, and regulation of signaling pathways . More than 50% of proteins in human serum/plasma are glycosylated . Glycosylation can affect the biological activity of proteins, as well as their stability and transport to the cell surface; however , glycosylation patterns can alter markedly in response to various diseases such as autoimmune disorders, cancer, chronic inflammatory diseases, and viral infections . Glycoproteins such as carbohydrate antigen 19-9 (CA19-9), CA125, prostate-specific antigen (PSA), and alpha-fetoprotein (AFP-L3) are used as cancer biomarkers in clinical practice, and detection of core-type fucosylated or multi-sialylated LacdiNAc structures on PSA has the potential to improve diagnostic or prognostic performance . Therefore, we developed a glycoblotting method that allows rapid and quantitative glycome analysis, and found alterations in the expression of several N -glycans in the serum of patients with hepatocellular carcinoma . Furthermore, total glycome analysis, which includes N -glycans, glycosphingolipids (GSLs), free oligosaccharides (fOS), and glycosaminoglycans (GAGs), identified novel glycan-related candidate biomarkers in various biological samples [ – ]. We also developed a method involving sialic acid linkage-specific alkylamidation (SALSA) of N - and GSL-glycans via lactone ring-opening aminolysis . The SALSA method allows sialic acid linkage isomers to be distinguished by mass spectrometry analysis. Combining the aminolysis-SALSA method with isotope labeling revealed alterations in the ratio of α2,3-linked sialoglycans with or without fucose residues during the progression of fibrosis in patients with NAFLD . 
In the present study, we used these advanced glycomic techniques to analyze serum samples from MASLD patients and demonstrated that expression of A2F bisect N -glycan (di-sialylated, biantennary, with core fucose and bisecting GlcNAc) and its precursors increases during fibrosis progression. We also identified specific carrier proteins of A2F bisect N -glycan, meaning that a simple sandwich enzyme-linked immunosorbent assay (ELISA) system can be used to diagnose liver fibrosis progression. Patients Precipitation of glycoproteins from human serum Preparation of sandwich ELISA for detection of immunoglobulin A bearing neutral bisect N -glycans Statistical analysis This study enrolled 269 patients with biopsy-proven MASLD, diagnosed according to the following criteria: the presence of hepatic steatosis in conjunction with at least one cardiometabolic risk factor and no other discernible cause . The patients were recruited at Hokkaido University Hospital and six participating institutions. All patients underwent percutaneous liver needle biopsy to diagnose fatty liver disease between 2005 and 2020. We typically performed liver biopsies using an 18-gauge automated biopsy gun (Monopty needle; Bard Biopsy Systems, Tempe, AZ) and generally obtained 1.5–2.5 cm of liver tissue for diagnosis. All biopsy specimens were embedded in paraffin blocks in accordance with standard procedures and then stained with hematoxylin and eosin, Masson’s trichrome stain, and Gitter stain prior to evaluation by a hepatopathologist blinded to the clinical data. Samples were investigated and quantified based on the NAFLD activity score (NAS) for steatosis (0–3), lobular inflammation (0–3), and hepatocyte ballooning (0–2). Fibrosis was scored according to the fibrosis stage of the Brunt classification : advanced fibrosis was defined as Brunt stage F3/4. Serum was collected within 3 days of liver biopsy and stored at − 80 °C until analysis. The exclusion criteria were as follows: daily alcohol consumption > 30 g for men or > 20 g for women, and the presence of another hepatic disease such as hepatitis B, hepatitis C, hepatocellular carcinoma, autoimmune hepatitis, primary biliary cholangitis, primary sclerosing cholangitis, hemochromatosis, Wilson's disease, or congestive liver disease. The study protocol complied with the ethical guidelines and was approved at each participating hospital. Written informed consent to participate in this study was obtained from each patient. This study is registered in the UMIN Clinical Trials Registry as UMIN000030720. The clinical characteristics of the MASLD patients are summarized in Table , with further details provided in the supplementary information. Clinical data, including sex, age, height, and weight, were obtained for each patient at the time of liver biopsy, and body mass index (BMI) was calculated as weight divided by height in meters squared. The following biochemical variables in serum were measured by a conventional automated analyzer: platelet count (Plt), albumin, total bilirubin, aspartate aminotransferase (AST), alanine aminotransferase (ALT), γ-glutamyltransferase (γGTP), cholinesterase (ChE), α-fetoprotein (AFP), triglycerides (TG), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), hemoglobin A1c (HbA1c), ferritin, C-reactive protein (CRP), fasting blood sugar (FBS), and immunoreactive insulin. 
Insulin resistance was evaluated based on the homeostasis model assessment of insulin resistance (HOMA-IR), calculated using the following equation: HOMA-IR = fasting insulin (µU/mL) × fasting glucose (mg/dL) / 405. The fibrosis-4 (FIB4) index, used to predict liver fibrosis from non-invasively obtained data, was calculated as reported previously: FIB4 index = [age (years) × AST (IU/L)] / [Plt (×10⁹/L) × √ALT (IU/L)] . The aspartate aminotransferase (AST)-to-alanine aminotransferase (ALT) ratio (AAR) was calculated as AST (IU/L)/ALT (IU/L) . The serum level of Mac-2 binding protein glycosylation isomer (M2BPGi) was measured as a marker of liver fibrosis. Glycoproteins were prepared by ethanol precipitation as described previously . A detailed description is provided in the supplementary methods. N -glycans by glycoblotting combined with aminolysis-SALSA Preparation of serum N -glycans based on glycoblotting and aminolysis-SALSA was performed using the SweetBlot high-throughput, semi-automated work system (System Instruments Co., Tokyo, Japan) . A detailed description is provided in the supplementary methods. By peptide mass fingerprinting (PMF) analysis, immunoglobulin A (IgA) was identified as one of the carrier proteins bearing A2F bisect N -glycans, as shown in Supplementary Table S4. Based on these results, we constructed a sandwich ELISA system using an anti-human IgA antibody and PHA-E lectin. Briefly, ELISA plates (MaxiSorp Plate; Thermo Fisher Scientific, Japan) were coated with a mouse anti-human IgA antibody (0.4 μg/mL; Nordic-MUBio, Susteren, Netherlands). Next, 100 μL of each serum sample (diluted 1:8000 in PBS, pH 7.2/0.05% Tween 20) or standard IgA (0–350 ng/mL in PBS, pH 7.2/0.05% Tween 20) was added to the plate for 1 h at 25 °C. The wells were washed three times with PBS, pH 7.2/0.05% Tween 20, and then incubated with 100 μL of PHA-E-HRP for 1 h at 25 °C, followed by washing as described above. For color development, TMB was added for 30 min at 25 °C. After terminating the reaction with sulfuric acid, absorbance at 450 nm (OD450) was measured in a microplate reader. For correction, the OD630 value (reference absorbance at 630 nm) was subtracted from the OD450. The amount of IgA carrying neutral bisect N -glycans was calculated from a calibration curve generated using human IgA. Continuous variables were analyzed using the Mann–Whitney U test, and categorical variables were analyzed using Fisher’s exact test. Multivariate logistic regression analysis with stepwise forward selection was performed using variables identified as significant ( P < 0.05) in univariate analyses. The diagnostic performance of the markers was assessed by analyzing receiver operating characteristic (ROC) curves. The probability of true positives (sensitivity) and true negatives (specificity), as well as the positive-predictive value (PPV) and negative-predictive value (NPV), were determined for the selected cut-off values, and the area under the ROC curve (AUC) was calculated for each index. Cut-off points were determined based on the optimum sum of sensitivity and specificity. Statistical analyses were performed using GraphPad Prism version 8.4.3 (GraphPad Software, MA, USA), SPSS Statistics 24.0 (IBM Corp., Armonk, NY, USA), and EZR (Saitama Medical Center, Jichi Medical University, Saitama, Japan). A p -value of less than 0.05 was deemed significant. 
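For reference, the following sketch implements the formulas above, together with the cut-off rule used in the ROC analysis (the optimum sum of sensitivity and specificity, i.e., the Youden index). The patient values are invented for illustration only.

```python
import math

def homa_ir(fasting_insulin_uU_mL, fasting_glucose_mg_dL):
    """HOMA-IR = fasting insulin (uU/mL) x fasting glucose (mg/dL) / 405."""
    return fasting_insulin_uU_mL * fasting_glucose_mg_dL / 405.0

def fib4(age_years, ast_IU_L, alt_IU_L, plt_1e9_L):
    """FIB4 = (age x AST) / (Plt [x10^9/L] x sqrt(ALT))."""
    return (age_years * ast_IU_L) / (plt_1e9_L * math.sqrt(alt_IU_L))

def aar(ast_IU_L, alt_IU_L):
    """AST-to-ALT ratio."""
    return ast_IU_L / alt_IU_L

# Invented example: 62 y, AST 55, ALT 48, Plt 180, insulin 12, glucose 110
print(f"HOMA-IR: {homa_ir(12.0, 110.0):.2f}")   # 3.26
print(f"FIB4:    {fib4(62, 55, 48, 180):.2f}")  # 2.73
print(f"AAR:     {aar(55, 48):.2f}")            # 1.15

def youden_cutoff(values, advanced):
    """Cut-off maximizing sensitivity + specificity - 1 (both classes assumed present)."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(v >= cut and a for v, a in zip(values, advanced))
        fn = sum(v < cut and a for v, a in zip(values, advanced))
        tn = sum(v < cut and not a for v, a in zip(values, advanced))
        fp = sum(v >= cut and not a for v, a in zip(values, advanced))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut

print(youden_cutoff([1.1, 1.8, 2.6, 3.4, 0.9], [False, False, True, True, False]))  # 2.6
```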
Characteristics of the MASLD patients Correlation between expression of glycans carrying bisecting GlcNAc and core fucose and conventional parameters of liver fibrosis Identification of carrier proteins bearing A2F bisect glycan and its precursors Construction of simple ELISA system based on detection of bisect glycans on IgA for diagnosis of liver fibrosis. Based on the results of A2F bisect glycan carrier protein identification, we constructed a sandwich ELISA system using an anti-human IgA antibody and PHA-E lectin, and measured the level of IgA bearing bisect glycans (bisect-IgA values). PHA-E lectin recognizes the bisecting GlcNAc structure, although specific terminal sialic acids weaken the interaction . ELISA was performed using serum samples from groups F0/1/2 ( n = 73) and F3/4 ( n = 32), and its diagnostic utility for advanced liver fibrosis was tested. The bisect-IgA values increased significantly in group F3/4 and correlated with the fibrosis stage (Fig. A and Supplementary Figure S5). Moreover, the bisect-IgA values correlated with the total sum ( r = 0.684), whereas they correlated only weakly with lobular inflammation ( r = 0.397) and the hepatocyte ballooning score ( r = 0.255), which are pathological inflammatory parameters, as shown in Supplementary Figure S5. The correlation between bisect-IgA values and the FIB4 index was also not strong ( r = 0.546). ROC analysis revealed that the AUC of the established ELISA system was 0.838, higher than those of the total sum (AUC = 0.819), the neutral sum (AUC = 0.817), and the acidic sum (AUC = 0.793; Fig. B and Supplementary Table S5), although the difference was not statistically significant. We also developed another sandwich ELISA system using PHA-E lectin and an anti-human kappa light chain antibody that detects immunoglobulins. In this ELISA system, the AUC value was 0.801, slightly lower than that of the ELISA based on the anti-IgA antibody (Supplementary Figure S4 and Table S6). In total, 269 MASLD patients were enrolled in the study. Patients were divided into three groups, F0/1 ( n = 41/85), F2 ( n = 47), and F3/4 ( n = 72/24), based on the pathological severity of fibrosis in liver biopsy specimens. As shown in Table , the F2 group had a lower proportion of males and a lower BMI than the F0/1 group. Age, AST, and AFP levels were significantly higher in the F2 and F3/4 groups than in the F0/1 group. FBS levels were significantly higher in the F3/4 groups than in the F0/1 and F2 groups. HbA1c levels were significantly higher in the F3/4 group than in the F0/1 group. Platelet counts fell significantly as liver fibrosis progressed. Albumin, ALT, and ChE in the F3/4 groups were significantly lower than in the F0/1 and F2 groups. TG and LDL-C levels in the F3/4 group were significantly lower than in the F0/1 group. There were no significant differences in T-Bil, γ-GTP, HDL-C, CRP, IRI, and HOMA-IR values among the five groups. All fibrosis prediction formulas (FIB4 index and AAR) and fibrosis markers (M2BPGi) were significantly higher in patients with progression of liver fibrosis. Comprehensive N -glycome analysis in the serum of patients with MASLD by glycoblotting combined with aminolysis-SALSA Patients were divided into three groups, F0/1 ( n = 126), F2 ( n = 47), and F3/4 ( n = 96), to explore the relationship between alterations in glycan expression and progression of fibrosis. After N -glycomic analysis, 138 types of N -glycan were observed in patient serum samples. 
The amount of each glycan according to the stage of fibrosis progression and the p -values are summarized in Supplementary Table . Whereas expression of total N -glycans and A2 glycan (N-86; (Hex) 2 (HexNAc) 2 (α2,6NeuAc) 2 + (Man) 3 (GlcNAc) 2 ), which is the most abundant form in serum, did not change, that of many individual N- glycans changed significantly as fibrosis progressed. When ranked in decreasing order of p -value derived from comparative analyses of the F0/1/2 and F3/4 groups (Supplementary Tables S2), A2F bisect (N-107; (Hex) 2 (HexNAc) 3 (Fuc) 1 (α2,6NeuAc) 2 + (Man) 3 (GlcNAc) 2 ) and its precursors occupied the top positions. Therefore, we first analyzed the biosynthetic pathway of A2F bisect glycan, including its expression levels (Fig. ). In addition, we carried out a ROC analysis to evaluate the diagnostic utility of A2F bisect glycan and its precursor glycans for discriminating advanced liver fibrosis (F3/4) (Table ). As shown in Fig. , expression of A2F bisect (N-107), A1F bisect (N-88), and monosialylated G1F bisect (N-78) glycans increased significantly in cases of advanced fibrosis, with AUC values of 0.754, 0.746, and 0.79, respectively. The expression levels of A2 bisect (N-101), A1 bisect (N-79), and monosialylated G1 bisect (N-67) glycans lacking core fucose tended to be higher in cases of advanced fibrosis; however, the AUC values were lower than those of fucosylated glycans. Furthermore, levels of A2 (N-86) and A1 (N-66) glycans lacking bisecting GlcNAc and core fucose did not change as fibrosis progressed (Supplementary Table S3). Regarding neutral N -glycans, expression of G2F bisect (N-42), G1F bisect (N-38), and G0F bisect (N-30) glycans containing bisecting GlcNAc and core fucose increased significantly as fibrosis progressed, with AUC values of 0.764, 0.803, and 0.792 respectively. Expression of G2 bisect (N-39), G1 bisect (N-31), and G0 bisect (N-25) glycans lacking core fucose also increased significantly as fibrosis progressed, but their AUC values were lower than those of fucosylated glycans. The AUC values of G2 (N-29), G1 (N-24), and G0 (N-20) glycans lacking bisecting GlcNAc and core fucose were lower than those of glycans containing bisecting GlcNAc (Supplementary Table S3). Next, we categorized glycans into three groups to further evaluate the progression of fibrosis. Each of the three groups included both core fucose and bisecting GlcNAc residues as follows: (1) neutral N -glycans (Neutral sum; N-30, N-38, and N-42); (2) sialylated N -glycans (Acidic sum; N-78, N-88, and N-107); and (3) the total amount of N -glycans (Total sum). The expression levels of all three groups increased significantly as fibrosis progressed (Supplementary Figure ); the AUC values for the neutral sum, acidic sum, and total sum groups were 0.804, 0.762, and 0.795, respectively (Table ). The FIB4 index, the AAR, and M2BPGi levels are used as conventional parameters to evaluate the progression of liver fibrosis . First, we examined the correlation between these conventional parameters of liver fibrosis and expression of A2F bisect and its precursors carrying bisecting GlcNAc and core fucose. As shown in Table , the AUC values of individual bisect-related N -glycans and the calculated sums were similar to or higher than those of conventional parameters of liver fibrosis, while expression of these glycans showed a weak correlation with conventional parameters (Fig. ). Expression of these glycans also correlated with fibrosis stage, similar to conventional parameters. 
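For concreteness, the sketch below shows how the summed markers defined above can be derived from per-patient glycan quantities and compared between fibrosis groups with the Mann–Whitney U test used throughout this study (requires SciPy). The glycan IDs follow the text; all values are invented for illustration.

```python
from scipy.stats import mannwhitneyu  # Mann-Whitney U test, as in Methods

NEUTRAL = ["N-30", "N-38", "N-42"]   # neutral bisect glycans with core fucose
ACIDIC = ["N-78", "N-88", "N-107"]   # sialylated counterparts, incl. A2F bisect

def summed_markers(patient):
    """Derive the neutral, acidic, and total sums from per-glycan quantities."""
    neutral = sum(patient[g] for g in NEUTRAL)
    acidic = sum(patient[g] for g in ACIDIC)
    return {"neutral_sum": neutral, "acidic_sum": acidic, "total_sum": neutral + acidic}

example = {"N-30": 12.1, "N-38": 18.4, "N-42": 9.7,
           "N-78": 6.2, "N-88": 7.9, "N-107": 11.3}
print(summed_markers(example))

# Hypothetical neutral sums for two groups (arbitrary units)
f012 = [55.0, 61.2, 48.9, 70.3, 58.1]
f34 = [88.4, 95.1, 79.6, 102.2]
stat, p = mannwhitneyu(f012, f34, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```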
Next, we examined the correlation between the expression of A2F bisect glycan and its precursors and pathological parameters such as the steatosis score, lobular inflammation score, hepatocyte ballooning score, and their summed value (NAS). Expression of the selected glycan candidates correlated negatively with the steatosis score (similar to the FIB4 index, AAR, and M2BPGi). By contrast, expression levels of A2F bisect glycan, its precursors, and the calculated sums correlated only weakly with the lobular inflammation score, and did not correlate significantly with the hepatocyte ballooning score or the NAS (Fig. ). We therefore conducted multivariate regression analysis using the variables independently associated with advanced fibrosis in univariate analysis, which revealed that the FIB4 index (odds ratio (OR), 1.705; 95% confidence interval (CI), 1.291–2.252; P < 0.001) and the neutral sum (OR, 1.013; 95% CI, 1.007–1.019; P < 0.001) (Table ) were significantly and independently associated with advanced fibrosis. The diagnostic performance of these variables combined was 0.840, better than that of either alone (0.804 and 0.792, respectively; Fig. ).

Identification of carrier proteins bearing A2F bisect glycan and its precursors

Fibrosis biomarkers based on changes in protein-specific glycan expression may be more specific than markers measured in whole serum. Therefore, we attempted to identify the carrier proteins of A2F bisect glycan and its precursors. Initially, serum was fractionated into an eluted fraction and a flow-through fraction using Protein G Sepharose, followed by N-glycomic analysis of each fraction as previously described . The N-glycome profiles of the eluted and flow-through fractions are shown in Supplementary Figure S2. In the eluted fraction, glycans derived from IgG (such as G0F (N-23), G1F (N-28), and G2F (N-36)) were enriched. Unexpectedly, A2F bisect glycan (N-107), the final product in the biosynthetic pathway, was more abundant in the eluted fraction than in the flow-through fraction. Moreover, the eluted fraction contained the precursors of N-107 carrying bisecting GlcNAc and core fucose (N-30, 38, 42, 78, and 88). Next, we tried to identify the carrier protein present in the eluted fraction of pooled serum from patients in group F3/4. Proteins in the eluted fraction were separated by SDS-PAGE, resulting in visualization of 21 major protein bands after Coomassie brilliant blue staining (Supplementary Figure S3). The protein from each band was extracted and subjected to N-glycan analysis; the results showed that 15 of the 21 protein bands contained N-linked glycoproteins. Furthermore, A2F bisect glycan was detected in only three protein bands (No. 9, 10, and 14), with approximately 75% being present in protein band No. 10 (Supplementary Figure S3). Band No. 10 also contained all the precursor glycans carrying bisecting GlcNAc and core fucose. We identified three types of protein by PMF and MS/MS analysis. The major proteins in band No. 9 were immunoglobulin heavy constant mu (IgM) and complement C3, while bands No. 10 and 14 contained immunoglobulin heavy constant alpha 1 and 2 (IgA1 and 2) and complement C3, respectively (Supplementary Table S4). The N-glycan profiles of IgM and IgA were broadly consistent with those reported by other groups [ – ].

Construction of a simple ELISA system based on detection of bisect glycans on IgA for diagnosis of liver fibrosis

Based on the results of A2F bisect glycan carrier protein identification, we constructed a sandwich ELISA system using an anti-human IgA antibody and PHA-E lectin, and measured the values of IgA bearing bisect glycans (bisect-IgA values). PHA-E lectin recognizes the bisecting GlcNAc structure, although terminal sialic acids weaken the interaction . ELISA was performed using serum samples from groups F0/1/2 (n = 73) and F3/4 (n = 32), and its diagnostic utility for advanced liver fibrosis was tested. The bisect-IgA values increased significantly in group F3/4 and correlated with the fibrosis stage (Fig. A and Supplementary Figure S5). Moreover, the bisect-IgA values correlated with the total sum (r = 0.684), whereas they correlated only weakly with lobular inflammation (r = 0.397) and the hepatocyte ballooning score (r = 0.255), which are pathological inflammatory parameters, as shown in Supplementary Figure S5. The correlation between bisect-IgA values and the FIB4 index was also not strong (r = 0.546). ROC analysis revealed that the AUC of the established ELISA system was 0.838, higher than those of the total sum (AUC = 0.819), the neutral sum (AUC = 0.817), and the acidic sum (AUC = 0.793; Fig. B and Supplementary Table S5), although the difference was not statistically significant. We also developed another sandwich ELISA system using PHA-E lectin and an anti-human kappa light chain antibody that detects immunoglobulins. In this ELISA system, the AUC value was 0.801, slightly lower than that of the ELISA based on the anti-IgA antibody (Supplementary Figure S4 and Table S6).

MASLD is thought to affect 30% of the global population and is considered an important type of liver disease in the post-viral era; however, a lack of non-invasive, rapid, and low-cost methods means that diagnosis of advanced liver fibrosis is difficult.
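As background to the discussion of conventional indices that follows, the FIB4 index and the AAR are computed from routine laboratory values. A minimal sketch of the standard published formulas (the example patient values are invented):

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    """FIB4 index: (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def aar(ast_u_l: float, alt_u_l: float) -> float:
    """AST-to-ALT ratio."""
    return ast_u_l / alt_u_l

# Invented example: 58 years old, AST 64 U/L, ALT 49 U/L, platelets 152 x 10^9/L.
print(round(fib4(58, 64, 49, 152), 2))  # 3.49, above the commonly cited 2.67 rule-in cut-off
print(round(aar(64, 49), 2))            # 1.31
```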
The FIB4 index was developed as a non-invasive scoring system based on routine tests to predict liver fibrosis in patients co-infected with HIV/HCV . A previous study reported the utility of combining the FIB4 index with magnetic resonance elastography (MRE) ; however, few facilities offer MRE, and it is a costly and time-consuming test. Several approaches based on detecting glycans on proteins with lectins have been investigated as innovative biomarkers of disease and for clinical testing. In the context of liver disease, Wisteria floribunda agglutinin-positive M2BP (M2BPGi) appears to be useful for evaluating liver fibrosis in patients with viral hepatitis, autoimmune hepatitis, and MASLD [ , , – ]. Previously, we developed a comparative glycomic analysis method based on lactone ring-opening isotope labeling to identify α2,3-linked sialoglycans . Alterations in α2,3-linked sialoglycans present in serum during progression of liver fibrosis were detected quantitatively by linkage-specific aminolysis; however, this analytical method is not suitable for α2,6-linked sialylated and neutral N-glycans. In the present study, we performed comprehensive and quantitative analyses of N-glycans using aminolysis-SALSA. We found that the expression of many N-glycans differed among the patient groups. Interestingly, levels of A2F bisect N-glycan and its precursors increased significantly during liver fibrosis. In addition, we confirmed that the levels of A2F bisect N-glycan, the total sum, and the neutral sum in healthy individuals without fatty liver were approximately equal to those in the F0 and F1 groups (Supplementary Figure S7). ROC analysis of A2F bisect-related glycans revealed that diagnostic performance was strongly associated with the presence of bisecting GlcNAc and core fucose structures. Moreover, the total amount of the categorized N-glycans (Total sum: N-30, N-38, N-42, N-78, N-88, and N-107) and of the neutral N-glycans (Neutral sum: N-30, N-38, and N-42) were better diagnostic indicators of liver fibrosis than the conventional parameters (i.e., the FIB4 index, AAR, and M2BPGi). The FIB4 index and M2BPGi reflect not only liver fibrosis but also other factors such as inflammation and liver injury [ – ]. When comparing these conventional markers with our glycan parameters, we found that the FIB4 index and M2BPGi also correlated with pathological inflammatory parameters such as lobular inflammation and the hepatocyte ballooning score. By contrast, the expression levels of A2F bisect and its precursors carrying both bisecting GlcNAc and core fucose residues showed similar tendencies during the progression of liver fibrosis and were rarely associated with inflammation. The cut-off value of the FIB4 index is strongly affected by age . The FIB3 index removes the effect of age but needs more validation before use in routine clinical practice . When we examined the diagnostic performance of the FIB4 index and the neutral sum for advanced liver fibrosis according to age, we found that the FIB4 index performed less well (AUC = 0.689) in those older than 60 years. By contrast, the diagnostic performance of the neutral sum was not affected by age (AUC = 0.791 at <60 years; AUC = 0.786 at ≥60 years) (Supplementary Table S7). These results indicate that the neutral sum may be an age-independent marker of liver fibrosis.
Moreover, because A2F bisect-related glycans were not associated with inflammatory parameters, combining the neutral sum with the FIB4 index improved the diagnostic performance for advanced liver fibrosis (AUC = 0.840). Additionally, it has been reported that the FIB4 index is less accurate at predicting liver fibrosis in patients with diabetes . In this study, 153 patients (56.9%) had diabetes. The accuracy of the FIB4 index for predicting advanced liver fibrosis was relatively low in patients with diabetes (AUC = 0.771) compared with the entire cohort (AUC = 0.792) (Supplementary Figure S8). By contrast, the diagnostic accuracy of the neutral sum remained high in patients with diabetes (AUC = 0.843) compared with the entire cohort (AUC = 0.804). Therefore, diabetes may affect the predictive accuracy of the FIB4 index, but not that of the neutral sum, for advanced liver fibrosis; however, further analysis is needed to validate these findings. We further evaluated the diagnostic accuracy of A2F bisect glycan and its precursor glycans for the detection of F2 and F4 fibrosis. For F2 fibrosis, the diagnostic performance of A2F bisect glycan and its precursor glycans was comparable with that of the FIB4 index, M2BPGi, and AAR (Supplementary Table S8). For F4 fibrosis, while the sensitivity of A2F bisect glycan and its precursor glycans was higher than that of the FIB4 index, M2BPGi, and AAR, their AUCs were slightly lower than those of certain other markers (Supplementary Table S9). However, the limited number of F4 cases in this analysis underscores the need for further studies with larger cohorts. Additionally, A2F bisect glycan and its precursor glycans demonstrated weak correlations with other markers and did not reflect inflammation (Fig. ). This suggests that combining A2F bisect glycan and its precursor glycans with other markers may enhance diagnostic accuracy, which we believe warrants further investigation. Previously, we developed the focused protein glycomics (FPG) procedure, which allows analysis of the glycan profiles of gel-separated serum proteins by MALDI-TOF MS, and identified unique glycoisoforms of vitamin D-binding protein and haptoglobin in STAM model mice with hepatocarcinogenesis . In the present study, we used this method to identify the proteins that carry A2F bisect N-glycan, and identified IGHM, IGHA1 and 2, and complement C3. The N-glycan profiles on the IGHM and IGHA proteins were broadly consistent with those reported previously [ – ]. Furthermore, IGHA1 and 2 carried high levels of A2F bisect glycan and its precursors, and these levels correlated strongly with liver fibrosis. N-acetylglucosaminyltransferase III (MGAT3) is the glycosyltransferase that transfers GlcNAc to the core Man residue of N-glycans via a β1,4-linkage to form the bisecting structure. The Human Protein Atlas ( https://www.proteinatlas.org ) shows that MGAT3 activity is relatively high in the brain and kidneys. Activity of MGAT3 in the normal liver is nearly undetectable; however, its expression increases during hepatocarcinogenesis [ – ]. Additionally, MGAT3 activity in B cells, which produce IgA after differentiation into plasma cells, increases during liver fibrosis/cirrhosis . Ochoa-Rios et al. reported that the levels of fucosylated and bisecting N-glycans are increased in the livers of humans and model mice with non-alcoholic steatohepatitis .
Therefore, the progression of liver fibrosis in MASH may significantly affect the expression levels of bisecting N-glycans with core fucose on IgA. We identified specific bisect N-glycans biosynthesized by MGAT3, and some of their carrier proteins, associated with the progression of liver fibrosis in patients with MASLD. About 75% of the A2F bisect glycan detected in whole serum was carried on IgA proteins, and many of the glycans on IgA carried either bisecting GlcNAc or core fucose. The MGAT3-mediated glycan alterations on IgA during the progression of fibrosis in patients with MASH are therefore of great interest. Elucidating the mechanisms that link bisect N-glycans on IgA to the progression of liver fibrosis could lead to the development of novel therapeutic approaches. McPherson et al. reported that secretion of serum IgA correlates positively with the fibrosis stage . A recent study by Kotsiliti et al. reported that intestinal B cells induce metabolic activity in T cells, accompanied by increased secretion of IgA, in patients with MASH . Focusing on the alteration of specific bisect glycans on IgA, we constructed a sandwich ELISA that combines an anti-human IgA antibody with PHA-E lectin, which recognizes the neutral bisecting GlcNAc structure (bisect-IgA values). The bisect-IgA values showed a high correlation with the neutral sum calculated by MS analysis. The diagnostic utility of this ELISA system for advanced liver fibrosis was also comparable with that of the calculated sums (the neutral sum, total sum, and acidic sum), and the bisect-IgA values showed higher diagnostic performance for liver fibrosis than total IgA alone. Although this ELISA needs to be validated using a large number of specimens, the system could be very useful for mass screening to identify patients with advanced fibrosis. In this study, to ensure internal validity, we performed multivariate analysis to adjust for potential confounding factors such as age, sex, and BMI. These analyses allowed us to isolate the independent effect of A2F bisect N-glycan and its precursors as a biomarker for advanced fibrosis, minimizing the influence of other variables. Regarding external validity, our cohort of 269 liver biopsy cases included a diverse population with differences in the degree of liver fibrosis, age, sex, and BMI, supporting the generalizability of the findings. However, further studies of different populations would be beneficial to confirm the broader applicability of these findings. In addition, this study has several limitations. First, the sample size was relatively small because all cases included in the analysis were diagnosed through liver biopsy. Second, we were unable to evaluate the predictive accuracy for progression to hepatocellular carcinoma or decompensated liver cirrhosis. Third, due to the retrospective design of the study, several clinically relevant parameters could not be obtained; we were also unable to compare the diagnostic performance with that of the enhanced liver fibrosis (ELF) score, an existing biomarker that is widely used worldwide. To address these limitations, larger, prospective studies are warranted. In conclusion, this multicenter study identified A2F bisect N-glycan and its precursors as novel and highly accurate biomarkers for advanced fibrosis in patients with MASLD. We found that the expression levels of bisect glycans correlated only weakly, if at all, with lobular inflammation or hepatocyte ballooning.
Combined analysis based on the calculated neutral sum (N-30, N-38, and N-42) and the FIB4 index showed improved diagnostic performance. Moreover, IgA1 and 2 were identified as carrier proteins for A2F bisect N-glycan, and a simple sandwich ELISA system using an anti-human IgA antibody and PHA-E lectin was able to diagnose the progression of liver fibrosis; diagnostic performance using both the sandwich ELISA and the FIB4 index was also higher than that of either alone. Unlike conventional fibrosis biomarkers, the novel glycomarker reflects liver fibrosis more accurately without being affected by inflammation. Taken together, glycan alterations bearing bisecting GlcNAc on IgA have the potential to serve as a novel diagnostic tool for MASLD.
The effect of mindfulness-based childbirth education intervention on fear of childbirth: systematic review and meta-analysis
133b8673-204b-4d36-8055-8c8826a6a384
11329262
Patient Education as Topic[mh]
The fear of childbirth is defined as fear experienced before, during, and after childbirth; cognitively appraising childbirth as negative and approaching it with anxiety and fear are also used to describe this construct . In a study of fear of childbirth and related factors involving 203 pregnant women, participants showed high levels of fear of childbirth . This fear can deter women from becoming pregnant and giving birth , and it has negative effects on the course of pregnancy and on childbirth . The hormonal changes triggered by fear of childbirth suppress uterine contractions, which prolongs labour and may make surgical intervention necessary for delivery . Factors that may be associated with fear of childbirth have been identified in the literature: concerns about the health of the infant, the attitudes and behaviours of healthcare professionals, and the mother's own health condition all elevate fear of childbirth , . The primary concerns experienced during pregnancy relate to the health of the infant . Healthcare professionals should comfort the pregnant woman with appropriate techniques and avoid negative behaviours in order to prevent her from having a negative experience and to support a healthier childbirth process. Providing appropriate counselling to the mother and her partner during the childbirth process is an essential part of the care delivered by midwives and nurses. Such measures can reduce the risks associated with childbirth and ensure a more successful and comfortable birth. Training about the childbirth process has been found to reduce the negative thoughts and stress levels observed in pregnant women facing labour , . Recent studies have indicated that mindfulness-based training is used as a supplement to routine care in pregnant women. Based on this information, the aim of this study was to determine the effect of mindfulness-based childbirth education on fear of childbirth using the meta-analysis method.

Research model

This research is a meta-analysis study. A meta-analysis is an analysis performed to obtain an overall result by combining the results of different studies . The study protocol was registered in the database of the International Prospective Register of Systematic Reviews (PROSPERO), which allows meta-analysis studies to be recorded (ID: CRD42022316472).

Search strategy

Before the data were collected, research questions were set in accordance with the PICOS (Participants, Intervention, Comparison, Outcomes, and Study Design) method, and a literature review was conducted based on these questions. Since the national literature lacks any studies in this field, papers in the international literature constituted the database of the study. The EBSCO, PubMed, Google Scholar, Web of Science and CINAHL (Cumulative Index to Nursing and Allied Health Literature) online databases were searched for international articles. The keywords "mindfulness", "fear of childbirth", "mindfulness-based childbirth", "mindfulness education" and "childbirth" were used during the search. The literature review identified 18 papers related to the study, and the sample of the study consisted of the four studies that met the inclusion criteria . The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) model was used as a guide for reporting the study data .

Inclusion and exclusion criteria

The following criteria were used to determine which studies would be included in the meta-analysis: studies published between 2013 and 2022 that aimed to determine the effect of mindfulness-based childbirth education on fear of childbirth; randomised controlled experimental or quasi-experimental designs; studies in which the experimental group consisted of pregnant women and the Wijma Delivery Expectancy/Experience Questionnaire (W-DEQ-A) was used in the evaluation; and studies in which the effectiveness of mindfulness-based childbirth education on the experimental group was reported. Of the studies meeting these criteria, only those with the full text available were included in the meta-analysis.
Study selection and data extraction

Coding, the data extraction process, involves extracting the data eligible for the study from the complex data reported in the included studies . The data were coded in a coding form prepared in Excel format before statistical analysis. The coding form included the authors and year, study design, number of people in the experimental and control groups, mean age, intervention period, and gestational week.

Risk of bias assessment

The quality of the selected articles was evaluated by two researchers (SD and EE) with the Quality Assessment Tool (EPHPP) checklist. The risk of bias in all selected articles was assessed independently by the two authors (SD and EE) using modified Cochrane tools, following the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions; the other author (RAD) checked the results. The risk of bias was classified into seven domains, and the bias risk for each domain was classified as "low risk," "high risk" or "uncertain risk," according to the decision criteria in the "Risk of bias" assessment tool.

Data analysis

The Comprehensive Meta-Analysis (CMA) programme (Version 3.0) was used for the statistical analyses of the data, effect sizes and heterogeneity analyses. Effect sizes were calculated as Hedges' g , a statistic based on the standardised difference between outcomes that corrects for the sample sizes of the included studies. A random-effects model was used to take into account differences between subjects, intervention methods, durations and assessment tools in the included studies. Heterogeneity was examined using the Tau, I², H² and Q values; the heterogeneity of effect sizes was assessed using the Q and I² statistics, where I² values indicate low (25–50%), medium (51–75%) or high (>75%) heterogeneity , . In this assessment, a Q value of 4.761 (p=0.19) and an I² value of 36.989% were obtained, indicating a low level of heterogeneity, and the fixed-effect model was therefore also examined. A fail-safe N analysis showed that approximately 179 articles with statistically insignificant results would be required for each study included in the meta-analysis in order to render the effect size insignificant. In Kendall's tau analysis, a test value of 0.34 was obtained. This result indicated that there was no publication bias.
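The pooling workflow described above can be sketched as follows. This is an illustrative re-implementation in Python rather than the CMA software used by the authors, and the per-study W-DEQ-A summary statistics are hypothetical placeholders, not the extracted data.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g (bias-corrected standardized mean difference) and its variance."""
    s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    g = (1 - 3 / (4 * (n1 + n2) - 9)) * d        # small-sample correction
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

# Hypothetical (mean, SD, n) for intervention vs. control W-DEQ-A scores, 4 studies.
studies = [(55.1, 20.3, 32, 68.9, 21.0, 31),
           (60.4, 18.7, 48, 72.2, 19.5, 47),
           (63.0, 22.1, 30, 69.8, 20.6, 29),
           (58.6, 19.9, 85, 66.1, 21.4, 84)]
g, v = map(np.array, zip(*(hedges_g(*s) for s in studies)))

# Fixed-effect (inverse-variance) pooling with Q and I2 heterogeneity statistics.
w = 1 / v
pooled = np.sum(w * g) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
q = np.sum(w * (g - pooled) ** 2)
i2 = max(0.0, (q - (len(g) - 1)) / q) * 100
print(f"g = {pooled:.3f} (95%CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f}); "
      f"Q = {q:.2f}, I2 = {i2:.1f}%")
```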
When the four studies included in the meta-analysis were considered together, the mean age of the pregnant women ranged between 30 and 33 years. The participants consisted of both multiparous and nulliparous pregnant women who were in the 12th–29th gestational week, received mindfulness-based childbirth education during pregnancy, and for whom the effectiveness of the education was assessed with the Wijma Delivery Expectancy/Experience Questionnaire (W-DEQ-A). shows the characteristics of the participants, the intervention details, the outcome measures of the studies and additional information about the intervention in the experimental and control conditions . In the study, Q statistics and I² values were analysed for the heterogeneity test. As a result of the analysis, the Q value was 4.761 (p=0.19) and the I² value was 37.99%, indicating a low level of heterogeneity. The fixed-effect model was therefore analysed, and an effect size of −0.734 (95%CI: −1.049 to −0.419) was obtained, which was statistically significant and of medium magnitude (p=0.00). In order to render the effect size of −0.734 obtained according to the fixed-effect model insignificant (taking 0.001 as the criterion), Orwin's fail-safe N value was calculated as 716. Kendall's tau analysis resulted in a test value of 0.34, which indicated that there was no publication bias (p=0.367). Based on Egger's regression analysis method, the β0 value was obtained as −1.203, the t-value as 0.555 and the p-value as 0.317 ( and ). All four articles showed that mindfulness-based childbirth education provided to pregnant women alleviated the fear of childbirth – . The studies compared routine antenatal care with mindfulness-based childbirth education. The results of two studies showed that mindfulness-based childbirth education was more effective in alleviating the fear of childbirth compared to routine care (Byrne et al., p<0.01; Kuo et al., p<0.001), while the other two studies found that W-DEQ scores were lower than the scores obtained before the education, although the difference between the groups was not statistically significant (Duncan et al., p=0.48; Veringa et al., p=0.045). Fear of childbirth is an experience with negative consequences for maternal and newborn health . Fear of childbirth, experienced at mild, moderate and severe levels, can lead to complications during childbirth, difficulties in the mother–infant relationship, and depression and anxiety disorders in pregnant women . Birth, known as a miraculous experience in the life cycle of women, is perceived as a threat by some women, in whom fear of childbirth emerges. Here, it is highly important to transform the woman's perception of childbirth from "fear" into a positive one. One of the methods reported to be effective in reducing fear of childbirth in recent years is the mindfulness-based approach. Mindfulness-based practices, initiated in the prenatal period, focus on breathing exercises, attachment to the infant, awareness of emotions, and the use of these practices during childbirth . In the present comprehensive meta-analysis of the effectiveness of mindfulness-based childbirth education in reducing the fear of childbirth in pregnant women, mindfulness-based education provided to pregnant women was found to be effective in reducing fear of childbirth. Risk factors for fear of childbirth should be identified through a detailed history taken during pregnancy follow-up, and fear of childbirth in pregnant women should be assessed. Once the level of fear of childbirth has been determined, interventions such as education, counselling and childbirth support can be provided to reduce the fear of childbirth and to inform women about childbirth .
In one study, it was reported that women's fear of childbirth was reduced by training provided by healthcare professionals . A systematic review assessing mindfulness and perinatal mental health showed that 8-week mindfulness-based programmes applied to pregnant women lowered perceived stress, anxiety and depressive symptoms and the level of postpartum depression, and concluded that mindfulness-based programmes elevated the levels of mindfulness and self-compassion of pregnant women . The use of a mindfulness-based model in childbirth preparation training has a positive effect on reducing fear of childbirth and on maternal and neonatal health . In a randomised controlled study conducted with 63 pregnant women, participants in the intervention group underwent mindfulness-based cognitive behavioural therapy, while women in the control group received only routine antenatal care; the mean anxiety and depression scores of the intervention group were significantly lower than those of the control group . In another randomised controlled study conducted with 96 pregnant women, the intervention group attended a mindfulness-based childbirth and parenting programme, while the control group attended routine childbirth preparation education classes; the 8-week mindfulness-based programme effectively lowered perceived stress and depression in pregnant women and raised self-efficacy and mindfulness in childbirth . In a further study of 60 women in their first pregnancy at 24–36 gestational weeks, the pregnant women in the intervention group received a mindfulness-based stress reduction programme alongside routine care; immediately after the intervention and 1 month later, significant reductions in the anxiety symptoms of the pregnant women were observed . Studies in the literature thus show that mindfulness-based childbirth education is effective in improving the mental health of pregnant women and reducing the fear of childbirth. Mindfulness-based education provided to pregnant women was found to be effective in reducing the fear of childbirth. It is considered that the integration of mindfulness-based education into routine pregnancy follow-ups may have positive effects on the psychological well-being of pregnant women. It is also considered that reducing the fear of childbirth would increase vaginal childbirth rates and lower caesarean section rates, as well as providing a more comfortable childbirth experience for pregnant women. Large-scale, meticulously designed studies are required to further confirm the results of this meta-analysis.
An Innovative Workshop Embedding Pathology Service Users into the Undergraduate Biomedical Science Curriculum
502e4aae-ad31-415e-b490-97bac66b03af
10442479
Pathology[mh]
To ensure the delivery of high-quality patient care and pathology services, it is imperative to have a thorough understanding of the needs of patients. The integration of service users into the biomedical science curriculum has been driven by the refinement of the Health and Care Professions Council (HCPC) Standards of Education and Training (SETs), which explicitly state that “service users and carers must be involved in the programme (SET 3.7)” and “the learning outcomes must ensure that learners meet the Standards of Proficiency for the relevant part of the register (SET 4.1)” . The HCPC Biomedical Scientist Standards of Proficiency (SOPs) have underscored the importance of incorporating patients’ perspectives and listening to patients’ voices to enhance the delivery of pathology services and patient care . While the HCPC’s definition of “service user” refers to individuals who utilise or are impacted by the services of HCPC professionals, it has historically been challenging to define pathology “service users,” as pathology laboratories were typically located at the periphery of hospitals with limited interaction with the ultimate service user . Since 2014, the HCPC has required all programmes approved by the regulatory body to involve “Service users and carers” in the programme . However, the revised HCPC Standards of Proficiency of September 2023 have emphasised the “central role of the service user” and the requirement for “registrants to understand the importance of valid consent and effective communication in providing good care.” In addition, registrants should be “promoting public health and preventing service users’ ill-health” and understand “the importance of valid consent and effective communication in providing good care” . The timing of these revisions coincides with a shift in public knowledge, where patients now have a better understanding and a greater appreciation of the role of laboratory medicine in the diagnosis and treatment of disease . The COVID-19 pandemic served as a catalyst for raising public awareness and recognition of the critical role played by Biomedical Scientists (BMS) in the United Kingdom in the processing and testing of COVID-19 samples . Before the pandemic, patients were primarily familiar with the role of medical professionals such as doctors and nurses in providing healthcare services, whereas the pandemic drew attention to the vital role of laboratory workers who operate behind the scenes in testing and diagnosing diseases . The evolution of point-of-care testing (POCT) in the last decade has brought about significant changes to the role of Biomedical Scientists as diagnostic testing has become more accessible across healthcare pathways. Many commercial POCT manufacturers recognise the value of close working relationships with BMS and have established collaborative working and development groups . However, the COVID-19 pandemic has dramatically changed the role and responsibilities of BMS, thereby necessitating a corresponding adaptation in the training of future biomedical science students . A BMS processes hundreds of patient samples on a typical workday, which can lead to a lack of appreciation for the fact that each sample represents an individual patient. Thus, it is imperative for biomedical science students to be conscious of the importance of test results for patients. 
It is important to recognise that medical professionals, such as doctors and nurses, who order laboratory tests are considered service users for pathology laboratories; however, the primary beneficiaries are ultimately the patients themselves. The involvement of patients in medical education has become a standard practice among educators . The General Medical Council (GMC) has long recognised the value of patient involvement and requires educators to incorporate a variety of patient-centred sessions into the undergraduate curriculum . However, there is still much to be learned about how to systematically integrate patient involvement into other allied healthcare courses. Studies have demonstrated that both patients and practitioners benefit from a patient-centred curriculum . Patients take on the role of educators, teaching students about patient-centred care and the importance of patient autonomy, and helping to make education increasingly engaging and transformative . As BMSs rarely interact directly with patients on a daily basis, the involvement of patients in the curriculum reinforces the importance of the patient being behind every sample. Medical educators and patients have joined forces in promoting patient-centeredness; however, BMS service users have yet to be fully integrated into the biomedical science curriculum in the same way. Reflecting upon the experience of patients can assist learning and professional development, and reflective writing is considered a core element in medical education that promotes critical thinking, better communication, and empathy skills . Therefore, the aim of this study was to embed patients and BMS service users into the undergraduate biomedical science curriculum through a "service user event" with a reflective assessment to enhance students' knowledge and understanding of the impact of pathology laboratory results on the NHS service and ultimately the patient. The steps involved in the creation of the novel, innovative service user event are detailed in and can be adopted by other higher education institutes that require the incorporation of service users into their curriculum.

Service User Event

A service user event workshop was created and facilitated by academics from the School of Biosciences, Aston University, United Kingdom, for final-year biomedical science students. The event is part of the 30-credit final-year "Professional Development for Biomedical Scientists" module. The workshop was scheduled for 4 hours, and the following pathology service users were invited as guest speakers: a patient with beta-thalassaemia major, a patient diagnosed with a giant cell tumour, a Microbiology Consultant, a director of a primary care provider, and a patient referral optimisation officer. Both patients and service users provided consent to participate in the workshop. The patients discussed how pathology services have contributed to their diagnosis and treatment, while service providers discussed their roles and their interactions with pathology services. All speakers highlighted issues that have affected the delivery of an optimal service. This was followed by an interactive class discussion, which was directed by the assessment brief.
Service User Reflection Assessment

Following the event, all students were required to complete a 750-word reflective piece, with this assessment contributing to 33% of the overall module mark ( : Assessment Brief). This assessment required students to reflect on the voices of the different service users and how taking action to address problematic areas in healthcare can enhance the delivery of biomedical science and improve patient care. Students were directed to specifically comment on several areas, such as 1) past and current challenges to the delivery of pathology services; 2) advancements in point-of-care testing (POCT) and its increased use in the diagnosis and monitoring of disease; 3) the changing role of Biomedical Scientists and diagnostic laboratories in healthcare; 4) the increased awareness of the profession following COVID-19; and 5) empowering patients to understand and access their results. To support students in writing a reflective piece, they were provided with a range of resources, which included a workshop that gave them an opportunity to practise reflective writing, along with links to the marking scheme. Students were also given a generic example of a reflection and were asked to work in mixed groups to mark the reflective piece and provide feedback according to the assessment marking scheme. In addition, students were directed to further reading that covered the importance of reflective writing for practitioners and how to write in-depth reflections. Students were also able to attend an additional drop-in session to ask any questions they had regarding the assessment ( : Marking Rubric).

Collecting Student Feedback and Analysing the Results

Final-year biomedical science students' experiences of the service user event were collected following submission of the reflective assessment through an eight-item online questionnaire . Ethical approval was granted by the Health and Life Sciences Ethical Committee (Project #1494). Students were invited to participate in the study by email and were provided with a link to the online survey via the virtual learning environment. Online consent was required before accessing the questions. Students completed questions asking whether, after the submission of their reflection, they had an increased understanding of: (1) the impact of pathology results on service users and effective communication in providing patient care; (2) the changing role of the Biomedical Scientist in the patient pathway; (3) the value of embedding patients in the biomedical science curriculum to improve the delivery of healthcare; and (4) the value of continuous reflective practice and its role in asking difficult questions and finding meaningful answers. The questions reflected the revised 2023 HCPC Standards of Proficiency, which address embedding service users within the biomedical science curriculum . A mixed-methodology approach was adopted, which included open- and closed-ended questions, and the results were analysed both quantitatively and qualitatively. To compare the responses of biomedical science students pre- and post-completion of the reflective assessment, a Chi-squared test was used to determine statistical significance (p < 0.05). Free-text responses were analysed using thematic analysis . The researchers read the data for familiarity, generated codes to form initial themes, and checked for plausibility. The process was repeated by all three members of the team, and the final themes were collectively agreed upon to produce the thematic analysis.
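The pre/post comparison of categorical survey responses described above amounts to a chi-squared test on a contingency table; a minimal sketch follows, with illustrative counts rather than the study's raw data.

```python
from scipy.stats import chi2_contingency

# 2 x 2 table of confidence in reflective writing (counts are invented to
# roughly approximate ~18% "confident" before vs. ~76% after, n = 57 each).
table = [[10, 47],   # before the assessment: confident, not confident
         [43, 14]]   # after the assessment:  confident, not confident

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")
```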
A total of 99 students were enrolled onto the module, and all attended the service user workshop; 57 of them completed the post-event online survey. To better understand the demographics of the student cohort, they were asked if they had worked in the NHS in the last 3 years. A total of 20 (35.1%) respondents stated that they had worked in the NHS, with 9% of respondents having completed their Institute of Biomedical Science (IBMS) registration portfolio as a Trainee Biomedical Scientist during their placement year. Other roles included: Medical Laboratory Assistant, Administration and Clerical Staff, Domestic Assistant, Dental Receptionist, Vaccination Support Officer, and Clinical Trial Support Officer.

Incorporation of Revised HCPC Standards of Proficiency

Following the service user event, students were asked to reflect on whether their knowledge and understanding of some of the revised 2023 HCPC Standards of Proficiency for all 15 HCPC-registered professions had improved. An overwhelming percentage of respondents either "strongly agreed" or "agreed" that the session increased their knowledge and understanding of: "public health and prevention of service users' ill-health" (94.8%); "the role of equality, diversity, and inclusion, with specific importance placed on ensuring practice is inclusive for all service-users" (87.7%); "the central role of the service-user, including the importance of valid consent and effective communication in providing good care" (98.3%); "the importance of leadership at all levels of practice" (91.3%); and "the need to be able to use information, communication and digital technologies appropriate to practice" (96.5%) .

Reflective Writing Can Emphasise the Central Role of Service Users Within the NHS

Students wrote a 750-word reflection following the service user event. Post-assessment, over 93% of respondents either "strongly agreed" or "agreed" that the service user reflective assessment reinforced that "communication amongst Biomedical Scientists" and "listening to service users" are essential in delivering effective patient care through service improvement. Furthermore, 94.8% of respondents either "strongly agreed" or "agreed" that the reflective piece emphasised the importance of both "the role of the Biomedical Scientist within the pathology laboratory" within "the patient treatment pathway" and the "limitations that may negatively impact" the service and ultimately patient results. Lastly, on average, over 90% of respondents either "strongly agreed" or "agreed" that the reflective assessment increased their understanding of "POCT and other laboratory advancements within the NHS" available to reduce diagnostic turnaround times for "effectively treating patients" .

Benefits of Embedding Patients into the Biomedical Science Curriculum

Students were asked for their views regarding the inclusion of patients in the curriculum. Remarkably, 100% of respondents either "strongly agreed" or "agreed" that "embedding patients in the biomedical science curriculum can improve the delivery of healthcare." Furthermore, 94.8% reported that "contact with the patient lies at the heart of clinical education," and 98.3% saw "the value of self-reflection and its role in asking difficult questions and finding meaningful answers" .

Improved Confidence in Reflective Writing for Biomedical Science Students

Prior to the completion of the service user reflection, only 17.6% of final-year respondents were either "very confident" or "confident" in reflective writing, compared to 76.3% post-assessment (p < 0.001) .
Free Text Responses for Thematic Analysis

To gauge a better understanding of what students felt about the impact of embedding patients into the biomedical science curriculum, a thematic analysis was conducted on the responses to the free-text question "Q11. What is the impact of embedding patients into the biomedical science curriculum?" In total, 46% (n = 26) of the respondents answered question 11. Some of the students' responses fell into multiple themes, and once these were analysed, the five final themes identified were categorised as shown in . The most prominent themes identified were 1) realisation that there is a patient behind each sample, 2) helping to identify improvements for pathology services/healthcare and 3) reinforcing the importance of a Biomedical Scientist in the patient pathway.

Theme 1: Realisation That There is a Patient Behind Each Sample

A total of 42% of respondents reported that embedding patients in the curriculum reinforced that there is a patient behind each sample and that BMS must always work to a high standard. Listening to the service users reinforced how important each test result is to the patient. Comments included:

"The event reinforced the importance of pathology results for patients. It can sometimes feel like a process in the laboratory, but each sample is linked to a patient outcome." [SIC]

"Directly including patient experiences in education helps future BMS to recognise the role they play in the patient pathway and emphasises the potential impact they can have on patients, e.g., high standards can save lives, but low standards can cause harm." [SIC]

"One of the core principles of the NHS and public health is ensuring patient is at the "heart" of everything it does. Embedding this principle into student mindset early is integral in ensuring they efficiently perform their particular roles in the future. Future events where we listen to the voice of service users, and their experiences will be progressively beneficial in developing our character so we can best serve our patients and their communities." [SIC]

"Working in laboratories often means little to no patient contact, so hearing patients' stories helps to reinforce that there's a patient behind each sample and how what we do in the lab directly impacts patients." [SIC]

Theme 2: Helps Identify Improvements for Pathology Services/Healthcare

As seen in Theme 1, 42% of respondents also reported that reflecting on service user experiences enabled them to identify improvements for pathology services. These included reducing sample turnaround times, identifying bottlenecks in current service provision, and understanding complex diseases and the impact of complications on patients.
Comments included:

"Patient experiences can help us to identify weaknesses/areas of improvement in the NHS so this has a key role in improving services for its users." [SIC]

"Understanding patient perspectives can help improve services delivered by BMS." [SIC]

"It is important to include patients where necessary into education of healthcare so that we can further our knowledge in terms of a particular disease, its symptoms and any unpredictable complications." [SIC]

"This would improve the time taken to provide patients with relaying test results and any form of diagnostic tests needed to be performed. This would also prevent tests that are not relevant from being tested for that patient as BMS are equipped with much more knowledge within this region." [SIC]

Theme 3: Reinforces the Importance of the Role of a Biomedical Scientist in the Patient Pathway

The third most common theme, identified by 31% of respondents, was the essential role played by Biomedical Scientists in the diagnosis, monitoring, and treatment of patients. Specific comments made by respondents included:

"Embedding patients into the curriculum is important for biomedical science students. I feel that it provides perspective of the effects that your decisions can ultimately result in when working as a Biomedical Scientist." [SIC]

"The patient experiences were enlightening. Biomedical Scientists play a crucial role in patient care and should remember how important test results are for individual patients." [SIC]

This study aimed to create an opportunity for final-year biomedical science undergraduate students to engage with service users. The "service user event," accompanied by the reflective writing assessment, involved active student engagement with pathology service users, including both patients and practitioners, to foster a culture of reflective practice among students. The reflective assessment increased students' awareness of the critical role of pathology laboratory results in ensuring optimal patient care while highlighting strategies to improve existing NHS services to enhance patient experience and outcomes.

Future Work and Study Limitations

Several improvements were suggested by respondents through the open-ended component of the survey. These included increasing the number and diversity of speakers within the workshop to provide a greater overview of the service users of the pathology laboratories within the NHS. In line with this, respondents were in favour of incorporating more patient-focused speakers, as sharing their direct experiences will improve the services within public health. Students also expressed a preference for the opportunity to discuss high-profile cases involving pathology services that affected patient outcomes, such as the case of Dr. Bawa-Garba . Students recognised inequalities in the way that Dr. Bawa-Garba was treated in relation to other medical professionals involved with the patients, highlighting the need for embedding equality, diversity, and inclusion training in education and as part of CPD . The inclusion of ethnically diverse patients and speakers from a wider range of healthcare roles will help to better prepare students for future employment, where they will have to work with a more varied population requiring personalised approaches to healthcare . In line with other literature, respondents in the current study recognised the value of reflection and expressed an interest in its incorporation as both a formative and summative assessment throughout their biomedical science degree. Previous literature has emphasised the use of curriculum mapping to identify gaps in the curriculum and allow for the constructive alignment of graduate outcomes and assessments . Through the use of curriculum mapping, reflective assignments will be further embedded within the biomedical science curriculum. In terms of study limitations, at this year's service user event we were unable to include Biomedical Scientists and Advanced Practitioners due to timetabling constraints and their availability on the day.
In previous events, we have had both professions attend, and in the future we will endeavour to include representation from Biomedical Scientists and Advanced Practitioners. Additionally, to enhance attendance and accessibility, we will explore the option of adopting a hybrid approach. This may involve facilitating online participation for professionals, allowing them to connect remotely and interact with students during the event. Furthermore, the overall response rate to the post-workshop survey was 57.5%. While this is higher than the average response rate for similar surveys, which usually generate a 30%–40% uptake , it could be improved. One suggestion for future work is to collect "before" evaluation data, as this would provide a useful comparison with students' learning and skill development following the service user event. In addition, offering financial incentives, such as gift vouchers, would increase the number of survey responses collected, an initiative that is widely used . Finally, the service user event was held face-to-face on campus. Due to the increasing size of the biomedical science student cohorts each year, this can often present logistical challenges, such as finding suitable learning environments and space . Moreover, it can be difficult for patients and service users to travel to university campuses, which may not be local or easily accessible due to their conditions. One potential solution is to host an online service user event, although this may come with its own challenges, such as negatively impacting student-service user discussions.

Incorporation of Revised HCPC Standards of Proficiency

The Biomedical Science degree at Aston University is an HCPC-approved course and an IBMS-accredited degree programme. The event and the assessment clearly met and highlighted the importance of the revised 2023 HCPC Standards of Proficiency , with an overwhelming number of respondents reporting an increased understanding of how to meet the needs of service users, with key themes being communication, consent, leadership, and use of information . Other studies involving patients within the undergraduate curriculum have reported the importance of creating a diverse learning environment to make education more engaging, powerful, and transformative while ultimately empowering patients . Patients have reported that their involvement in the undergraduate curriculum allows students to hear an alternative perspective in order to better understand their conditions, thus highlighting their empowerment . Our study identified a creative teaching and learning method involving patients and service users . The workshop itself created an environment that was student-directed, participatory, and constructivist by allowing students to openly ask patients about their experiences . The students produced a 750-word reflective assessment to evaluate patient experiences, underpinned by Bloom's revised framework, which requires students to remember, understand, apply, analyse, and evaluate patient experiences and pathology services . A total of 94.8% of final-year respondents reported an increased understanding of "public health and prevention of service users' ill health" . Students are taught about the clinical presentation, diagnosis, and treatment of haemoglobinopathies as part of the biomedical science curriculum.
However, through the service user event, they learned about the experiences of a patient living with beta-thalassaemia major, who highlighted how errors made in the laboratories have substantially impacted their life. Through the reflective assessment, the students were able to identify instances of good practice and laboratory advancements that could have potentially prevented the transfusion reaction experienced by the patient. Moreover, the students drew attention to emerging technologies such as point-of-care testing (POCT) devices, which offer swift and accurate results, enabling patients to actively manage their conditions and Biomedical Scientists to participate remotely in their care . Similarly, students are taught the fundamentals of cancer biology as part of their undergraduate degree. However, the inclusion of a patient with a history of a giant cell tumour introduced students to a malignancy that they would not otherwise have studied. After listening to this patient's experience, the students acquired new knowledge about the aetiology of a giant cell tumour, the difficulties surrounding the patient's original misdiagnosis, and the long-term complications that they experienced as a result. After listening to this service user's experience, the students were encouraged to undergo the three-stage process of reflection, which includes a recollection of the experience, attending to one's own feelings, and re-evaluating the experience . Students resonated and empathised with the patient's difficult experience, and upon re-evaluation, the majority felt that this patient's experience could have been improved. Suggestions included reducing turnaround times for both pathology and medical imaging tests, improving clinician-patient communication to empower the patient, and increased application of POCT as part of the initial diagnostic testing in an emergency care setting . The COVID-19 pandemic has hastened changes that were already happening within the biomedical science profession with regard to greater automation and POCT . The Microbiology Consultant introduced students to the dynamic profession of a Biomedical Scientist, with wider adoption of molecular technologies and laboratories becoming much more responsive to clinical needs . Furthermore, he showcased how clinical services are also changing and highlighted the need for greater efficiency . On reflection, students noted that medical staff are often less experienced in understanding and requesting appropriate tests, underscoring the role of the Biomedical Scientist for undergraduate students. Furthermore, students recognised that patient conditions are becoming increasingly complex, requiring more expert advice from laboratories. Students recognised that the traditional role of a Biomedical Scientist will continue to be an important part of the patient pathway in the future, while also recognising the need for the role to adapt to reflect technological advances and changing clinical needs . The adaptation of the traditional role of a Biomedical Scientist is already evidenced by the advent of the "Advanced BMS Practitioner" role being introduced into clinical practice . Additionally, the revised HCPC SOPs emphasise the importance of "digital skills and new technologies," where registrants must be able to "change their practice as needed to take account of new developments, technologies, and changing contexts" . The student survey identified the importance of effective communication as a key theme.
The primary care provider gave examples of transcription errors in labelling specimens and highlighted the negative impact these had on patients awaiting test results. This situation emphasised to the students how errors that are not communicated to the patient can heighten health-related stress, a theme that was identified by the students as part of their reflective assessment . The patient referral optimisation officer provided insight into the NHS specialist allocation scheme and highlighted to the students some of the challenges that patients can face when enrolled in this scheme. Students reflected on how a negative interaction with a service provider can have a devastating long-term impact on patients. Students saw the value of communicating effectively while asking difficult questions and finding meaningful answers through the use of reflection . As the students identified the importance of effective communication with service users and stakeholders, this workshop met both the HCPC SOPs and the new QAA benchmark statement regarding communication . Other work has highlighted the importance of developing communication and interpersonal skills in undergraduate students . Reflection constitutes a crucial element of continuing professional development (CPD) for healthcare professionals; it is firmly ingrained in the Standards of Proficiency for Biomedical Scientists, serving to safeguard ongoing standards of practice . Following the completion of the reflective assignment, there was a significant increase in students' confidence when writing reflectively . The ability to reflect is an important and necessary lifelong skill that is highly sought after by employers in an ever more competitive graduate market. Although biomedical science programmes effectively educate individuals in highly specialised areas, transferable skills such as critical thinking, effective communication, and the ability to reflect are often lacking . Biomedical science programmes need to prioritise the inclusion of skill-development opportunities through their portfolio of assessments, not only for current students but also to prepare them to become lifelong reflective practitioners . The benefits of patient involvement in the biomedical science curriculum are multifactorial, positively impacting patients, students, and education providers. These benefits are summarised in . We hope the workshop can be widely adopted by other higher education institutes. While professional bodies require programmes to include service users within the biomedical science curriculum, pathology service users are often hard to identify. This large-scale workshop was successful in creating a platform that encompassed a range of pathology service users while prompting meaningful discussions between students and these service users. The reflective assessment deepened students' understanding of the need for efficient NHS pathology services and the crucial role of a Biomedical Scientist in the diagnosis and monitoring of disease. The workshop was an important activity not only in addressing the HCPC SETs in relation to service user involvement but also in providing an opportunity to ensure that undergraduate biomedical science students gained an active appreciation of all the current revised HCPC SOPs. Through the reflective assessment, an overwhelming number of students saw the benefits of including pathology service users in the curriculum and developed important transferable skills that are required in graduate careers.
We recommend that other IBMS-accredited and HCPC-approved Biomedical Science programmes adopt and embed this innovative workshop into their programmes to help them meet these service user standards while fostering important transferable skills in their students.

What is Known About This Subject
• Medical and Nursing programmes have successfully included patients in their undergraduate curriculum.
• The revised HCPC SETs require Biomedical Science courses to include service users in the curriculum.
• Including patients in a medically focused curriculum facilitates the development of essential transferable skills.

What This Paper Adds
• A novel approach to embedding pathology service users and the revised HCPC SOPs into the Biomedical Science curriculum.
• Using a pedagogical framework, the reflective assessment encourages students to become reflective practitioners.
• The reflective assessment enhances students' knowledge and understanding of the impact of pathology results on patients.

This work represents an advance in biomedical science because the innovative workshop developed reflective students who value pathology service users and improvements in NHS service delivery.
Using ACGME milestones as a formative assessment for the internal medicine clerkship: a consecutive two-year outcome and follow-up after graduation
df5464a9-d306-4254-9d19-fb4d583a512f
10916194
Internal Medicine[mh]
With the reform of medical education, competency-based medical education (CBME) has evolved into an outcome-based and learner-centered approach. In this approach, learners engage in experiential learning, receive constructive feedback, and participate in reflective practices to continually refine their skills and knowledge, ultimately acquiring the necessary competencies. Assessments therefore play a crucial role in CBME to ensure the sequenced progression and final achievement of competencies by learners. In 2013, the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Internal Medicine introduced the Internal Medicine (IM) Milestones, breaking the six core competencies into 22 subcompetencies. Each subcompetency is categorized into five levels describing expected behavioral progress, ranging from critical deficiency to the ability to work independently and, ultimately, aspirational performance. By defining the expected behavioral progress at each level, the Milestones provide explicit references for assessing learners' competencies, outline the behaviors required for advancement in medical training, and serve as both summative and formative assessments. Emerging evidence from the United States indicates that milestone-based assessments are a viable approach for evaluating the competency of medical students, as they provide a succinct summary of their performance. The Vanderbilt University School of Medicine has developed and applied milestones for a set of focused competencies within its curriculum; this work indicated that milestone-based assessment has significant potential to guide medical students' development. The Johns Hopkins School of Medicine has developed a milestone template to capture the unique characteristics of its genetics elective, and this template received positive feedback from students participating in the curriculum. The University of Michigan Medical School has successfully designed 24 milestones specifically tailored to assess fourth-year medical students during their emergency medicine (EM) clerkship, an outcome that highlights the value of developing a valid and reliable method for evaluating the performance of medical students. Likewise, the University of South Florida Morsani College of Medicine found that the ACGME EM Milestones can effectively identify medical students requiring remediation. The University of Michigan Medical School also reported that 12 of the 16 subcompetencies of the ACGME General Surgery Milestones could be used to assess the longitudinal development of competencies from surgery clerkship to surgery internship. These individual reports provide evidence supporting the suitability of milestone-based assessment in undergraduate medical education (UME). Based on these findings, it may be feasible to select specific subcompetencies of the ACGME IM Milestones to assess medical students' daily observed clinical activities during their IM clerkship. In this way, clinical teachers can provide targeted feedback while assessing medical students' performance in these areas. Learners exhibit notably different patterns of progress depending on the specific subcompetency under assessment. Since 2019, the 22 subcompetencies of the ACGME IM Milestones have been successfully implemented in assessing post-graduate year (PGY)-2 residents in their IM training and IM residents in 6 teaching hospitals in Taiwan.
Through the detailed application of the ACGME IM Milestones to the clerkship, the assessment outcomes can reflect learners' progress toward competence, highlighting diverse learning paths. Recognizing the points at which learners' developmental trajectories diverge can reveal potential opportunities for remediation within these subcompetencies. Furthermore, applying the ACGME IM Milestones in the clerkship can promote the alignment of educational objectives and assessment methods across different stages of medical training and facilitate the seamless extension of CBME from UME to graduate medical education (GME). The objectives of this article are fourfold: (1) to choose readily observable competencies from the ACGME IM Milestones, specifically emphasizing patient care (PC) and medical knowledge (MK); (2) to evaluate medical students and analyze the assessment results; (3) to relay the feedback received from medical students who were observed during their two-year clerkship; and (4) to trace the developmental progress of students who continued their training with a first-year PGY program at the same university hospital. This study assesses the feasibility of utilizing the Milestones as a formative assessment tool to bridge the transition between UME and GME. The Methods cover the following: students, post-graduate year-one residents, and clinical teachers; clinical activities of the clerkship; the Taiwanese version of the IM Milestones; using Milestones to assess learning outcomes and provide feedback; the learning curve; straight line scoring; the characteristics of the patients; end-of-rotation surveys; and statistical analysis. Following the approval of the Institutional Review Board of National Cheng Kung University Hospital, granted as an expedited review (A-ER-111-473-T), we retrospectively collected the above-mentioned data of the 65 medical students who underwent IM rotations in the academic years 2019 and 2020 and of the 26 PGY-1 residents who underwent the IM training courses in the academic year 2021. The data collected during 2019/08/01-2022/07/31, for which the Institutional Review Board of National Cheng Kung University Hospital waived the participant informed-consent requirement, included the medical records written by the medical students in their IM rotations, the mini-CEX results, and the Milestones assessment results of students and PGY residents in their IM training courses. The progress in the Milestones levels of the seven subcompetencies of the same students, from the two-year IM clerkship through the PGY-1 IM training, was analyzed using the Friedman test. Differences in ratings among the seven subcompetencies of the Milestones and among the seven categories of the mini-CEX were assessed using the Kruskal-Wallis test, followed by Dunn's post hoc test. Effect sizes between students in the first- or second-year clerkship and PGY-1 training were calculated using Cohen's d. Differences in categorical variables between the first- and second-year IM clerkships, such as the rate of SLS, the reason for hospitalization, and the rate of patients needing a handover, were compared using the Chi-square test. Differences in the number of patients and the length of hospital stay of patients assigned to medical students between the first- and second-year IM clerkships were assessed using the paired t-test and the independent t-test, respectively. The Wilcoxon signed-rank test was used to analyze the changes in scores on the end-of-rotation survey between the first- and second-year IM clerkships.
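As an illustration of the statistical plan just described, the sketch below runs the Friedman, Kruskal-Wallis, and Cohen's d computations in Python with SciPy and NumPy; the arrays are toy placeholders, not study data, and the study itself used SPSS rather than Python.

```python
# Sketch of the main statistical comparisons described above, using SciPy.
# Array contents and names are illustrative placeholders, not study data.
import numpy as np
from scipy import stats

# Repeated Milestones ratings of the same learners at three stages
# (first-year clerkship, second-year clerkship, PGY-1) for one subcompetency.
year1 = np.array([3.0, 2.5, 3.0, 3.5, 3.0])
year2 = np.array([3.0, 3.0, 2.5, 3.5, 3.0])
pgy1 = np.array([3.5, 4.0, 3.5, 4.0, 3.5])

# Friedman test for progress of the same learners across the three stages.
chi2, p_friedman = stats.friedmanchisquare(year1, year2, pgy1)

# Kruskal-Wallis test for differences among independent groups, e.g. the
# ratings of different subcompetencies; Dunn's post hoc pairwise contrasts
# could follow via the scikit-posthocs package.
pc2 = np.array([2.5, 3.0, 2.5, 3.0, 2.5])
sbp4 = np.array([3.0, 3.5, 3.0, 3.5, 3.0])
prof1 = np.array([3.0, 3.0, 3.5, 3.0, 3.0])
h, p_kw = stats.kruskal(pc2, sbp4, prof1)

# Cohen's d (pooled-SD form) between clerkship and PGY-1 ratings.
def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return float((b.mean() - a.mean()) / pooled)

print(p_friedman, p_kw, cohens_d(year1, pgy1))
```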
Two-tailed analytical results with p-values less than 0.05 were regarded as statistically significant. Data were analyzed using SPSS Statistics for Windows, Version 23.0 (IBM Corp., Armonk, NY). From September 2019 to June 2021, 65 medical students from a medical college in southern Taiwan participated in an IM clerkship program. All the students belonged to a single class cohort, and no one opted out during the two-year clerkship. The two-year IM clerkship program consisted of a twelve-week course divided into two six-week rotations, one within each year of the clerkship. During the six-week rotation each year, students rotated through diverse subspecialties every two weeks. In the first year, the specialty rotations included the gastroenterology, cardiovascular disease, and pulmonology sections. In the second year, in addition to rotating in the nephrology section, students could choose two other elective subspecialties, such as general medicine and infectious disease, for their learning course. Of these 65 medical students, 26 underwent PGY-1 training in the same university hospital after graduation; they rotated in the Department of IM for three months as part of their training, which allowed us to closely observe and trace the development of their subcompetencies in the ACGME IM Milestones. The IM department comprised 107 attending physicians, of whom 60 had completed the general medicine teacher training conducted by the Taiwan Association of Medical Education. Before implementing Milestones assessments within the Department of IM, we organized two training sessions to elucidate the objectives and procedures of the Milestones assessment. In line with the original ACGME design, our institution adopted the Milestones for assessing the competency of PGY residents in August 2018, so when we extended Milestones assessments to the IM clerkship, all the attending physicians already had a year of experience with them. During the IM clerkship, medical students assumed the role of frontline providers of patient care. Each medical student was mentored by an attending physician and received close supervision and guidance from experienced residents and the attending physician while rotating through each subspecialty. One of their primary responsibilities was documenting all medical records, including admission or on-service notes, progress notes, weekly summaries, and discharge or off-service notes if patients could not be discharged at the end of the subspecialty rotation. In addition, medical students practiced bedside skills on their assigned patients. The level of supervision varied depending on the complexity and significance of the task. While students were encouraged to propose diagnostic and therapeutic plans, the supervising physicians had to agree to and sign those medical orders before execution; the supervising physicians ultimately retained overall responsibility for ensuring the quality of care. During the IM rotations, medical students participated in various assessment activities. The supervising attending physicians conducted short practice observation assessments every two weeks, which included 1 to 2 mini-clinical evaluation exercise (mini-CEX) assessments and audits of the students' patient care documentation. Furthermore, medical students presented their assigned cases to the care team during ward rounds, demonstrating their understanding of the patient's condition and their ability to communicate effectively within the team.
The original ACGME IM Milestones 1.0 version, introduced in 2013-2014, presented expected behavioral descriptions for each subcompetency along the developmental continuum. However, some descriptions employed complex language, predominantly laden with educational jargon. This complexity made it challenging for users to comprehend the descriptions, increasing the time and difficulty involved in the assessment. For the convenience of Taiwanese users, we used the Taiwanese 1.0 version of the IM Milestones in this study, which was translated into Chinese through a collaborative effort between the Taiwan Society of Internal Medicine and educational experts. This translation employed language readily comprehensible to both students and teachers to delineate the expected behaviors at each level of competency. Furthermore, our electronic portfolio (e-portfolio) system included additional explanations for specific behavioral descriptions through pop-out windows, enhancing clarity and making the assessment process and feedback more accessible. Based on the aforementioned clinical activities, we selected seven of the 22 subcompetencies outlined in the ACGME IM Milestones to assess students' performance. These subcompetencies were PC-1 (gathering information for defining the problem), PC-2 (management planning), PC-4 (bedside skill), MK-1 (clinical knowledge), MK-2 (diagnostic knowledge), systems-based practice (SBP)-4 (patient transition), and professionalism (PROF)-1 (professional and respectful interaction). Each attending physician evaluated their supervised student's progress and performed Milestones assessments according to the student's performance in the clinical activities at the end of each specialty rotation. They used the Taiwanese 1.0 version of the IM Milestones worksheet on the e-portfolio system and selected the statements that accurately matched the student's behaviors during the rotation. The e-portfolio system automatically determined the level of competence based on the selected statements. Following the original ACGME design, the competence levels were scored on a scale of 1 to 5, in increments of 0.5; each level represented a different degree of competence, ranging from critical deficiency to aspirational performance. The behavior statements chosen by the attending physician and the system's assigned levels were displayed on the e-portfolio system, allowing students to directly compare the two every two weeks, at the end of each subspecialty rotation, and in the sixth week after completing the IM rotation. This provided insights into areas requiring improvement. Furthermore, students could discuss with their supervising attending physicians strategies for attaining advanced-level behaviors. In total, medical students underwent Milestones assessments six times, with one assessment conducted every two weeks during the six-week rotation in both the first- and second-year IM clerkships. Instead of being used for summative decisions, the Milestones assessments conducted for medical students and PGY-1 residents served as formative assessments. We defined level 1 of the Milestones (i.e., the level of critical deficiency) as a low Milestones rating for a medical student; such a rating would necessitate further investigation to understand the underlying causes and determine appropriate measures to assist the student.
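Because the e-portfolio derives the 1-to-5 level (in 0.5 steps) automatically from the ticked statements, a toy sketch of one possible derivation is shown below; the statement counts per level and the aggregation rule are purely illustrative assumptions, not the actual ACGME worksheet or e-portfolio logic.

```python
# Hypothetical sketch: deriving a milestone level (1-5, 0.5 steps) from the
# behaviour statements a rater selects on an e-portfolio worksheet.
# Statement counts and the aggregation rule are assumptions for illustration.

STATEMENTS_PER_LEVEL = {1.0: 3, 2.0: 3, 3.0: 3, 4.0: 3, 5.0: 2}  # assumed counts

def derive_level(ticked: dict[float, int]) -> float:
    """ticked[level] = number of statements selected at that level."""
    level = 1.0
    for lvl in sorted(STATEMENTS_PER_LEVEL):
        n_sel, n_all = ticked.get(lvl, 0), STATEMENTS_PER_LEVEL[lvl]
        if n_sel == n_all:        # all behaviours at this level observed
            level = lvl
        elif n_sel > 0:           # level only partially demonstrated
            level = max(level, lvl - 0.5)
            break
        else:
            break
    return level

# Example: all level 1-3 behaviours plus one level-4 behaviour -> 3.5
print(derive_level({1.0: 3, 2.0: 3, 3.0: 3, 4.0: 1}))
```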
During the three-month IM rotation of the first-year PGY training, each PGY-1 resident received a Milestones assessment from their supervising attending physician at the end of each month, resulting in three assessments over the training period. The assessment encompassed all 22 subcompetencies of the Taiwanese 1.0 version of the IM Milestones, including the seven subcompetencies selected for medical students. To illustrate the diverse learning trajectories among the 26 students who completed their PGY-1 training at our hospital after graduation, we compiled a learning curve using the final Milestones assessment ratings from their first- and second-year IM clerkships and PGY-1 training. Straight line scoring (SLS), a string of identical ratings, occurs when a single learner receives the same score on the 9-point scale across all Milestones subcompetencies. Given that progress in each Milestones subcompetency typically varies among learners, achieving an SLS purely by chance is highly unlikely; assuming that a resident was accurately rated in each subcompetency, an SLS would seldom occur. Following the ACGME method for evaluating the results of Milestones assessments, we checked the SLS rate of our assessment results and calculated the rated level of these SLSs. Transitioning to new contexts poses challenges for medical students, as competent performance is context-dependent. The complexity of patients may affect the student's learning experience and performance. We therefore recorded the number of patients assigned to medical students during each specialty rotation to trace the development of their patient care capability. To capture contextual information, we categorized the cause of admission into two groups: patients admitted for scheduled procedures or treatments and patients admitted for acute illness. We also recorded the length of inpatient hospital stay and whether a handover was required at the end of the subspecialty rotation. Upon completing the first- and second-year IM clerkships, students were required to complete a satisfaction survey via the e-portfolio system. This survey employed a Likert scale with five response options: 1 (strongly disagree), 2 (disagree), 3 (neutral), 4 (agree), and 5 (strongly agree). The survey covered a wide range of aspects, including overall satisfaction with the IM clerkship, the extent to which the Milestones assessment results aligned with students' self-evaluation results, and the perceived usefulness of the Milestones assessment as a feedback mechanism for their ongoing learning. The Results cover the following: the number of Milestones and mini-CEX assessments and chart audits (the database); straight line scoring of Milestones assessment results (the quality of assessment); the distribution of rated levels in each Milestones subcompetency (the global view of assessment outcomes); the scores for each category of the mini-CEX assessment (another evaluation metric); the comparison of ratings among the seven subcompetencies (the development of Milestones subcompetencies); the progress of subcompetencies within the two-year clerkship and into PGY-1 training (the learning trajectory); the characteristics of patients assigned to medical students (the context of learning); and the feedback of medical students on the Milestones assessment (the perspective of students). During the two-year IM clerkship, the 65 medical students completed one Milestones assessment in each subspecialty rotation, resulting in 390 assessment results available for analysis.
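The SLS rate described above is straightforward to compute from a matrix of assessment results; in the sketch below, the row layout and toy ratings are illustrative assumptions, not study data.

```python
# Sketch: computing the straight line scoring (SLS) rate from a matrix of
# Milestones results (rows = one assessment of one learner, columns = the
# seven subcompetencies). Data layout and values are illustrative assumptions.
import numpy as np

ratings = np.array([
    [3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0],   # an SLS, rated at level 3
    [2.5, 3.0, 3.0, 3.5, 3.0, 3.0, 3.0],   # not an SLS
    [4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0],   # an SLS, rated at level 4
])

# A row is an SLS if every rating equals the first rating in that row.
is_sls = (ratings == ratings[:, [0]]).all(axis=1)
sls_rate = is_sls.mean()            # fraction of assessments that are SLSs
sls_levels = ratings[is_sls, 0]     # the level at which each SLS was rated

print(f"SLS rate: {sls_rate:.1%}; SLS levels: {sls_levels}")
```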
During this period, they underwent 273 mini-CEX assessments and 424 chart audits in the first-year IM clerkship; in the second-year clerkship, they completed 207 mini-CEX assessments and 417 chart audits. Furthermore, the 26 PGY-1 residents underwent monthly assessments for three months, yielding 78 assessment results to trace their progress during the early stage of their medical careers. In the first-year IM clerkship, the SLS rate was 36.9%. Among these SLSs, the majority (86.1%) were rated at level 3, followed by levels 4 and 2, accounting for 8.3% and 5.6%, respectively. The SLS rate in the second-year IM clerkship was 17.4%, significantly lower than in the first-year IM clerkship (p < 0.0001). As in the first-year IM clerkship, level 3 was the most commonly rated, accounting for 88.2%; the remaining SLSs fell at levels 2, 3.5, and 4, representing 2.9%, 2.9%, and 5.9%, respectively. The SLS rate in the PGY-1 training was 6.4%, significantly lower than in the first-year clerkship (p < 0.0001), and all of these SLSs were rated at level 4. Attending physicians most frequently rated level 3 across the seven Milestones subcompetencies during the clerkship. In the first-year IM clerkship, approximately 66.2% of assessment results were rated at level 3, with only 17.0% rated below level 3. Similarly, around 66.0% of the second-year IM clerkship assessment results for the seven subcompetencies were rated at level 3, while 17.4% were rated below level 3. One first-year clerkship student received six level-1 ratings across the seven Milestones subcompetencies, the PC-4 subcompetency being the exception; upon follow-up, this student improved and progressed in all seven subcompetencies during the second-year IM clerkship. In the second-year IM clerkship, only a single level-1 rating was observed, for the PC-2 subcompetency of another medical student. The mean mini-CEX scores for each year of the clerkship are presented in Table . In the first-year clerkship, the mean scores for the seven categories ranged from 4.3 to 4.5, with no significant differences. In the second-year clerkship, the mean scores ranged from 4.4 to 4.7; a significant difference was observed between the categories of medical interviewing and informed decision-making/counseling, with scores of 4.7 ± 0.7 and 4.4 ± 0.6, respectively (p = 0.016). As shown in Table , the PC-2 subcompetency received the lowest rating among the seven subcompetencies in the first-year IM clerkship, with a mean level of 2.81 ± 0.64. Except for the MK-2 subcompetency, there were significant differences between the PC-2 subcompetency and the other five subcompetencies (p < 0.000001). In the second-year IM clerkship, PC-2 remained the lowest-rated subcompetency, with a mean level of 2.65 ± 0.65, while the SBP-4 subcompetency had the highest rating among the seven subcompetencies, with a mean of 3.20 ± 0.40. The PC-2 and SBP-4 subcompetencies differed significantly from the other subcompetencies (p < 0.000001). When comparing the levels of the seven subcompetencies between the first- and second-year IM clerkships, we found that the rated levels did not change significantly except for the PC-2 and SBP-4 subcompetencies (Table ): the PC-2 subcompetency was rated significantly lower in the second-year IM clerkship than in the first (p = 0.015), whereas the SBP-4 subcompetency was rated significantly higher in the second-year IM clerkship than in the first (p = 0.017).
Among the 26 PGY-1 residents, the Milestones assessment results during their IM clerkship were similar to those of the other students, with the PC-2 subcompetency again having the lowest mean level (2.91 ± 0.64 and 2.74 ± 0.69 in the first and second years, respectively). During the PGY-1 training, a significant improvement in all seven subcompetencies of these 26 PGY-1 residents was noted. The Cohen's d values calculated between the first-year clerkship students and PGY-1 residents and between the second-year clerkship students and PGY-1 residents indicated large effect sizes (Table ). Attending physicians rated PGY-1 residents higher than medical students across all seven subcompetencies. The majority of the ratings for PGY-1 residents fell at levels 3.0 (45.4%), 3.5 (11.5%), and 4.0 (36.2%). However, the PC-2 subcompetency still had a lower mean level (3.29 ± 0.67) than the other subcompetencies (Table ). In Fig. , the overlaid learning curves depicted a consistent trend of increasing competency from the clerkship to PGY-1 training. Notably, for all seven subcompetencies, the variation in competency levels between individuals either remained constant or narrowed; the exception was the PC-2 subcompetency curve, where competency levels became more diverse and one PGY-1 resident exhibited regression. Each medical student was assigned approximately six patients in the first-year IM clerkship, and there was no significant change in the number of patients cared for by second-year IM clerkship students (6.4 ± 2.2 vs. 6.3 ± 1.8, p > 0.05). Most patients assigned to students in the first-year IM clerkship were hospitalized for scheduled events (56.5%). In contrast, only 7.3% of patients assigned to second-year IM clerkship students were hospitalized for scheduled events, while 92.7% were admitted for acute illness. Irrespective of the reason for hospitalization, patients cared for by second-year IM clerkship students tended to have longer inpatient hospital stays than those managed by first-year IM clerkship students: the mean inpatient hospital stay was 5.6 days in the first-year IM clerkship and 12.2 days in the second year (p < 0.0001). As a result, clinical handovers at the end of the specialty rotation were encountered more frequently in the second-year IM clerkship, with 10.2% of patients experiencing a care transfer from one student to another, compared with only 0.2% of patients cared for by first-year IM clerkship students. All 65 students completed the questionnaires, a 100% response rate for the two-year IM clerkship. The mean satisfaction scores for the IM clerkship in the first and second years were both above 4, and the difference was not statistically significant (p > 0.05). Satisfaction with the Milestones assessment increased significantly in the second-year IM clerkship: the mean scores for the usefulness of the Milestones as feedback for learning and for the consistency between Milestones assessment results and self-assessment results were considerably higher in the second-year clerkship than in the first (4.1 ± 0.9 vs. 3.6 ± 0.8, p < 0.0001, and 4.2 ± 0.8 vs. 3.5 ± 0.7, p < 0.0001, respectively). By selecting specific subcompetencies of the ACGME IM Milestones to assess the daily clinical activities of medical students, we successfully applied the Milestones as a formative assessment for the IM clerkship.
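Overlaid learning curves of the kind reported in the figure can be constructed with a few lines of matplotlib; in the sketch below, the ratings matrix and its layout are toy assumptions, not the study data.

```python
# Sketch: overlaying individual learning curves for one subcompetency across
# the three assessment points (year-1 clerkship, year-2 clerkship, PGY-1).
# The ratings below are toy values for illustration only.
import matplotlib.pyplot as plt
import numpy as np

stages = ["Clerkship Y1", "Clerkship Y2", "PGY-1"]
# rows = learners followed into PGY-1; columns = final rating at each stage
pc2 = np.array([[3.0, 2.5, 3.5],
                [2.5, 3.0, 3.0],
                [3.0, 3.0, 4.0]])

for learner in pc2:                              # one faint line per learner
    plt.plot(stages, learner, color="grey", alpha=0.4, linewidth=1)
plt.plot(stages, pc2.mean(axis=0), color="black", linewidth=2, label="mean")
plt.ylabel("Milestones level (PC-2)")
plt.ylim(1, 5)
plt.legend()
plt.show()
```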
As anticipated, our study demonstrated that the subcompetency levels of medical students were generally around level 3, indicating that they were progressing and improving their performance as defined by the ACGME IM Milestones 1.0 version. The assessment results also disclosed weaknesses in the medical students' performance. Additionally, our study revealed that the Milestones assessment effectively illustrated the trajectory of competence development from medical student to PGY-1 resident. By incorporating the Reporter, Interpreter, Manager, Educator (RIME) model and the Core Entrustable Professional Activities (EPAs) framework, 13 core EPAs for preparing students to enter residency have been introduced into UME in the United States. Analysis of the results of EPA assessments identifies three clusters of EPAs: those that align well with existing curricula; those that provide limited opportunities for practice because they occur infrequently; and those that still need to be included or developed in current curricula. This result emphasizes the importance of aligning workplace-based assessment contents with the curriculum for meaningful evaluation. For our IM clerkship, we selected relevant subcompetencies from the ACGME IM Milestones to assess the daily activities of medical students, such as gathering information for defining the problem (PC-1), management planning (PC-2), bedside skills (PC-4), clinical knowledge (MK-1), and diagnostic knowledge (MK-2). Additionally, we aimed to foster team collaboration and professional behaviors by incorporating patient transition (SBP-4) and professional and respectful interaction (PROF-1) as learning objectives, recognizing that assessment drives learning. The RIME model designates gathering a history and performing a physical examination, documenting a medical record, providing an oral presentation, prioritizing a differential diagnosis, interpreting common diagnostic and screening tests, and recognizing an urgent or emergency patient as EPAs for reporters and interpreters. Our choice of daily activities for assessing medical students aligns with this arrangement and with our strategy of selecting observable subcompetencies in this study. As part of the National Accreditation System, Milestones ratings are expected to vary by subcompetency, on the assumption that performance in each subcompetency is assessed independently; the SLS rate is therefore considered an indicator of assessment quality. Our study found a decreasing SLS rate from the clerkship to PGY-1 training, with medical students predominantly rated at level 3 and PGY-1 residents more frequently rated at level 4. These discrepancies in rated levels may be explained by clinical teachers' preconceived notions of the overall competence of medical students and PGY-1 residents, or by the halo effect, whereby observed competence levels are extrapolated to less observed areas. Another plausible explanation might be that teachers were unfamiliar with using Milestones assessments. To explore this further, we collected and analyzed data on the SLS rates of the same group of attending physicians assessing PGY-1 residents in the academic years 2019, 2020, and 2021 (data not shown in the results); the SLS rates were 6.7%, 8.9%, and 9.9%, respectively. As a result, it is less plausible to attribute the decline in the SLS rate to teachers gradually becoming more acquainted with Milestones assessments.
We considered that, as students transitioned from the clerkship to PGY training, their duration of IM practice increased. Owing to the increased frequency of clinical activities, clinical teachers were able to observe performance more closely, resulting in improved quality of assessments and a reduced SLS rate. In Taiwan, the interpretation of each rating on the mini-CEX scale differs from the original version: a rating of 4 signifies performance that meets the standards expected of clerks, a rating of 5 indicates performance that meets the standards for interns, and a rating of 6 represents performance that meets the standards for residents (or PGY-1 residents). Our mini-CEX assessment results revealed that the ratings for medical students mostly ranged from 4 to 5. In parallel, our Milestones assessment results indicated that students' subcompetency levels were primarily at level 3, signifying that they were progressing and improving their performance. When attending physicians assessed students using the mini-CEX, they evaluated performance by comparing it with that of the students' peers or by relying on their perceptions of how students should perform; in the Milestones assessment, by contrast, attending physicians assessed students' performance by selecting the specific behaviors described at each Milestones level. Although based on different criteria, these two assessments led to similar results, indicating the viability of our approach of selecting observable subcompetencies from the ACGME IM Milestones for a formative assessment. When comparing the assessment results of the seven subcompetencies, it was evident that the PC-2 subcompetency, which involves applying knowledge learned in the classroom to make diagnostic and therapeutic plans, consistently lagged behind the other subcompetencies, from medical students through PGY-1 residents. Similarly, previous studies have reported that many junior doctors and medical students feel ill-prepared when developing care plans. This finding suggests the importance of dealing with real-life situations and of on-the-job learning to acquire implicit knowledge. Our report showed that the SBP-4 subcompetency received significantly higher ratings in the second-year clerkship than the other subcompetencies, which may be attributed to the longer hospital stays of patients and the greater frequency of handovers required. As mentioned by Hauer et al., context is a crucial factor influencing ad hoc entrustment decisions. When designing an EPA assessment, it is essential to specify the "specifications and limitations" of the task to define the scope of the context in which the EPA is assessed. The Milestones assessment lacks descriptions of the context at the time of evaluation, which should be considered when interpreting Milestones assessment results. Across the two-year IM clerkship, the characteristics of the patients assigned to the students shifted from shorter hospital stays and stable conditions to longer stays and acute illnesses. This change in context may explain the stagnation or regression in competency levels. Unlike the patients cared for by first-year clerkship students, who already had preliminary treatment plans, the patients encountered by second-year clerkship students needed multiple tests and complex treatment options due to acute conditions, resulting in a decline in the PC-2 subcompetency (management planning). Assignment of responsibility helps the development of competency.
Social cognitive theory suggests that learners should be allowed to observe and model responsible behavior. In addition, constructivist theory indicates that the assignment of responsibility is crucial for learners to develop a deep understanding of concepts and skills. Whereas medical students functioned as frontline care providers under close supervision, PGY-1 residents, having obtained their medical licenses, were entrusted with making the majority of primary care decisions independently. Our tracing of the IM Milestones assessments showed significant improvements in all seven subcompetencies for the same individuals as PGY-1 residents compared with their IM clerkship learning period. Overlaid learning curves illustrate the full variation in the learning trajectories of a group of learners within a specific learning domain, and instructors can use learning curve information to allocate educational resources to individuals needing support or intervention. Creating a learning curve requires fine-grained data collection. Despite utilizing the Milestones as a formative assessment to evaluate learners within short rotation periods, our overlaid learning curves still revealed a divergent trajectory in the PC-2 subcompetency. These findings require longer observation to validate the use of such assessment results in constructing learning curves; nevertheless, our results may provide learners and instructors with opportunities for self-directed learning and education management. The increased transparency of performance expectations in the Milestones offers a comprehensive and structured approach to feedback. Through our Chinese 1.0 version of the IM Milestones assessments, which used everyday spoken Chinese, our students could directly compare the behavior descriptions selected by attending physicians with the higher-level expectations displayed on the e-portfolio system. This helped them understand areas needing improvement and facilitated discussions with their supervising attending physicians on how to achieve the desired behaviors at an advanced level. Our results showed that the number of Milestones level-1 ratings decreased from the first-year clerkship to the second-year clerkship. Consistent with the satisfaction reported by medical students at other schools implementing Milestones, our end-of-rotation survey revealed that as students became more acquainted with the IM Milestones, there was a notable increase in satisfaction with the feedback received from the Milestones assessment results and with the alignment of Milestones assessment results with self-assessment results. Because we wanted students to gain experience in diverse subspecialties during the IM clerkship, our curriculum design allowed them to rotate every two weeks. Assessing student performance in short rotations presents challenges, and a practical approach, as we adopted, involves selecting subcompetencies closely linked to the clinical activities that supervising attending physicians can observe daily. Accordingly, we chose subcompetencies primarily derived from the patient care and medical knowledge competencies for our assessment contents. This may focus excessively on these two competencies, and more competencies would need to be assessed to fully implement CBME. Instead of using the Milestones as a summative assessment, we conducted Milestones assessments every two weeks and employed them as a formative assessment, similar to ad hoc EPAs. Their value as an information source for the clinical competency committee must be further validated.
However, considering that the 13 core EPAs in UME are not covered by all UME curricula, our approach of using seven rather than all 22 subcompetencies of the ACGME IM Milestones to assess the competencies of medical students is worth continuing. Our findings represent a single-hospital experience. Medical education systems vary significantly from country to country, and factors such as rotation duration, the preceptor-to-student ratio, and the number of patients cared for by the students may also differ. Therefore, when applying our assessment strategy, it is essential to adapt and validate it according to the unique conditions and specific requirements of each context. Our study demonstrated that selecting specific subcompetencies from the ACGME IM Milestones as a formative assessment for medical students is feasible. In addition to providing feedback, these Milestones can also disclose the competence levels of medical students and their developmental trajectories. Implementing the ACGME IM Milestones in the clerkship will improve the UME curriculum and align the blueprint for competency development from UME to GME.
Effect of a patient education video and prehabilitation on the quality of preoperative person-centred coordinated care experience: protocol for a randomised controlled trial
cd8ef2ff-40cd-4cf7-b1ec-71c2ba19c6a7
9528577
Patient Education as Topic[mh]
Multimodal prehabilitation is an emerging field within the Perioperative Medicine specialty. It includes individualised structured exercise, nutrition counselling and supplementation, and psychological support through standardised multimedia patient education. The goal of multimodal prehabilitation is to optimise the patient's health status in the 4–8 weeks before surgery to withstand surgical stress. Major surgery is associated with a 40% reduction in physiological reserve. Many 'high risk' surgical patients have low physiological reserves because they are older, malnourished or frail with multiple comorbidities. These patients also have several modifiable lifestyle factors, such as physical inactivity, obesity, smoking, hazardous alcohol drinking and poor nutrition. When these risk factors are combined, their association with the risk of postoperative complications is even stronger. The interval between diagnosis and hospital admission is an ideal opportunity for promoting behavioural risk modifications with long-term health benefits that go beyond surgery itself, offering an ideal 'teachable moment'. Thus, multimodal prehabilitation provides a unique opportunity to optimise the patient's physiological reserve to withstand the surgical stress response. In one study, most patients (83%) were unfamiliar with the concept of prehabilitation but were interested in participating in such a programme after it was explained. The primary motivation (62%) for patient participation in prehabilitation programmes was to be physically prepared for surgery, and most patients (81%) felt supported by the multidisciplinary healthcare team. Our systematic review of seven randomised controlled trials (RCTs; 726 cardiac surgical patients) showed that physical prehabilitation may improve postoperative functional capacity and slightly shorten the length of hospital stay (mean difference: −0.66 days, 95% CI −1.29 to −0.03; I² = 45%; low-certainty evidence). However, none of these studies examined the level of patient-centred coordinated care experience associated with multimodal prehabilitation. Our systematic review (34 trials, 3742 surgical patients) on patient education formats for reducing perioperative anxiety showed that multimedia formats increased knowledge more than text formats, which in turn increased knowledge more than verbal formats. As a component of a cardiac surgical prehabilitation programme, our multifaceted patient education programme (a video and an intensive care unit tour for patients and their family members) was associated with higher overall patient and family satisfaction scores and lower patient anxiety scores. Study objectives and hypotheses: The primary objective of this RCT is to evaluate the effect of prehabilitation (patient education video and multimodal prehabilitation) on the preoperative patient-centred coordinated care experience. The secondary objective is to assess the effect of prehabilitation on preoperative anxiety and depression levels, quality of recovery, and days alive and at home within 30 days after surgery (DAH30). The primary hypothesis is that prehabilitation (patient education video and multimodal prehabilitation) is associated with a better patient-centred coordinated care experience than standard care. The secondary hypothesis is that prehabilitation is associated with lower preoperative anxiety and depression levels, higher quality of recovery and a higher number of days alive and at home within 30 days after surgery.
Significance of the present study: Despite previous studies focusing on the effect of prehabilitation education, no local 'prehabilitation videos' are available for patients currently receiving physical and nutritional prehabilitation before elective surgery. Prehabilitation programmes are not widely used in Hong Kong, and patient education is usually not standardised across different surgical patients. Given that multimodal prehabilitation is a complex intervention requiring a high level of coordination between anaesthetists, surgeons, nurses, physiotherapists and dieticians with patients, measurement of the quality of patient-centred coordinated care is essential for quality improvement in Perioperative Medicine. Conceptually, person-centred (patient-centred) coordinated care is care and support that has been guided by, and organised effectively around, the needs and preferences of individuals. The five domains of person-centred coordinated care are (1) information and communication processes, (2) care planning, (3) transitions (continuity of care), (4) goals and outcomes, and (5) decision-making. The Methods cover the following: study design; study setting and population; eligibility; blinding; interventions; outcome measures; other variables in data collection; sample size; statistical methods; monitoring and data management; patient and public involvement; and ethics and dissemination. Before obtaining written informed consent, the purpose of the study, the procedures, the risks and benefits of participation and the time commitment involved will be explained to eligible patients by study research staff. The same study research staff will obtain the patient's written informed consent to participate at the outpatient preadmission clinics. Patients allocated to the intervention group will be reimbursed for the number of prehabilitation sessions attended to encourage high compliance with the programme. Patients may withdraw from the study without prejudice at any time. Data will be kept confidential in password-protected files and computers, in secure offices of the Department of Anaesthesia and Intensive Care, with access limited to study research staff. Only group data will be published in a peer-reviewed journal. Approval for the project (protocol version 2.0, 21 September 2021) was obtained from The Joint Chinese University of Hong Kong-New Territories East Cluster Clinical Research Ethics Committee (CREC Ref. No. 2021.518-T). Any protocol modifications will be communicated to the local research ethics committee and the clinical trials registry in a timely manner. The study will adhere to local laws, the Declaration of Helsinki and institutional policies. The study design is a single-centre, two-group, parallel, superiority, single-blinded randomised controlled trial. Patients will be randomised to receive either preoperative patient education comprising a video and a prehabilitation programme in addition to standard care (intervention) or standard care alone (control). Block randomisation with 1:1 allocation will be carried out according to a computer-generated sequence prepared by one of the authors (AL), who is not involved in screening, patient recruitment, clinical care or data collection, using 2019 Power Analysis and Sample Size (PASS) software (NCSS, LLC, Kaysville, Utah, USA). Sequentially numbered, opaque, sealed envelopes will be used to conceal the sequence until the interventions are assigned at an outpatient preoperative clinic.
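As an illustration of the allocation procedure just described, the following Python sketch generates a 1:1 block-randomised sequence; the block size of 4 and the seed are assumptions for illustration (the trial itself uses PASS to generate its sequence).

```python
# Sketch: generating a 1:1 block-randomised allocation sequence, analogous
# to the computer-generated sequence described above. Block size and seed
# are illustrative assumptions; the trial uses PASS for the real sequence.
import random

def blocked_sequence(n_total: int, block_size: int = 4, seed: int = 2021):
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_total:
        # Each block contains equal numbers of both arms, then is shuffled.
        block = (["intervention"] * (block_size // 2)
                 + ["control"] * (block_size // 2))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_total]

allocations = blocked_sequence(100)
# Each allocation would then be placed in a sequentially numbered, opaque,
# sealed envelope to conceal the sequence until assignment at the clinic.
print(allocations[:8])
```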
The study has been designed with reference to the CONsolidated Standards Of Reporting Trials (CONSORT) statement and is reported according to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statement. An overview of the study design is provided in . The study will be conducted at the Prince of Wales Hospital in Hong Kong, an 1807-bed teaching hospital where approximately 500 adults undergo major to ultramajor elective surgical procedures per month. Patients meeting the inclusion criteria will be recruited.

Inclusion criteria:
• Adults (>18 years old) undergoing major to ultramajor elective cardiac surgery (coronary artery bypass graft (CABG) ± valve, or valve-only surgery).
• Adults (≥50 years) undergoing major colorectal, hepatobiliary-pancreatic or urology surgery.
• Primary language is either English or Cantonese.
• Prefrail to moderately frail patients with a Clinical Frailty Scale (CFS) score of 4–6 at the time of being accepted for surgery at the outpatient surgical/nurse clinic.
• Patients with an estimated surgical waiting list time of ≥4 weeks.

Exclusion criteria:
• Contraindications to prehabilitation, such as cognitive deficits precluding compliance with study procedures, physical limitations that would preclude prehabilitation, and inability to attend prehabilitation sessions regularly, as in patients who are severely frail (CFS 7–9).

To minimise measurement bias, the study research personnel collecting the outcome measures will be blinded to the treatment allocation performed by another member of the research staff. Owing to the nature of the intervention and the requirements of informed consent, trial participants will not be blinded to the treatment allocation.

Intervention arm (video and prehabilitation plus standard care): Patients randomly allocated to the intervention group will receive the same standard care provided in the control group. They will also view a 10 min patient education video about prehabilitation before receiving physical prehabilitation with a registered physiotherapist. All participants undergoing major elective surgery will also receive nutritional assessment and counselling with a registered dietician. The prehabilitation will be conducted in the 4–8 weeks before elective surgery following existing prehabilitation protocols. The video will describe the concept and benefits of prehabilitation, the flow of current prehabilitation exercise programmes and basic nutritional information. The video will be in Cantonese, the predominant language used in Hong Kong, with subtitles in English. The 10 min video covers: an introduction to prehabilitation (the aims and benefits of prehabilitation, and general 'generic' complications and conditions (eg, malnutrition) after surgery); exercise in prehabilitation (the aims and benefits of exercise in prehabilitation; tests of physical fitness, eg, the 6 min walk test; the structure, contents and methods of prehabilitation; safety measures during training; and the importance of home exercise); and diet in prehabilitation (the importance of a healthy diet, the components of a healthy diet, and strategies for eating well). The physical prehabilitation (1–3 times/week) comprises the following components: warm-up activities (5–10 min); aerobic exercise in the form of walking/running, stepping, arm cycling and leg cycling (training intensity between 40% and 80% of oxygen uptake reserve for 20–30 min);
resistance training for major muscle groups of the upper and lower limbs; cool-down activities (5–10 min); education on breathing techniques and daily activities; and reinforcement of advice on nutrition, smoking cessation and positive psychological support.

Control arm (standard care): Patients in the control group will receive the standard preoperative consultations by surgeons and anaesthesiologists. Unstructured information about lifestyle modifications patients can undertake at home, such as exercise and enhanced nutrition, will be given to patients and family members at the discretion of healthcare staff in the usual manner, often on an ad hoc basis. All patients will receive standardised surgical processes and perioperative care under existing protocols. Anaesthesia techniques, postoperative pain management, early postoperative mobilisation and physiotherapy, and postoperative nutrition will follow existing Enhanced Recovery After Surgery protocols where appropriate.

Primary outcome: The quality of the preoperative healthcare experience from the patient's perspective will be assessed using the Person-Centred Coordinated Care Experience Questionnaire (P3CEQ) in both the intervention and control groups. The P3CEQ is a valid and reliable measure of patient-centred coordinated care in primary healthcare services in the UK. The English P3CEQ is a 10-item questionnaire covering the two domains of person-centredness and care coordination, with a total score ranging from 0 to 30, where a higher score represents a better experience of person-centred care. One optional question about the involvement of a family member/carer is not included in the final scoring system, as the item exceeded the acceptable missing-response threshold (>15%); however, as Confucian family values are important in medical decision-making in Chinese culture, we will include this question in our scoring system. The English P3CEQ has been translated into Hong Kong Chinese for psychometric validation in another study (unpublished). The Hong Kong Chinese version will be administered on the day before surgery on hospital admission, the common timepoint shared between the control and intervention groups.

Secondary outcomes: The change in anxiety and depression levels will be measured using the Hong Kong Chinese version of the Hospital Anxiety and Depression Scale (HADS) questionnaire, a valid and reliable tool with seven questions relating to anxiety and seven questions relating to depression. The anxiety and depression subscales each range from 0 to 21, with higher scores indicating greater severity of disorder. Patients will be asked to complete the HADS at the time of randomisation, and the blinded outcome assessor will ask patients to complete the HADS again on the day before surgery on hospital admission. The Chinese version of the 15-item Quality of Recovery (QoR-15) questionnaire will be used on postoperative day 3. The QoR-15 includes items measuring pain, physical comfort, physical independence, psychological support and emotional state. The QoR-15 score ranges from 0 to 150 and takes about 3 min to complete. Its validity (convergent, construct and discriminant), reliability (internal consistency, split-half and test-retest), responsiveness, acceptability and feasibility have been well established. A poor symptom state (poor recovery) after surgery has been defined as a QoR-15 score of <118.
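The outcome scores described above can be tallied mechanically. The sketch below encodes only the published score ranges and the QoR-15 cut-off quoted in the text; the item responses and function names are hypothetical.

```python
# Sketch: tallying the outcome measures described above. Item values are
# hypothetical; only the score ranges and the QoR-15 cut-off come from the
# text. Function names are illustrative, not from any published library.

def hads_subscales(anxiety_items, depression_items):
    """HADS: two subscales of 7 items each, items scored 0-3 (0-21 per subscale)."""
    assert len(anxiety_items) == len(depression_items) == 7
    return sum(anxiety_items), sum(depression_items)

def qor15_poor_recovery(items):
    """QoR-15: 15 items, total score 0-150; <118 defined as poor recovery."""
    assert len(items) == 15
    total = sum(items)
    return total, total < 118

anxiety, depression = hads_subscales([1, 2, 1, 0, 2, 1, 1],
                                     [0, 1, 1, 0, 2, 0, 1])
total, poor = qor15_poor_recovery([8] * 15)      # toy responses
print(anxiety, depression, total, poor)
```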
Depending on patient’s postoperative status, QoR-15 assessment may be deferred if patient is unwell or unavailable when outcome assessor collects the data. QoR-15 assessment will be conducted at a later date after obtaining patient’s agreement. The exact date of actual QoR-15 assessment will be recorded by the blinded outcome assessor. 30 ) The DAH 30 is a patient-centred, generic outcome measure that will be used to measure the patient’s overall recovery profile at 30 days after surgery. DAH 30 is a composite measure that incorporates the details on postoperative hospital length of stay, discharge to rehabilitation centre or nursing home, hospital readmissions and postoperative deaths. Half a day difference is considered clinically meaningful. We will extract data from the electronic patient medical record to estimate the DAH 30 . Baseline demographic characteristics (age, sex, education level and living at home with family member status) will be recorded at the time of randomisation. From the patient’s medical record, we will collect the following data: prehabilitation compliance rate with various elements of prehabilitation and number of sessions attended in the intervention group, CFS at time of randomisation and before elective surgery, American Society of Anesthesiologists Physical Status Classification, surgical and anaesthetic details, duration of intensive care unit admission, severity of illness using (APACHE II) in critically ill patients requiring postoperative care, predicted mortality risk in cardiac surgical patients (logistic EuroScore), duration of postoperative stay, hospital readmission, hospital discharge destination and vital status (dead/alive) at 30 days after surgery. Group sample sizes of 45 (intervention) and 45 (control) will achieve 80% power to reject the null hypothesis of zero effect size when the population effect size is 0.60 (medium to large effect size) and the significance level (alpha) is 0.050 using a two-sided two-sample equal-variance t-test. To allow for 10% loss to follow-up, we will recruit 50 patients in each arm; total sample of 100. Missing data will be checked and imputed using the most common category value for categorical variables or median for continuous variables if there is <10% missing data. Otherwise, multiple imputation techniques will be used. Shapiro-Wilk’s test will be used to check data for normality. Appropriate independent Student’s t-test or Mann-Whitney U test will be used appropriately to compared group differences for P3CEQ, QoR-15 and DAH 30 . The mean difference in HADS scores between groups over time (interaction group*time) will be assessed using the generalised estimating equation with a Gaussian distribution, identify-link function, exchangeable correlation with robust standard errors. Both intention-to-treat and per-protocol analyses will be performed. The two-sided level of significance will be set at p<0.05. SPSS V.27.0 (IBM, Armonk, NY) and Stata V.17.0 (StataCorp, College Station, TX) will be used to performed data analyses. Study data will be collected and managed using REDCap electronic data capture tools hosted at The Chinese University of Hong Kong; (2) audit trails for tracking data manipulation and export procedures; (3) automated export procedures for seamless data downloads to common statistical packages and (4) procedures for data integration and interoperability with external sources. No interim analysis has been planned. There will be no formal data monitoring committee. 
However, study progress and any unanticipated serious adverse events will be reported as part of the annual renewal application for local research ethics committee approval. The anonymised data set will be made available after publication of the completed study, following its deposition in The Chinese University Research Data Repository ( https://researchdata.cuhk.edu.hk/ ). Patients and the public were not involved in the development of the research question or the design of the study, nor did they contribute to the editing of this document for readability or accuracy. Study participants will receive a one-page plain-language summary of the results on completion of the study as part of the knowledge translation approach. With Hong Kong's ageing population, the demand for prehabilitation before complex high-risk surgical procedures is expected to increase. Our development of a prehabilitation video was based on our previous positive experience with a multifaceted preoperative patient education programme and on recent findings from qualitative studies of patients' and caregivers' perspectives on the important elements of prehabilitation. Videos filmed in the real environment, with clear explanations of the prehabilitation and expected postoperative recovery processes, were common priorities identified in both studies. Prehabilitation could improve patient satisfaction through enhanced and continuous engagement with, and support from, healthcare providers during the presurgical period. As far as we are aware, no studies have measured the quality of patient-centred coordinated care associated with prehabilitation programmes. The results of this two-group, parallel, superiority, single-blinded RCT will enable us to quantify the incremental level of preoperative patient-centred coordinated care achieved with prehabilitation over standard care in adults undergoing a range of major to ultramajor elective surgery. If the results favour prehabilitation, the video can be distributed to other public hospitals in Hong Kong with prehabilitation programmes for wider dissemination of patient education. However, a limitation of the study is that it may not be generalisable to surgical specialities outside our inclusion criteria or to settings with vastly different structured multimodal prehabilitation programmes outside Hong Kong. As multimodal prehabilitation is a complex intervention, the exact contribution (%) of the patient education video, exercise prehabilitation and nutritional prehabilitation to the overall effect on the preoperative person-centred coordinated care experience may be difficult to estimate with the proposed sample size of 100 participants. Nonetheless, the findings will be presented at scientific meetings, in a peer-reviewed journal and to study participants to address the paucity of studies on the preoperative patient-centred coordinated care experience. Trial status: Patient recruitment will start in mid-2023 after the Chinese version of the P3CEQ tool has undergone sufficient psychometric validation in another study we are currently conducting. We expect patient recruitment and the 1-month follow-up to be completed by the end of 2024.
Prognostic Relevance of Type 4a Myocardial Infarction and Periprocedural Myocardial Injury in Patients With Non–ST-Segment–Elevation Myocardial Infarction
247da7d0-aba2-41a6-ac40-8fb25adb061f
11913249
Surgery[mh]
This study is the first to investigate the incidence and prognostic impact of periprocedural myocardial injury (PMI) with and without type 4a myocardial infarction after non–ST-segment–elevation myocardial infarction (NSTEMI) in patients undergoing percutaneous coronary intervention (PCI). A considerable number of patients with NSTEMI developed PMI with or without type 4a myocardial infarction after PCI. These events were associated with a significantly increased risk of 1-year adverse outcomes and occurred more frequently than in patients with chronic coronary syndromes. A post-PCI change in troponin I >40%, along with an absolute postprocedural value of ≥5 times the 99th percentile upper reference limit, was identified as the optimal threshold for defining prognostically relevant PMI. These findings suggest that PMI with or without type 4a myocardial infarction significantly affects outcomes in patients with NSTEMI undergoing PCI. The newly identified post-PCI troponin I change threshold >40% for diagnosing a prognostically relevant PMI might aid risk stratification and management of patients with NSTEMI. The suggested criterion for defining prognostically relevant PMI, a >40% increase within 3 to 6 hours post-PCI combined with an absolute postprocedural value exceeding 5 times the 99th percentile upper reference limit, could serve as a valuable clinical end point in future research on the management and outcomes of patients with NSTEMI. Study Design and Population Measurement of cTn levels Definitions of PMI and Type 4a MI Follow-Up and End Points Statistical Analysis The present study is a prespecified subanalysis of the observational prospective registry AMIPE (Acute Myocardial Infarction, Prognostic and Therapeutic Evaluation; NCT03883711), evaluating the outcomes of patients admitted with acute MI to S. Orsola-Malpighi and Maggiore Hospitals of the Bologna metropolitan area. In the present analysis, we included consecutive patients with NSTEMI undergoing PCI between January 1, 2017, and April 30, 2022. The diagnosis of NSTEMI was based on the Fourth UDMI, and patients were managed according to current guidelines. From hospital admission, all patients underwent serial measurements of cardiac troponin I (cTnI). As required for the diagnosis of periprocedural MI or myocardial injury, patients were eligible if pre-PCI cTnI levels were stable (variation ≤20%) or falling. Exclusion criteria for the present study were unavailability of serial cTnI measurements, detection of rising pre-PCI cTnI levels (unstable, variation >20%), incomplete data at the 1-year follow-up, and lack of informed consent. All the results were internally validated in a cohort of consecutive patients with NSTEMI undergoing PCI enrolled at the same 2 centers between May 1, 2022, and April 30, 2023. The protocol was approved by the institutional review board (registration No. 600/2018/Oss/AOUBo). The present study was conducted according to the Declaration of Helsinki; all patients were informed about their participation in the study and provided informed consent for the anonymous publication of scientific data. Details of data collection are provided in the Supplemental Methods. The datasets used in this study are available from the corresponding author upon reasonable request. From January 1, 2017, to September 4, 2018, conventional cTnI levels were measured by chemiluminescent immunoassay for antigen detection (Beckman Coulter Access AccuTnI+3 assay), with a 99th percentile URL of 40 ng/L for both men and women.
From September 5, 2018, onward, high-sensitivity cTnI (hs-cTnI) was measured with the Access hsTnI assay (Beckman Coulter), which has a URL of 19.8 ng/L for men and 11.6 ng/L for women. Before PCI, cTnI measurements were performed at hospital admission (0 hours), every 3 hours until the cTnI peak was reached, and within 1 hour before coronary angiography (baseline cTnI). After PCI, ≥3 measurements of cTnI were obtained: at the end of PCI and after 3 and 6 hours. In case of an increase in post-PCI cTnI or if clinically indicated (eg, new ischemic symptoms or electrocardiographic changes), repeated measurements were obtained every 3 hours to assess peak post-PCI levels within the first 48 hours after PCI. Preprocedural and postprocedural cTnI levels were used for the current analysis. Post-PCI ΔcTnI (Δ%) was calculated as [(post-PCI peak cTnI – baseline cTnI)/baseline cTnI]×100. Type 4a MI and PMI were adjudicated by 2 independent experts (P.P. and A.S.) using all clinical and instrumental information collected during the index hospitalization. According to the Fourth UDMI, in patients with elevated baseline cTnI who had stable (variation ≤20%) or falling cTnI levels, the post-PCI cTnI increase >20% with an absolute postprocedural value of ≥5 times the 99th percentile URL was used for the definition of PMI, taking into account the differences between conventional and high-sensitivity troponin assays, as well as sex-specific differences for hs-cTnI. Type 4a MI was diagnosed in the presence of PMI plus 1 of the following elements: (1) new ischemic ECG changes, (2) development of new pathological Q waves, (3) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality in a pattern consistent with an ischemic origin, or (4) angiographic findings consistent with a procedural flow-limiting complication such as coronary dissection, loss of a side branch, slow flow, thrombus, or distal embolization. Per study protocol, all patients of the final study population underwent a standard 12-lead ECG at the time of first medical contact, on arrival at the cardiac intensive care unit, before PCI, within 1 hour after PCI (on return to the cardiac intensive care unit), and every morning until discharge. All patients also underwent at least one 2-dimensional transthoracic echocardiographic examination at the time of NSTEMI diagnosis and at least one within 48 hours after PCI. Furthermore, if clinically indicated, patients underwent additional ECGs and echocardiography. Further details are provided in the Supplemental Methods. Based on the above definitions, patients were divided into 3 subgroups: (1) PMI with type 4a MI, (2) PMI without type 4a MI, and (3) no PMI. Patients were followed up after discharge through outpatient visits or telephone contacts using a standard questionnaire. The primary end point of the study was 1-year all-cause mortality. Cardiovascular deaths were defined as all deaths except those in which the exclusive primary underlying cause was noncardiovascular. The secondary end point was a composite of major adverse cardiovascular events (MACEs) at 1 year, including all-cause mortality, nonfatal reinfarction, urgent revascularization, nonfatal ischemic stroke, and hospitalization for heart failure. Patients were followed up until the first event for the calculation of MACE rates. The secondary end point definitions are reported in the Supplemental Methods.
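The adjudication rule described above lends itself to a compact decision function. The following is an illustrative sketch of the Fourth-UDMI-style criteria as summarized in this section, not code from the study; the function names and example values are invented:

def delta_ctni_pct(baseline: float, post_pci_peak: float) -> float:
    # Post-PCI change: [(post-PCI peak cTnI - baseline cTnI) / baseline cTnI] x 100
    return (post_pci_peak - baseline) / baseline * 100.0

def adjudicate(baseline: float, post_pci_peak: float, url: float,
               new_ischemia: bool) -> str:
    # url: assay 99th percentile upper reference limit (40 ng/L for the
    # conventional assay; 19.8 ng/L for men and 11.6 ng/L for women with hs-cTnI).
    # new_ischemia: any of new ischemic ECG changes, new pathological Q waves,
    # imaging evidence of new loss of viable myocardium, or a procedural
    # flow-limiting angiographic complication.
    pmi = (delta_ctni_pct(baseline, post_pci_peak) > 20.0
           and post_pci_peak >= 5.0 * url)
    if pmi and new_ischemia:
        return "PMI with type 4a MI"
    return "PMI without type 4a MI" if pmi else "no PMI"

# Example: stable baseline 100 ng/L, post-PCI peak 150 ng/L (+50%), male hs-cTnI URL.
print(adjudicate(100.0, 150.0, url=19.8, new_ischemia=False))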
Continuous variables were summarized as mean±SD or median and interquartile range according to the normality of the frequency distribution; categorical variables were summarized as absolute and percentage frequencies. The normality of the frequency distribution was assessed with the Shapiro-Wilk test. Event-free survival was estimated with Kaplan-Meier curves and compared between groups with the log-rank test. Unadjusted and adjusted hazard ratios (HRs) for 1-year mortality and MACEs were calculated with Cox proportional hazards models. Age and peak pre-PCI cTnI levels, the most important known predictors of outcome in patients with NSTEMI, were used for adjustment. Age was considered a continuous variable, whereas peak pre-PCI cTnI concentration was log-transformed for normalization and expressed as a multiple of the URL. Other variables for adjustment (ie, confounders) were identified as the demographic and clinical characteristics associated with both outcome and exposure (the categorical variable no PMI/PMI without type 4a MI/PMI with type 4a MI). As potential confounders, we selected the main known predictors of outcome in patients with NSTEMI, specifically sex, diabetes, chronic obstructive pulmonary disease, complex PCI (the definition of complex PCI is provided in the Supplemental Methods ), baseline creatinine, GRACE (Global Registry of Acute Coronary Events) score (coded as <140 or ≥140), left ventricular ejection fraction, complete revascularization (coded yes/no), prior MI, prior stroke, and peripheral artery disease. The identified confounding factors associated with both outcome and exposure were complete revascularization and complex PCI. Receiver-operating characteristic curve analysis was performed to determine which measure—pre-PCI peak cTnI levels, post-PCI ΔcTnI, or post-PCI peak cTnI levels—was most accurate in predicting 1-year mortality for both assays, with the areas under the curves being compared using the DeLong test. The optimal cutoff for post-PCI ΔcTnI, balancing sensitivity and specificity, was identified using the maximum Youden index (sensitivity+specificity−1). In addition, a sensitivity analysis was conducted on patients undergoing hs-cTnI measurements and on those with an absolute postprocedural cTnI value ≥5 times the 99th percentile URL. Univariable and multivariable logistic regression analyses were used to identify the baseline clinical and angiographic variables independently associated with PMI and type 4a MI. Variables with P<0.1 in univariable models were included in multivariable models. All analyses were replicated in the internal validation cohort to test the robustness of the results obtained. Statistical analyses were performed with SPSS Statistics version 28.0.1.1 (IBM) and Stata version 17 (StataCorp). The significance level was set at P<0.05.
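To illustrate the threshold-finding and modelling steps just described, the following Python sketch runs on synthetic data, with sklearn and lifelines as stand-ins for the SPSS/Stata routines the study actually used:

import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "delta_ctni": rng.gamma(2.0, 20.0, n),     # toy post-PCI ΔcTnI values (%)
    "age": rng.normal(70, 10, n),
    "log_peak_ctni": rng.normal(2.0, 1.0, n),  # log multiple of the URL
    "time_days": rng.integers(30, 366, n),
    "death": rng.integers(0, 2, n),            # 1-year all-cause mortality (toy)
})

# 1) Youden-optimal ΔcTnI threshold against 1-year mortality.
fpr, tpr, thresholds = roc_curve(df["death"], df["delta_ctni"])
cutoff = thresholds[np.argmax(tpr - fpr)]      # maximizes sensitivity + specificity - 1

# 2) Cox model for the dichotomized exposure, adjusted for age and
#    log-transformed peak cTnI, mirroring the adjustment set described above.
df["pmi_redefined"] = (df["delta_ctni"] > cutoff).astype(int)
cph = CoxPHFitter()
cph.fit(df[["time_days", "death", "pmi_redefined", "age", "log_peak_ctni"]],
        duration_col="time_days", event_col="death")
cph.print_summary()                            # adjusted HR for the redefined exposure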
As shown in the study flowchart ( Figure S1 ), 1581 patients admitted for NSTEMI undergoing PCI and included in the AMIPE registry were potentially eligible for the present study. Among those, 133 patients were excluded because of elevated and unstable cTnI levels (variation >20%) at the time of PCI; 25 patients because of unavailability of serial cTnI measurements; and 11 patients because of incomplete 1-year follow-up. The final sample consisted of 1412 patients with NSTEMI with stable or falling cTnI levels at baseline, all of whom had serial cTnI measurements, ECGs, and echocardiograms performed at the time points specified in the Methods section and complete 1-year follow-up data. Baseline, angiographic, and procedural characteristics of patients with stable or falling pre-PCI cTnI levels compared with patients with unstable pre-PCI cTnI levels or no serial cTnI measurements are provided in Tables S1 and S2 . Incidence of Periprocedural Ischemic Events According to Current Definitions Prognostic Relevance of Periprocedural Ischemic Events According to Current Definitions Defining the Optimal Prognostic Threshold for PMI Internal Validation Cohort Sensitivity Analysis in Patients Undergoing hs-cTnI Testing Predictors of Periprocedural Ischemic Events After NSTEMI Variables associated with the occurrence of PMI and type 4a MI (as defined by the Fourth UDMI) in the primary cohort study are comprehensively detailed in Table S18 . Independent predictors of PMI included creatinine at admission (aOR, 1.20 [95% CI, 1.06–1.35]; P =0.003), coronary bifurcation PCI (aOR, 2.62 [95% CI, 1.96–3.51]; P <0.001), and total stent length ≥60 mm (aOR, 1.93 [95% CI, 1.39–2.69]; P <0.001). Of these variables, only coronary bifurcation PCI (aOR, 3.71 [95% CI, 2.70–5.10]; P <0.001) and total stent length ≥60 mm (aOR, 1.87 [95% CI, 1.30–2.68]; P <0.001) remained independent predictors of type 4a MI, along with age (aOR, 1.02 [95% CI, 1.01–1.04]; P =0.006). Finally, the procedural risks shown to be independent predictors of PMI defined according to the Fourth UDMI were also independently associated with PMI redefined as a post-PCI ΔcTnI >40% together with an absolute postprocedural value of ≥5 times the 99th percentile URL ( Table S19 ). According to the Fourth UDMI, PMI occurred in 37.4% of patients (n=524), of whom 240 (17% of the overall cohort) met the criteria for type 4a MI. The remaining 62.6% of patients (n=884) did not experience any periprocedural ischemic events. Tables and show baseline clinical, angiographic, and procedural characteristics and discharge details of the 3 subgroups. Periprocedural ischemic events were assessed with hs-cTnI in most patients (n=755 patients, 53.5%); the remaining were evaluated with conventional cTnI (n=657 patients, 46.5%). The occurrence rate of PMI with and without type 4a MI did not differ between the periods of conventional cTnI testing and hs-cTnI testing ( Table S3 ). Among the 240 patients who developed PMI with type 4a MI: (1) 78.3% (n=188) exhibited new ischemic electrocardiographic changes or Q waves; (2) 27.9% (n=67) showed echocardiographic evidence of new loss of viable myocardium or new regional wall motion abnormality consistent with an ischemic origin, of whom 24 patients experienced a decrease in left ventricular ejection fraction ≥10% between the pre-PCI and post-PCI echocardiography; and (3) 79.6% (n=191) had angiographic findings indicating procedural flow-limiting complications. Fifty-two patients (21.7%) had evidence of periprocedural myocardial ischemia at ECG, echocardiography, and invasive coronary angiography. The angiographic findings associated with type 4a MI are presented in Table S4 .
Distributions of cTnI concentrations during hospitalization, expressed as a multiple of the URL on a log scale, were presented by violin plots stratified according to the occurrence of periprocedural events (Figure ), showing no significant differences in cTnI levels among the 3 subgroups at any time before PCI. Further details on pre-PCI and post-PCI cTnI values in the 3 subgroups are shown in Tables S5 and S6 , stratified according to the specific type of cTnI measured. The overall incidence of the primary and secondary end points was 7.2% and 15%, respectively. Patients who experienced PMI with type 4a MI had higher rates of all-cause mortality and MACEs at the 1-year follow-up compared with patients with PMI without type 4a MI criteria and those without periprocedural ischemic events (Table ). Figure shows Kaplan-Meier curves for the primary and secondary end points at the 1-year follow-up, illustrating worse outcomes for patients with PMI compared with those without PMI and for patients with adjudicated type 4a MI compared to those without adjudicated type 4a MI among patients with PMI. In the multivariable Cox regression model, patients with PMI had a 3-fold increased risk of all-cause mortality (HR, 3.21 [95% CI, 2.14–4.82], P <0.001; adjusted HR [aHR], 2.68 [95% CI, 1.77–4.04], P <0.001) and an elevated risk of MACEs (HR, 1.54 [95% CI, 1.30–1.83], P <0.001; aHR, 1.39 [95% CI, 1.17–1.65], P <0.001) at the 1-year follow-up compared with those without periprocedural ischemic events. Furthermore, among patients with PMI, those meeting the criteria for type 4a MI had a 3-fold increased risk of all-cause mortality (HR, 3.23 [95% CI, 1.90–5.51], P <0.001; aHR, 2.94 [95% CI, 1.71–5.06], P <0.001) and a 2-fold increased risk of MACEs (HR, 2.48 [95% CI, 1.70–3.63], P <0.001; aHR, 2.25 [95% CI, 1.53–3.30], P <0.001) at the 1-year follow-up compared with those with PMI but without type 4a MI criteria (Table ). In the receiver-operating characteristic curve analysis, post-PCI ΔcTnI predicted the primary outcome more accurately than either pre-PCI or post-PCI peak cTnI levels ( Figure S2 ). In the overall cohort, the optimal threshold for post-PCI ΔcTnI to predict 1-year all-cause mortality was >40%. This cutoff point provided an optimal balance between sensitivity (54.9%) and specificity (80.4%; Table S7 ). All patients were subsequently reclassified with these thresholds. The occurrence rates of post-PCI ΔcTnI levels of ≤20%, >20% but ≤40%, and >40% were similar between the periods of conventional cTnI testing and hs-cTnI testing ( Table S8 ). Among 528 patients in the PMI group defined according to Fourth UDMI criteria, 312 (22.1% of the overall population) had a post-PCI ΔcTnI >40%, and the remaining 216 patients had a post-PCI ΔcTnI >20% but ≤40%. Figure illustrates the cumulative incidence curves for the primary and secondary end point at the 1-year follow-up, stratified by post-PCI ΔcTnI levels of ≤20% (no PMI subgroup), >20% but ≤40%, and >40%. The plots demonstrate worse outcomes for patients with a post-PCI ΔcTnI >40% for both end points. Similarly, Cox regression analyses showed significantly higher risks for 1-year all-cause mortality (HR, 4.68 [95% CI, 3.07–7.12], P <0.001; aHR, 3.94 [95% CI, 2.57–6.04], P <0.001) and MACEs (HR, 3.02 [95% CI, 2.27–4.03], P <0.001; aHR, 2.79 [95% CI, 2.08–3.73], P <0.001) for patients with this newly defined PMI compared with those without periprocedural ischemic events (post-PCI ΔcTnI ≤20%). 
Conversely, patients with post-PCI ΔcTnI levels between 20% and 40% did not show an increased risk of 1-year all-cause mortality (HR, 1.25 [95% CI, 0.64–2.46], P =0.510; aHR, 1.04 [95% CI, 0.53–2.04], P =0.920) or MACEs (HR, 1.07 [95% CI, 0.69–1.67], P =0.748; aHR, 0.94 [95% CI, 0.60–1.46], P =0.782) compared with patients with post-PCI ΔcTnI levels ≤20% (Table ). These findings remained consistent in the sensitivity analysis on the subset of patients with an absolute postprocedural cTnI concentration ≥5 times the 99th percentile URL (81.2%, n=1146), as shown in Figure S3 and Table S9 . Among these patients, the proportion with an adjudicated diagnosis of type 4a MI was 0% (0/622) in those with a post-PCI ΔcTnI ≤20%, 26.3% (56/213) in those with a post-PCI ΔcTnI >20% but ≤40%, and 59.2% (184/311) in those with a post-PCI ΔcTnI >40%. The internal validation cohort consisted of 305 patients with NSTEMI undergoing PCI with stable or decreasing levels of hs-cTnI at the time of PCI. Baseline, angiographic, and procedural characteristics of this population are detailed in Tables S10 and S11 . The incidence of periprocedural ischemic complications was comparable to that observed in the primary cohort study, with PMI (defined according to the Fourth UDMI criteria) occurring in 37% of patients (n=113), of whom 48 (15.7% of the validation cohort) had adjudicated type 4a MI, whereas the remaining 63% (n=192) did not experience any periprocedural ischemic events. The overall incidence of primary and secondary end points was 7.5% and 15.1%, respectively ( Table S12 ). Figure S4 shows the cumulative incidence curves for the primary and secondary end points at the 1-year follow-up in this patient cohort. Similar to the primary cohort study, patients with PMI had a higher risk of 1-year all-cause mortality and MACEs compared with those without in the validation cohort. In addition, among patients with PMI, those with adjudicated type 4a MI demonstrated an elevated risk for both outcomes compared with those with PMI but without type 4a MI criteria ( Table S13 ). After stratification of patients by post-PCI ΔcTnI levels of ≤20%, >20% but ≤40%, and >40%, only a post-PCI Δhs-cTnI >40% was independently associated with a higher risk of all-cause mortality (HR, 4.76 [95% CI, 1.97–11.48], P <0.001; aHR, 4.15 [95% CI, 1.67–10.30], P <0.001) and MACEs (HR, 3.30 [95% CI, 1.79–6.09], P <0.001; aHR, 2.78 [95% CI, 1.48–5.21], P =0.002) at the 1-year follow-up ( Table S14 ; Figure S5 ). Among the subgroup of patients with an absolute hs-cTnI concentration ≥5 times the URL (84.6%, n=258), the proportion of patients with an adjudicated diagnosis of type 4a MI was 0% (0/147) in those with a post-PCI ΔcTnI ≤20%, 4.9% (2/41) in those with a post-PCI ΔcTnI >20% but ≤40%, and 65.7% (46/70) in those with a post-PCI ΔcTnI >40%. In the sensitivity analysis of patients undergoing hs-cTnI measurements, PMI was associated with significantly higher risks of 1-year all-cause mortality and MACEs. Patients with PMI fulfilling the type 4a MI criteria had an even higher risk compared with those with PMI but without type 4a MI ( Figure S6 ; Table S15 ). Receiver-operating characteristic curve analysis confirmed that a post-PCI Δhs-cTnI >40% was the optimal threshold for predicting 1-year all-cause mortality, demonstrating comparable sensitivity and specificity ( Table S16 ). 
After stratification of patients by post-PCI Δhs-cTnI values, Cox regression models confirmed that only a post-PCI Δhs-cTnI >40% was significantly associated with worse outcomes ( Figure S7 ; Table S17 ). The main findings of this first study investigating the incidence and prognostic impact of PMI with and without type 4a MI after NSTEMI are as follows. (1) A significant proportion of NSTEMI patients experienced PMI, as this complication was observed in approximately 4 out of 10 subjects, with no differences between the conventional cTnI and hs-cTnI test periods. (2) PMI was associated with an increased risk of 1-year all-cause mortality and MACEs. (3) Type 4a MI was associated with a significantly increased risk of 1-year all-cause mortality and MACEs, even compared with PMI without type 4a MI criteria. (4) A post-PCI ΔcTnI >20% but ≤40% was associated with 1-year outcomes similar to those observed in patients with post-PCI ΔcTnI ≤20%. Conversely, a post-PCI ΔcTnI increase of >40%, combined with an absolute postprocedural value of ≥5 times the 99th percentile URL, was identified as the best threshold for diagnosing a prognostically relevant PMI. (5) Patients exceeding this threshold had a significant 4-fold and 3-fold increased risk of 1-year all-cause mortality and MACEs, respectively. (6) All of these findings were further validated in an internal cohort and in the sensitivity analysis performed on patients undergoing hs-cTnI measurements. Incidence of Periprocedural Ischemic Events in NSTEMI Prognostic Impact of Type 4a MI Optimizing PMI Diagnosis Study Limitations Conclusions PMI with and PMI without type 4a MI, as defined by the Fourth UDMI criteria, are 2 common PCI-related complications in patients with NSTEMI, with type 4a MI being associated with worse outcomes. We identified a post-PCI change in troponin threshold >40%, combined with an absolute postprocedural value of ≥5 times the 99th percentile URL, as the optimal criterion for diagnosing prognostically relevant PMI. This threshold was strongly linked to increased mortality and MACE, independent of the presence of new myocardial ischemia. These findings may improve risk stratification, guide more tailored management strategies, and ultimately enhance outcomes for patients with NSTEMI undergoing PCI. Our results showed that patients with NSTEMI are frequently susceptible to PMI with and without type 4a MI during the periprocedural period, underscoring the need for a more tailored management approach. Patients with NSTEMI have a remarkably higher incidence of type 4a MI than patients with chronic coronary syndromes, suggesting that specific factors related to the acute setting and subsequent revascularization procedure contribute to the increased risk. In fact, the acute phase of NSTEMI is associated with an active inflammatory response, plaque instability, and endothelial dysfunction, all of which favor the development of a prothrombotic state. During PCI, disruption of a vulnerable plaque and subsequent mechanical injury to the coronary endothelium may trigger angiographically evident thrombus formation, as observed in our population, leading to type 4a MI. A previous study by Lee et al revealed that patients with NSTEMI who developed post-PCI cTn elevations exhibited coronary lesions with greater lipid length and a higher prevalence of thin-cap fibroatheroma analyzed with optical coherence tomography.
During PCI, these high-risk lesions were more frequently associated with distal embolization, contributing to microvascular obstruction and myocardial injury. In addition, the acute inflammatory environment in patients with NSTEMI characterized by the release of inflammatory mediators such as cytokines and chemokines may contribute to an increased susceptibility to myocardial injury by promoting endothelial dysfunction, increasing microvascular resistance, and impairing myocardial perfusion, increasing the risk of no reflow during PCI. Overall, these factors might contribute to myocardial injury, even in the absence of overt procedural complications (eg, no reflow, coronary dissection, occlusion of a side branch). Furthermore, the frequent anatomical complexity of coronary lesions in patients with NSTEMI (eg, a higher prevalence of multivessel or left main disease, calcific lesions requiring plaque modification techniques, bifurcations, diffuse disease) might contribute to a higher risk of procedural complications. Indeed, the high burden and severity of atherosclerotic disease might make PCI technically challenging and increase the likelihood of distal embolization, coronary dissection, or side-branch occlusion, potentially resulting in PMI with or without type 4a MI. Finally, patients with NSTEMI are typically characterized by a complex clinical profile with multiple comorbidities (age, diabetes, hypertension, chronic kidney disease, etc) and may therefore be more susceptible to periprocedural ischemic events after PCI. In the present study, patients with NSTEMI experiencing PMI with type 4a MI had a significantly increased risk of adverse clinical outcomes, including mortality at the 1-year follow-up, also compared with patients with PMI without type 4a MI criteria. Type 4a MI represents a critical myocardial insult, triggering an enhanced inflammatory response and potentially exacerbating underlying disease processes. These factors might ultimately lead to progressive myocardial dysfunction, heart failure, and an increased risk of mortality over time. The key feature of type 4a MI is the coexistence of elements indicating the onset/exacerbation of myocardial ischemia related to the PCI procedure. Indeed, our data confirm that the detection of new ischemic changes on ECG or echocardiography and angiographic evidence of the development of a flow-limiting complication provides additional prognostic information compared with the increase in post-PCI cTn levels alone. The strong prognostic impact of type 4a MI after NSTEMI underscores the need for tailored effective risk stratification and targeted interventions to improve outcomes in this specific patient population. However, it remains uncertain whether mortality and MACEs after type 4a MI in patients with NSTEMI are a consequence of the complexity of the procedure, the vulnerability of the patients, or the extent of iatrogenic cardiac injury. In our study, a post-PCI ΔcTnI increase of >40% was identified as the optimal threshold for diagnosing prognostically relevant PMI. Notably, redefining PMI using this threshold, even when combined with an absolute postprocedural cTnI value ≥5 times the 99th percentile URL, was associated with a 4-fold increase in the risk of 1-year all-cause mortality, regardless of the presence of new myocardial ischemia (electrocardiographic, imaging, and angiographic criteria). 
In contrast, a post-PCI ΔcTnI between 20% and 40% was associated with the same 1-year all-cause mortality and MACE risk as in patients without PMI (post-PCI ΔcTnI ≤20%). This finding could be attributed to a higher rate of false-positive identifications, including patients with mild cTn elevations but without clinically significant myocardial injury. Consequently, including patients with minor cTn elevations, up to a post-PCI change of 40%, in the PMI category could attenuate the association between myocardial injury and adverse outcomes, ultimately reducing its prognostic significance. Furthermore, patients with NSTEMI often have comorbidities such as chronic kidney disease, heart failure, or advanced age, which could account for increases in post-PCI cTn values without necessarily indicating new myocardial injury. However, this variability did not affect outcomes; the HRs for adverse events at 1 year were adjusted for the major known predictors of outcome in patients with NSTEMI. The higher threshold identified in our study compared with that proposed by the Fourth UDMI aligns with recent evidence in the context of chronic coronary syndromes. As a result, the current cTn cutoffs for the diagnosis of PMI, which are based on expert consensus rather than robust scientific evidence, may be overly sensitive and might warrant revision in light of currently available data. This threshold likely represents a critical point of myocardial damage after PCI that cannot be justified solely by comorbidities or reduced cTn clearance. A substantial rise in cTn levels after PCI, even in the absence of clear evidence of new myocardial ischemia, carries considerable prognostic weight. Nonetheless, it remains unclear whether, and what kind of, management changes might be necessary after the proper identification of these events. In this regard, it could be hypothesized that establishing a ΔcTn threshold associated with adverse outcomes could aid in identifying patients who would benefit the most from changes in management/treatment strategy (eg, prolonged monitoring, intensification of secondary prevention therapy). Furthermore, a post-PCI ΔcTnI >40% for diagnosing PMI could be crucial in identifying patients who may require imaging to confirm or rule out a diagnosis of type 4a MI. Further studies with an appropriate randomized design are needed to address these questions. Lastly, the proposed definition of prognostically relevant PMI, a >40% increase 3 to 6 hours after PCI, along with an absolute postprocedural value of ≥5 times the 99th percentile URL, could serve as a valuable clinical end point in future studies exploring management strategies and outcomes in patients with NSTEMI. Our findings should be interpreted considering some limitations. First, 1-year follow-up may be considered relatively short to assess clinical outcomes; a longer follow-up might provide further insights into long-term outcomes and confirm the observed associations. Moreover, 2 different cTnI assays (conventional cTnI and hs-cTnI) were used during the study period. However, because each patient was consistently assessed with the same assay over time, the analysis of ΔcTnI variation does not appear to be influenced by the type of cTnI assay used. Indeed, in our study, the post-PCI ΔcTnI values of patients analyzed with conventional cTnI are similar to those of patients analyzed with hs-cTnI.
Furthermore, there was no change in the frequency of PMI with or without type 4a MI between the periods of conventional cTnI assay and hs-cTnI assay use ( Tables S3 and S8 ). Finally, our results were validated only in an additional cohort of patients from the same sites; external validation is needed to confirm our findings. Sources of Funding: None. Disclosures: None. Supplemental Material: Supplemental Methods, Tables S1–S19, Figures S1–S7, References.
The Altered Lipid Composition and Key Lipid Metabolic Enzymes in Thiacloprid-Resistant
a1f33324-6911-4923-a18e-838e0c4e76fe
11594901
Biochemistry[mh]
The green peach aphid (GPA), Myzus persicae (Sulzer), is one of the most economically important agricultural pests. It can feed on more than 400 plant species belonging to 50 families, including potato, tobacco, and eggplant. The GPA is notorious because it can act as a vector and transmit 115 different plant viruses, accounting for 67.7% of aphid-vectored viruses. Insecticides are the foundation of the management of the GPA in China and other countries. Neonicotinoids, which target the nicotinic acetylcholine receptors (nAChRs) of insects, mainly include acetamiprid, clothianidin, dinotefuran, imidacloprid, thiacloprid, and thiamethoxam, and they have become the most widely used insecticides for the control of M. persicae. However, resistance to neonicotinoids in M. persicae has been reported worldwide and is a growing concern. Knowledge of the mechanisms of resistance of M. persicae to neonicotinoids can help in establishing rational control strategies to delay the development of resistance and extend the service life of neonicotinoids. For several decades, research on neonicotinoid resistance in pests has mainly focused on target-site and metabolic resistance. The overexpression of metabolic enzymes, such as P450 genes, in resistant pests plays a crucial role in the enhanced detoxification of neonicotinoids. However, the evolution of insecticide resistance in pests is a complex genetic phenomenon, and a large number of enzymes are involved. Enzymes related to lipid metabolism have been shown to be associated with insecticide resistance in pests. For instance, Anopheles arabiensis and Anopheles gambiae populations that overexpress CYP4G16 and CYP4G17 show a higher deposition of cuticular hydrocarbons, which are linked to both resistance to insecticides and improved mating success. Carboxylesterases derived from the αEsterase gene cluster, including αE7 from Lucilia cuprina and Drosophila melanogaster, have a crucial physiological function in lipid metabolism and are involved in the biodegradation of organophosphate (OP) insecticides. However, the function of some insect lipid metabolic enzymes in insecticide resistance has been overlooked. In the healthy nervous system, insect lipids, similar to vertebrate lipids, play a role in hormone synthesis and coordinate their metabolism with detoxification enzymes and antimicrobial peptides. The dysfunction of lipid metabolic enzymes disrupts normal lipid metabolism, thus causing functional disorders in insect bodies. Many studies have examined the effects of pesticides on lipids and their metabolism in non-target insects, such as Apis mellifera and D. melanogaster, and it has been found that they mainly alter the lipid constituents of cells, leading to tissue and organ function disorders. A previous study found that low doses of spinosad, a microbial insecticide targeting the α6 subunit of nAChRs, triggered lipid dysregulation in D. melanogaster, increasing lipid stores in the fat body and reducing lipid droplet numbers in the Malpighian tubules. That study also indicated that the loss of α6 from the membrane precipitated by spinosad exposure in wild-type flies leads to their death. Another study found that exposure to neonicotinoid pesticides alters the lipid composition in insects: lipid metabolite ratios significantly differed between control and imidacloprid-exposed A. mellifera. Additionally, Cook reported that high-dose clothianidin, a neonicotinoid, reduced the lipid content in bees.
Furthermore, after exposure to neonicotinoids, a reduction in lipid peroxidation (LPO) was observed in many non-target insects, such as Chironomus riparius and A. mellifera . As the second largest component in insects, lipids comprise a chemically diverse group of fatty acids, glycolipids, glycerophospholipids, sphingolipids, sterols, and phenols . These lipids are indispensable for maintaining the normal physiological functions of insects. Lipid metabolism involves many enzymes which play a crucial role in maintaining the balance of lipids in insects. For example, fatty acids (FAs) are a type of aliphatic hydrocarbon chain with a carboxyl group at one end. Their biosynthesis is a complex multi-step reaction process starting from acetyl-CoA ( ), mainly carried out by enzymes such as fatty acid synthase (FAS), the elongase of very-long-chain fatty acids (ELO), fatty acid desaturase (FAD), and thioesterase . Additionally, other enzymes are involved in fatty acid hydrolysis, such as fatty acyl-CoA reductase (FAR), which converts fatty acid to fatty aldehyde. Furthermore, phospholipases, such as phospholipase A2 (PLA2), can catalyze phospholipids (PLs) to release arachidonic acid (AA), a fatty acid which is usually used for the biosynthesis of eicosanoids in insects. Previous studies have mostly focused on the short-term effects of insecticide exposure on insect lipid composition and metabolism. However, insecticide resistance in insects evolves under the selective pressure of insecticides. During this process, insects adapt to the constantly changing environment, leading to the emergence of new traits. Apart from studies on specific lipid composition and metabolism, other aspects of lipid composition and metabolism related to insecticide resistance have been rarely investigated. The emergence of new research methods, such as metabolomics and transcriptomics, has helped in clarifying lipid metabolism associated with insect resistance to insecticides. In this study, we obtained M. persicae with high thiacloprid resistance through continuous screening with thiacloprid in the laboratory. Metabolomics and transcriptomics were applied to determine the differences in lipid composition and related metabolic enzymes between thiacloprid-resistant and -susceptible M. persicae . We also conducted functional validation to provide a reference for understanding the role of lipid metabolism in M. persicae resistance to neonicotinoid insecticides and for identifying new targets for insecticides. 2.1. Comparison of Lipid Metabolites Between THG-R and FFJ-S Populations 2.2. Characteristics and Expression Patterns of MpFASs in THG-R and FFJ-S Populations 2.3. Characteristics and Expression Patterns of MpELOs in THG-R and FFJ-S Populations 2.4. Characteristics and Expression Patterns of MpFADs in THG-R and FFJ-S Populations 2.5. Characteristics and Expression Patterns of MpTEs in THG-R and FFJ-S Populations 2.6. Characteristics and Expression Patterns of MpPLs in THG-R and FFJ-S Populations 2.7. Induction of Expression of Significantly Overexpressed Genes in THG-R via Neonicotinoids 2.8. The Effect of MpTHEM6 Gene Knockdown on the Sensitivity of THG-R to Neonicotinoids 2.9. Effect of MpTHEM6 Gene Knockdown on Adult Longevity and Offspring Production The adult longevity of M. persicae and the fecundity per female at 21 °C were investigated after RNAi of MpTHEM6 . Compared with FFJ-S, the apterous adult aphids of THG-R had a significantly reduced lifespan and a notable decrease in nymph production. 
The THG-R apterous adults treated with dsRNA-MpTHEM6 had a significantly shorter longevity than those treated with DEPC or dsRNA-GFP ( B). Additionally, the THG-R aphids treated with dsRNA-MpTHEM6 had the lowest fecundity (26.3 offspring), significantly lower than that of aphids treated with DEPC (49.3 offspring) or dsRNA-GFP (51.5 offspring) ( D). As shown in E, the THG-R apterous adults injected with dsRNA-MpTHEM6 produced many nonviable nymphs, leading to a significant reduction in adult fecundity. In both negative and positive ion modes, metabolomics revealed a total of 148 lipid metabolites in the aphids ( ). The lipids identified in the negative ion mode were classified into five categories: fatty acids, glycerophospholipids, polyketides, prenol lipids, and sterols. Glycerophosphocholines were the predominant compounds, with 17 distinct species, followed by fatty acids and conjugates, with 16 metabolites ( ). The lipid metabolites identified in the positive ion mode were further classified into six categories: fatty acids, glycerophospholipids, polyketides, prenol lipids, sphingolipids, and steroids. Within these subclasses, “fatty acids and conjugates” represented the largest group, with 13 metabolites, closely followed by glycerophosphocholines with 12 metabolites. A quantitative analysis revealed differences in the lipid metabolites between the THG-R and FFJ-S populations ( ). Of all the lipid metabolites, 90 were upregulated and 58 were downregulated in the THG-R population ( ). The metabolites with the most significant differences (Log2Fold ≥ 0.5) are shown in . The results indicate that, compared with the lipid metabolites in FFJ-S, in the negative ion mode, the resistant population THG-R had significantly increased levels of five lipids (Log2Fold > 2), namely, arachidonic acid, (±)11-HETE (Log2Fold = 3.6, p < 0.0001), eicosapentaenoic acid (Log2Fold = 3.2, p < 0.0001), genistein, and lithocholic acid. In the positive ion mode, the resistant population THG-R had significantly increased levels of five lipids (Log2Fold > 1), namely, prostaglandin B1, hexanoylcarnitine, prostaglandin G2, PC (9:0/9:0), and PC (10:0/10:0). In the negative ion mode, the resistant population THG-R had significantly decreased levels of five lipids (Log2Fold < −0.9), namely, oxoadipic acid, sebacic acid, 8,15-diHETE, prostaglandin A1, and tetranor-PGDM. In the positive ion mode, the resistant population THG-R had significantly decreased levels of five lipids (Log2Fold < −1.5), namely, 6-keto-prostaglandin F1α, dehydroepiandrosterone, androsterone, dehydrocholic acid, and N-acetylsphingosine. Among the top 20 lipid metabolites with the greatest differences, 6 were related to prostaglandin metabolism. Those significantly upregulated in THG-R were arachidonic acid (Log2Fold = 4.49, p < 0.0001), prostaglandin B1 (Log2Fold = 2.03, p < 0.001), and prostaglandin G2 (Log2Fold = 1.45, p < 0.0001), while those significantly downregulated were prostaglandin A1 (Log2Fold = −2.03, p < 0.0001), 8,15-diHETE (Log2Fold = −3.07, p < 0.01), tetranor-PGDM (Log2Fold = −3.07, p < 0.0001), and 6-keto-prostaglandin F1α (Log2Fold = −1.57, p < 0.0001). This finding indicates a significant difference between the two groups in terms of the AA metabolic pathways associated with prostaglandins ( ). Regarding glycerophospholipids, a total of 29 compounds belonged to phosphatidylcholine (PC) and lysophosphatidylcholine (LPC), with 19 upregulated and 10 downregulated metabolites. Regarding phosphatidylethanolamine (PE) and lysophosphatidylethanolamine (LPE), 21 metabolites (i.e., 16 up- and 5 downregulated metabolites) were included ( ). The significantly upregulated PEs and LPEs in THG-R included LPE 18:0 (Log2Fold = 0.68, p < 0.0001), PE (16:1/16:1) (Log2Fold = 0.65, p = 0.031), LPE 18:1 (Log2Fold = 0.52, p < 0.01), LPE 20:0 (Log2Fold = 0.47, p < 0.0001), LPE 19:0 (Log2Fold = 0.43, p < 0.001), LPE 22:1 (Log2Fold = 0.38, p = 0.031), PE (18:1/18:2) (Log2Fold = 0.37, p = 0.074), LPE 16:0 (Log2Fold = 0.36, p < 0.01), LPE 15:0 (Log2Fold = 0.29, p = 0.028), LPE 18:2 (Log2Fold = 0.25, p < 0.0001), LPE 18:3 (Log2Fold = 0.23, p < 0.001), and LPE 16:1 (Log2Fold = 0.13, p < 0.001). In addition, a few PC and LPC metabolites were also significantly increased in THG-R, including LPC 20:4 (Log2Fold = 1.23, p < 0.0001), LPC 20:3 (Log2Fold = 1.11, p < 0.0001), PC (9:0/9:0) (Log2Fold = 1.1, p < 0.0001), PC (10:0/10:0) (Log2Fold = 1.07, p < 0.0001), PC (16:1/18:2) (Log2Fold = 0.81, p < 0.01), PC (14:0/18:2) (Log2Fold = 0.7, p < 0.001), LPC 18:0 (Log2Fold = 0.59, p < 0.0001), LPC 19:1 (Log2Fold = 0.55, p < 0.001), LPC 15:0 (Log2Fold = 0.53, p < 0.0001), and PC (8:0/8:0) (Log2Fold = 0.49, p < 0.0001). Five LPE metabolites (LPE 17:2 (Log2Fold = −0.21, p < 0.01), LPE 20:1 (Log2Fold = −0.34, p < 0.0001), LPE 14:1 (Log2Fold = −0.42, p < 0.001), LPE 19:1 (Log2Fold = −0.43, p = 0.011), PE (8:0/8:0) (Log2Fold = −1.22, p < 0.0001)) and five LPC metabolites (LPC 14:0 (Log2Fold = −0.17, p = 0.012), LPC 20:1 (Log2Fold = −0.3, p < 0.01), LPC 17:2 (Log2Fold = −0.26, p < 0.001), LPC 17:1 (Log2Fold = −0.43, p < 0.0001), and LPC 14:1 (Log2Fold = −0.45, p < 0.0001)) exhibited a significant decrease in THG-R.
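Log2Fold values such as those reported above are conventionally computed from group-mean feature intensities; a schematic Python example with invented replicate intensities (not the study's data or pipeline):

import numpy as np
from scipy import stats

thg_r = np.array([8.1, 7.9, 8.4])  # invented peak intensities, resistant replicates
ffj_s = np.array([3.6, 4.0, 3.8])  # invented peak intensities, susceptible replicates

log2fold = np.log2(thg_r.mean() / ffj_s.mean())  # >0 means higher in THG-R
t_stat, p_value = stats.ttest_ind(thg_r, ffj_s)  # two-sample comparison
print(f"Log2Fold = {log2fold:.2f}, p = {p_value:.4f}")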
A total of 11 candidate FAS genes were identified in the aphid genome. A phylogenetic tree analysis indicated that these 11 MpFAS genes belonged to four categories, with most being in Clade I, totaling 6, followed by Clades IV and III with 2 each and Clade II with only 1 ( A). Except for the gene MpFAS3, which contained 9 motifs, the other MpFASs contained 10 motifs, including the functional catalytic motif “GSVKS” (motif 4) ( A,B). An analysis of the gene domains revealed that there were 18 domains among the 11 MpFAS genes. Additionally, eight MpFAS genes contained the “PksD superfamily” and “NADB_Rossmann superfamily” domains, and seven genes contained the “PksD” domain ( A). Based on the transcriptomic data, a transcriptional expression analysis of the 11 MpFAS genes in the peach aphid was conducted, and it revealed that the expression level of MpFAS1 was the highest ( C). The fluorescence quantitative PCR results showed that among the second-instar nymphs, the expression levels of seven genes, namely, MpFAS1 , MpFAS4 , MpFAS5 , MpFAS7 , MpFAS8 , MpFAS10 , and MpFAS11 , were significantly lower in the THG-R population than in the FFJ-S population ( D). Among the adult females, the expression levels of seven genes, including MpFAS1 , MpFAS2 , and MpFAS3 , were significantly lower in the THG-R population than in the FFJ-S population ( E). We detected 16 MpELO genes in the aphid genome. A phylogenetic tree analysis indicated that these 16 MpELO genes belonged to five categories, with most being in Clade I, totaling 11, followed by Clade V with 2 and Clades II, III, and IV with only 1 each ( A). Among the 16 MpELO genes, the minimum number of motifs contained was 10, such as KXXEXXDT, HXXMYXYY, TXXQXXQ, and HXXHH (motif 1), a histidine-box motif that is conserved in all elongases ( A,B). An analysis of the gene domains revealed that all 16 MpELO genes contained only the “ELO” domain ( A).
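In motif notation such as HXXHH, X stands for any residue, so screening a candidate elongase sequence for the conserved histidine box reduces to a simple pattern scan. A toy Python sketch (the sequence is invented, not an actual MpELO translation):

import re

HISTIDINE_BOX = re.compile(r"H..HH")   # HXXHH: H, any two residues, then HH
seq = "MALNKTEELDTAVWHQLHHAGSELK"      # hypothetical protein fragment
for m in HISTIDINE_BOX.finditer(seq):
    print(m.start(), m.group())        # -> 14 HQLHH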
Using FPKM as the standard, the transcriptomic data indicated that among the 16 MpELO genes in the peach aphid, the expression levels of MpELO1 , MpELO2 , MpELO3 , and MpELO4 were all relatively high, while the expression level of MpELO11 was the lowest ( C). The transcriptomic data also showed that the expression levels of the MpELO genes in the THG-R population were lower than those in the FFJ-S population. The fluorescence quantitative PCR results revealed that among the second-instar nymphs, the expression levels of 11 genes, namely, MpELO1 , MpELO3 , MpELO5 , MpELO6 , MpELO7 , MpELO8 , MpELO9 , MpELO10 , MpELO11 , MpELO13 , and MpELO14 , were significantly lower in the THG-R population than in the FFJ-S population ( D). Among the adult females, the expression levels of 12 genes, namely, MpELO1 , MpELO2 , MpELO3 , MpELO4 , MpELO5 , MpELO7 , MpELO8 , MpELO9 , MpELO10 , MpELO11 , MpELO12 , and MpELO14 , were significantly lower in the THG-R population than in the FFJ-S population ( E). The differences in the expressions of the other MpELO genes were not significant. A total of 12 MpFAD genes were detected in the aphid genome and classified into five clades, with most being in Clade IV, totaling 4, followed by Clades I and II with 2 each and by Clades III and V with only 1 each ( A). Among the 12 MpFAD genes, the minimum number of motifs contained was eight, with MpFAD11 and MpFAD12 containing only motif 8 ( A,B). An analysis of the gene domains revealed that among the MpFAD genes, 9 contained only one domain, and 4 contained the “Delta9-FADS-like” and “OLE1” domains. Unlike the other genes, MpFAD11 and MpFAD12 shared two domains: “Cyt-b5” and “DesA superfamily” ( A). Based on the transcriptomic data, a transcriptional expression analysis of the 12 MpFAD genes in the peach aphid showed that the expression levels of MpFAD1 and MpFAD2 were the highest, while the expression level of MpFAD11 was the lowest ( C). The transcriptomic data also indicated that the expression levels of the MpFAD genes in the THG-R population were lower than those in the FFJ-S population. The fluorescence quantitative PCR results revealed that among the second-instar nymphs, the expression levels of seven genes, namely, MpFAD1 , MpFAD3 , MpFAD4 , MpFAD6 , MpFAD7 , MpFAD8 , and MpFAD10 , were significantly lower in the THG-R population than in the FFJ-S population ( D). Among the adult females, the expression levels of eight genes, namely, MpFAD2 , MpFAD3 , MpFAD4 , MpFAD6 , MpFAD7 , MpFAD8 , MpFAD9 , and MpFAD11 , were significantly lower in the THG-R population than in the FFJ-S population ( E). The differences in the expressions of the other MpFAD genes were not significant. We identified 12 thioesterase genes ( ). A phylogenetic analysis revealed that these genes belonged to seven distinct clades encompassing prominent thioesterase families, such as acyl-protein thioesterases (three members), THEM6 genes (two members), ubiquitin thioesterases (two members), and acyl-CoA thioesterases (two members) ( A). The smallest number of motifs in these thioesterase genes was found to be 20 ( B). A domain architecture analysis further revealed that ten MpTEs only possessed a single domain, which belonged to the “Abhydrolase superfamily”, “Abhydrolase_2”, “Palm thioesterase”, “4HBT_2”, “Paal_thioesterase”, and “PLN02647 superfamily” categories ( A). The transcriptomic data of M. persicae revealed the transcriptional expression profiles of the 12 thioesterase genes, indicating that MpAPT1 and MpOPT1 exhibited higher expression levels, whereas MpACOTs13a , MpPpt2 , and MpTHEM6b showed lower expression levels ( C). The transcriptomic data also demonstrated that the expression levels of MpTHEM6a and MpPpt1 were higher in the THG-R population than in the FFJ-S population, while the expression levels of MpAPT1l , MpTRABID , MpACOTs13b , and MpAPT1 were lower in the THG-R population than in the FFJ-S population.
The quantitative fluorescence PCR results further confirmed that among the second-instar nymphs and female adults, the expression level of MpTHEM6a was significantly higher in the THG-R population than in the FFJ-S population, whereas the expression levels of MpACOTs13b and MpAPT1 were significantly lower in the THG-R population than in the FFJ-S population ( D,E). In an analysis of the gene conservation regions in the aphid genome, a total of 12 phospholipase (PL) genes were identified. A phylogenetic tree analysis revealed that these 12 MpPL genes belonged to five distinct categories ( A). The PLCB2 category contained the highest number of genes, with four genes, while PLCB1 , PLCD , PLCA2 , and PLCABHD each contained two genes ( A). The smallest number of motifs in these 12 MpPL genes was found to be 15 ( B). An analysis of the gene domains showed that the 12 genes collectively possessed 12 domains, namely, “Phospholip_B”, “Phospholip_B superfamily”, “PFU”, “PUL”, “WD40”, “Phospholipase_B_like”, “DDHD”, “WWE”, “Abhydrolase superfamily”, “Patatin_and_cPLA2 superfamily”, “ANKYR”, and “YheT” ( A). Ten of the genes contained only one domain. The transcriptomic data of M. persicae revealed the transcriptional expression profiles of 12 MpPL genes, indicating that MpPLBr , MpPLB1a , MpPLA2p , MpPLA2 , MpPLDDH2 , MpPLA2B , and MpPLABHD3b exhibited higher expression levels, whereas MpPLBr2 showed a lower expression level. The transcriptomic data also demonstrated that the expression levels of the MpPLC genes were lower in the THG-R population than in the FFJ-S population ( C). The quantitative fluorescence PCR results further confirmed that the expression levels of all 12 MpPLC genes were lower in the THG-R population than in the FFJ-S population. Specifically, in the second-instar nymphs of the THG-R population, the expression levels of MpPLB1a , MpPLA2p , MpPLA2 , MpPLDDH2 , MpPLA2B , MpPLABHD3b , MpPLB2a , MpPLDDHD1 , and MpPLB1b were significantly lower than those in the FFJ-S population. Additionally, in the female adults of the THG-R population, the expression levels of MpPLBr , MpPLDDH2 , MpPLA2B , MpPLB2a , MpPLDDHD1 , MpPLB1b , MpPLABHD3a , and MpPLB2b were significantly lower than those in the FFJ-S population. The differences in the expressions of the other MpPLC genes were not significant ( D,E). We also searched for the prostaglandin H synthase (cyclooxygenase) ( COX ) genes in the M. persicae genome and detected a total of two (LOC111037985 and LOC111037852). However, the transcriptomic data indicated that the two COX genes were expressed at very low levels in the adult aphids (FPKM < 0.1), and the only prostaglandin E synthase (PGES) gene (LOC111031794) was significantly downregulated in the THG-R population. We did not detect any prostaglandin D synthase genes and did not pursue further studies on them ( ). The above results indicate that MpTHEM6a is significantly upregulated in the THG-R population. We also evaluated the induction of the expression of this gene after adult females were exposed to three neonicotinoids—thiacloprid, imidacloprid, and thiamethoxam—at the LC 50 doses ( ). The results showed that thiacloprid, imidacloprid, and thiamethoxam all significantly induced the expression of MpTHEM6a in the THG-R population. The expression of MpTHEM6a significantly increased at 2, 12, and 24 h after treatment with thiacloprid and imidacloprid and at 2, 12, and 72 h after treatment with thiamethoxam.
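Relative expression values of this kind, both the induction fold changes above and the knockdown levels reported below, are conventionally derived from qPCR Ct values with the 2^-ΔΔCt method. The study does not state its exact formula, so the following Python sketch is generic, with invented Ct values:

# 2^-ΔΔCt with a reference (housekeeping) gene; all Ct values are hypothetical.
ct_target_treated, ct_ref_treated = 24.9, 18.0  # e.g., dsRNA-MpTHEM6 group
ct_target_control, ct_ref_control = 23.2, 18.0  # e.g., dsRNA-GFP control

ddct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
print(f"relative expression = {2 ** -ddct:.2f}")  # ~0.31 of the control level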
To evaluate the functional roles of MpTHEM6 in the resistance of THG-R to neonicotinoids, the MpTHEM6 gene was knocked down by RNA interference in this population, and the toxicity of thiacloprid, imidacloprid, and thiamethoxam was evaluated after RNAi. After fourth-instar THG-R larvae were injected with dsRNA-MpTHEM6, the transcript levels of MpTHEM6 at 24 h, 48 h, and 72 h were significantly reduced by 0.31-, 0.33-, and 0.41-fold, respectively, compared with those of the dsRNA-GFP control ( A). Under the LC 50 doses of thiacloprid, imidacloprid, and thiamethoxam, the mortality rates of the THG-R aphids injected with dsRNA-MpTHEM6 were 70%, 66%, and 64%, respectively, significantly higher than those of aphids injected with DEPC (50%, 48%, and 50%) or dsRNA-GFP (45%, 48%, and 47%) ( C). The rapid development of metabolomics has laid the foundation for the rapid and accurate identification of insect metabolites. In this study, we utilized UHPLC-MS/MS non-targeted metabolomics techniques to identify lipid metabolites in adult female peach aphids. We identified a total of 148 lipid metabolites; this number is considerably higher than that found in adult fruit flies analyzed using LC-MS/MS (78 metabolites) and in the fireflies Aquatica leii and Lychnuris praetexta analyzed using UHPLC-MS/MS (53 metabolites). However, it is lower than that found using high-resolution shotgun mass spectrometry in fruit flies across 27 developmental stages and raised on four different diets (250 metabolites). This discrepancy may be due to our analysis focusing solely on one insect stage, the adult stage of female aphids. The lipids that we identified belonged to 6 major classes and 15 subclasses, which is more than the 5 major classes (prenol lipids, steroids and steroid derivatives, fatty acyls, sphingolipids, and glycerophospholipids) found in fireflies. Glycerophospholipids, including PC, LPC, PE, LPE, PS, and PI, primarily constitute the cellular membranes of all organisms and their subcellular organelles. These cell membranes are composed of phospholipid bilayers, in which the hydrophobic fatty acid chains face inward toward each other, while the hydrophilic polar head groups are positioned outward, interacting with the aqueous environment. The lipid composition of the honeybee brain is predominantly glycerophospholipids, which make up approximately 88.89% of the total lipids. Additionally, we identified a total of 84 glycerophospholipid species in M. persicae, including 28 PC species and 21 PE species, indicating a greater abundance of PC in the aphid’s body. Gao et al. identified a total of 248 glycerophospholipid metabolites in Aphis gossypii parasitized by Lysiphlebia japonica. However, in insect neural cells, such as the fruit fly’s membranes, PE is more prevalent than PC; for example, in the brain of bees, the PE content is 38.44%, while the PC content is 19.03%. Insects differ from mammals in that this ratio is reversed in mammalian cells. In the THG-R population, the majority of PE and LPE metabolites, as well as PC and LPC metabolites, exhibited significantly elevated levels compared with the FFJ-S population. As the current study did not examine alterations in the metabolite profiles of the peach aphid following treatment of the THG-R strain with neonicotinoid insecticides, it is impractical to ascertain whether this discrepancy is correlated with the enhanced resilience of M. persicae to high concentrations of neonicotinoids.
Nevertheless, when A. mellifera are exposed to sublethal doses of neonicotinoid insecticides, there is a notable increase in the concentrations of various PE and PC metabolites in their brains, such as LPE 18:1, PC (18:1/18:1), and LPE 18:0. Furthermore, the rise in brain LPE 18:0e levels after exposure to neonicotinoid treatment in bees has been implicated in the induction of intense self-grooming behaviors. In insects, the polyunsaturated fatty acid arachidonic acid (AA) is used to synthesize eicosanoids, which play several key roles in insect physiology and immunology, and its metabolic pathway is called the AA pathway. We found a significant difference between the THG-R and FFJ-S populations in terms of AA and its metabolites, such as PGs, EET, and HETE. Prostaglandins (PGs) are essential in modulating various facets of insect reproduction, encompassing oocyte development and oviposition-related behaviors, and several PGs were present at much higher levels in the THG-R population than in the FFJ-S population. Insect tissues can produce a broad variety of PGs. Destephano et al. confirmed that PGE2 production occurs in the male reproductive tract of Acheta domesticus. Using radioimmunoassays, Murtaugh and Denlinger measured the relative amounts of PGE2 and PGF2α in six distinct insect species. It was discovered that the hemocytes and fat bodies of Manduca sexta larvae can biosynthesize several PGs, such as PGA2, PGE2, PGD2, and PGF2α. A total of nine prostaglandin compounds were discovered in M. persicae, namely, 16,16-Dimethyl PGA1, 8-iso PGA2, 15-Deoxy-Δ12,14-PGA1, PGB1, PGE2, PGE1, 6-keto-prostaglandin F1α, PGG2, and PGF3α, and they are involved in the body’s natural defense mechanisms. Nonetheless, PGB1 and PGG2, which have not been documented in other insects, were found to be elevated in the THG-R population. The literature indicates that PGB1 is a metabolite of PGA1 and an inhibitor of PLA2 activity. Recently, it has been found that PGB1 remarkably increases in response to abiotic stress in some organisms. A metabolomic analysis revealed that the notable elevation in the differential metabolic markers PGB1 and AA facilitates marine shellfish larvae in acclimating to various artificial light at night (ALAN) conditions. PGB1 was also found to be highly expressed in the urine of rats treated with high inorganic arsenic (100 mg/L NaAsO 2 ). PGD2 and PGH2 were not detected in the adults of M. persicae. Tetranor-PGDM is a metabolite of PGD2, and due to its relatively stable chemical properties, it has been widely used as a biomarker for human disease diagnosis. In this study, the level of tetranor-PGDM in the THG-R population was significantly lower than that in the FFJ-S population. However, the relationship between this low level and the aphids’ resistance to neonicotinoid insecticides requires further investigation. Moreover, PGs can influence gene expression. Stanley and colleagues found that PGA1, PGA2, and PGE1 can modulate gene expression in Helicoverpa zea cells, with 15 mM of PGA1 and PGE1 considerably enhancing the expression of HSP (heat shock protein) genes. In our recent analysis comparing transcriptomes, we found that 28 out of 29 HSP genes in THG-R had reduced expression relative to those in FFJ-S (unpublished); this is potentially linked to the significantly downregulated levels of PGA1 and PGE1 in THG-R compared with FFJ-S.
PGE2, among several PGs, has been extensively researched in insects and is crucial for numerous physiological activities, including reproduction, fluid secretion, aging, and immunological responses. It has been determined to govern oviposition in S. exigua. PGE2 plays a role in various aspects of ovarian development in female insects. The mPGES2 gene in D. melanogaster has been shown to affect male fertility. PGE2 is also involved in egg formation in specific species, including Rhodnius prolixus. In this investigation, the PGE2 levels in THG-R were considerably lower than in FFJ-S. This study found that the egg-laying capacity of THG-R female adults was significantly inferior to that of FFJ-S female adults, potentially due to the reduced levels of PGE2.
 shows the main enzymes involved in lipid metabolism. We identified their genes in the aphid genome based on their structural characteristics and found 11 MpFAS genes, 16 MpELO genes, 12 MpFAD genes, 12 MpTE genes, 22 MpFAR genes, and 12 MpPL genes. Due to space limitations, we did not construct a phylogenetic tree of these genes with related genes in other insects. The number of these genes in aphids differs significantly from that in other insects. From previous reports, it is known that the number of FAS genes in other insects, such as Ae. aegypti (five FASs), D. melanogaster (five FASs), A. mellifera (two FASs), and Locusta migratoria (two FASs), is lower than that in M. persicae. The number of ELO genes in M. persicae is lower than that in D. melanogaster (20 ELOs) and Tribolium castaneum (18 ELOs) but higher than that in Tenebrio molitor (2 ELOs). The number of FAD genes in M. persicae is not significantly different from that in other insects, such as Acromyrmex echinatior (15 FADs), Acyrthosiphon pisum (13 FADs), and D. melanogaster (10 FADs). The expression levels of all MpFASs, MpELOs, MpFADs, and MpPLs in the second-instar nymphs and adults of THG-R were not significantly higher than those of the related genes in FFJ-S, which suggests that these genes may not directly participate in the resistance of M. persicae to neonicotinoids. They may nevertheless be indispensable for insect development; in insects, gene expression leading to lipid accumulation can affect growth and development. A previous study found that, compared with a dsGFP injection group, the survival rate of S. litura larvae decreased sharply after RNAi of the SlFAS1 gene. Yang et al. reported that knocking down the FAS genes LmFAS1 and LmFAS3 led to approximately 80% mortality in migratory locusts. In Tenebrio molitor, the RNAi silencing of TmELO1 led to an increase in mortality, and in D. melanogaster, RNAi of the ELO gene CG6660 also resulted in a similar lethal phenotype, indicating that ELO is indispensable for insect survival. These genes are vital for maintaining the normal growth and development of insects; knocking down several FADs in N. lugens nymphs significantly increased nymph mortality. Compared with FFJ-S, the lifespan and fertility of THG-R significantly decreased; this may be a "fitness cost", and it may be caused by the significantly reduced expression of these genes in THG-R. However, further experiments are needed to verify the specific effects of certain lipid-related genes on the biology of neonicotinoid-resistant M. persicae.
We found that the AA content in the THG-R population was much higher than in the FFJ-S population, so we examined the AA metabolic pathway in THG-R to determine the reasons for this metabolic difference ( ). Studies on the AA pathway in mammals have shown that there are three routes of AA synthesis. One is the hydrolysis of esterified AA on the inner surface of the cell membrane into its free form by phospholipase A2 (PLA2), whose involvement in AA synthesis has been widely reported. We identified 12 PL genes in M. persicae, comprising 5 MpPLA2 genes, 5 MpPLB genes, and 2 MpPLD genes. It has been reported that in bacteria, PLB genes have the function of PLA2 genes. However, compared with the FFJ-S population, the expression of these PL genes in the THG-R population was significantly downregulated or not significantly different, indicating that the rise in AA levels in the THG-R population is not caused by the overexpression of PLA2. In addition, AA can also be generated through the thioesterase-mediated hydrolysis of arachidonyl-CoA (AA-CoA), and acyl-CoA thioesterase 7 (ACOT7) is a key enzyme in humans for the hydrolysis of AA-CoA to generate AA [ , , ]. However, this pathway has not been reported in insects. We identified 13 thioesterases in the peach aphid, of which 5, similar to ACOT7, displayed acyl-CoA thioesterase activity. However, none of these thioesterases were significantly overexpressed in THG-R. Instead, we detected the overexpression of another thioesterase gene, MpTHEM6a, in THG-R. Although lacking a defined biological role, THEM6 has recently been categorized in the thioesterase superfamily because of the presence of a "HotDog" domain, an evolutionarily conserved region anticipated to exhibit thioesterase activity. The THEM6 gene has not been previously reported in insects, and its association with insecticide resistance in pests has also not been previously reported. In this study, in addition to being expressed at higher levels in the THG-R population than in the FFJ-S population, the THEM6 gene of peach aphids was also found to be induced to overexpression by thiacloprid, imidacloprid, and thiamethoxam. Recently, the human THEM6 gene has become a research hotspot, as its expression is higher in colorectal, gastric, and breast cancer tissues than in normal tissues, and it has been considered a potential biomarker for these cancers [ , , ]. The overexpression of THEM6 has also been shown to promote the growth and migration of prostate cancer. Functional studies found that knocking out THEM6 inhibits tumor growth and that high levels of THEM6 are associated with poor clinical outcomes and elevated activation of the unfolded protein response (UPR). These studies also demonstrated a significant association between high THEM6 levels and high Ki67 expression in two groups of prostate cancer patients, indicating that THEM6 is highly expressed in highly proliferative tumors. THEM6 is capable of regulating lipid metabolism, and knocking out the THEM6 gene in 22Rv1 cells leads to profound remodeling of the cellular lipidome. THEM6 depletion is associated with a significant reduction in intracellular levels of various triglycerides (TGs) and ether lipid species, including ether TGs, ether PCs, and ether PEs. In contrast, the amount of ceramides in THEM6 knockout cells increases, and, in addition to causing specific lipid changes, knocking out THEM6 also significantly affects the total amounts of TGs, ether TGs, and ceramides in 22Rv1 cells.
In this study, when we knocked down MpTHEM6a, the toxicity of the neonicotinoid insecticides significantly increased, but this did not affect the lifespan of the adults. Further research is needed to understand how MpTHEM6a increases aphid resistance to neonicotinoid insecticides by regulating lipid synthesis.
4.1. Insects
Two M. persicae populations (FFJ-S and THG-R) were used in this study. The FFJ-S population was susceptible to neonicotinoids, and the LC 50 values of this population for thiacloprid (97.5% purity; Bayer AG, Leverkusen, Germany), imidacloprid (97% purity; Bayer AG, Germany), and thiamethoxam (98% purity; Syngenta Group, Dielsdorf, Switzerland) were 1.89, 1.09, and 2.57 mg L−1, respectively. The THG-R strain was established from the FFJ-S strain via successive screening with thiacloprid for more than 50 generations in the laboratory, and the LC 50 values of this population for thiacloprid, imidacloprid, and thiamethoxam were 2270, 974, and 36.5 mg L−1, corresponding to approximately 1200-, 890-, and 14-fold resistance when compared with FFJ-S. Both green peach aphid (GPA) strains were reared on pepper seedlings, Capsicum annuum L., under controlled conditions of 19–22 °C, 60% relative humidity, and a photoperiod of 16:8 h (light–dark).
4.2. Metabolite Extraction and Analysis
Approximately 1200 apterous female adults were collected from the FFJ-S and THG-R strains. Each sample consisted of 200 aphids (six replicates), which were individually ground with liquid nitrogen, and the resulting homogenate was resuspended in pre-chilled 80% methanol using a vortex mixer. The samples were then incubated on ice for 5 min and centrifuged at 15,000× g and 4 °C for 20 min. A portion of the supernatant was diluted to a final concentration of 53% methanol using LC-MS-grade water. The diluted samples were transferred to new Eppendorf tubes and centrifuged again at 15,000× g and 4 °C for 20 min. The final supernatant was then analyzed using the LC-MS/MS system, as described by Want et al. UHPLC-MS/MS analyses were conducted using a Vanquish UHPLC system (Thermo Fisher, Lenexa, KS, USA) coupled with an Orbitrap Q Exactive HF mass spectrometer (Thermo Fisher, Lenexa, KS, USA) at Novogene Co., Ltd. in Beijing, China. The samples were injected into a Hypersil GOLD column (100 × 2.1 mm, 1.9 μm) and separated using a 12 min linear gradient at a flow rate of 0.2 mL/min. For the positive polarity mode, the mobile phase consisted of eluent A (0.1% formic acid in water) and eluent B (methanol). In the negative polarity mode, eluent A was 5 mM ammonium acetate at pH 9.0, and eluent B was methanol. The solvent gradient was programmed as follows: starting at 2% B for 1.5 min, ramping up to 85% B over 3 min, holding at 100% B for 10 min, returning to 2% B in 0.1 min, and finally maintaining at 2% B for 12 min.
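For clarity, the gradient programme above can be written as a table of (time, %B) breakpoints. The sketch below is an interpretation, not the instrument method file: the prose is ambiguous about the 85% to 100% B step and the total runtime, so those breakpoints are marked as assumptions.

```python
# UHPLC solvent gradient as a (time_min, percent_B) table, transcribed from
# the prose above. The 85 -> 100% B ramp endpoint and the re-equilibration
# window are assumptions where the reported times are ambiguous.
GRADIENT = [
    (0.0, 2),     # hold at 2% B for 1.5 min
    (1.5, 2),
    (4.5, 85),    # linear ramp to 85% B over 3 min
    (14.5, 100),  # assumed ramp/hold reaching 100% B ("for 10 min")
    (14.6, 2),    # return to 2% B in 0.1 min
    (26.6, 2),    # assumed final 2% B re-equilibration ("for 12 min")
]

def percent_b(t: float) -> float:
    """Linearly interpolate %B at time t (minutes) from the table."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return float(GRADIENT[-1][1])

print(percent_b(3.0))  # %B mid-ramp, 43.5
```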
The Q Exactive HF mass spectrometer was operated in both positive and negative polarity modes with a spray voltage of 3.5 kV, a capillary temperature of 320 °C, a sheath gas flow rate of 35 psi, an auxiliary gas flow rate of 10 L/min, an S-lens RF level of 60, and an auxiliary gas heater temperature of 350 °C. The raw data files produced by UHPLC-MS/MS were processed with Compound Discoverer 3.1 (CD3.1, Thermo Fisher) for peak alignment, peak detection, and quantification of each metabolite. The key parameters were configured as follows: a retention time tolerance of 0.2 min; an actual mass tolerance of 5 ppm; a signal intensity tolerance of 30%; a signal-to-noise ratio of 3; and a minimum intensity threshold, among others. Subsequently, peak intensities were normalized relative to the overall spectral intensity. These normalized data were then used to predict the molecular formulae by analyzing adduct ions, molecular ion peaks, and fragment ions. Following this, the peaks were matched against the mzCloud ( https://www.mzcloud.org/ , accessed on 11 May 2024), mzVault, and MassList databases to achieve precise qualitative and relative quantitative outcomes. Statistical evaluations were conducted using R (version 3.4.3) and Python (version 2.7.6) on CentOS (version 6.6). In instances where data did not exhibit a normal distribution, normalization was attempted through the application of an area normalization method.
4.3. A Preliminary Search and Identification of Key Lipid Metabolic Enzymes in the M. persicae Genome
We used a keyword search, the Hidden Markov Model (HMM), and the Basic Local Alignment Search Tool (BLAST) to search for six enzymes related to lipid metabolism in peach aphids, namely, fatty acid synthase (FAS), elongase of very-long-chain fatty acids (ELO), fatty acid desaturase (FAD), fatty acyl-CoA reductase (FAR), thioesterase (TE), and phospholipase (PL). The genes for these lipid metabolic enzymes in M. persicae are abbreviated as follows: "MpFAS" for fatty acid synthase, "MpELO" for the elongase of very-long-chain fatty acids, "MpFAD" for fatty acid desaturase, "MpFAR" for fatty acyl-CoA reductase, "MpTE" for thioesterase, and "MpPL" for phospholipase. For genes that already had assigned names in NCBI, we used those existing names. Preliminary data on these enzymes were collected using the following steps: (1) A keyword search was conducted for the above-mentioned enzymes in the Myzus persicae database ( https://bipaa.genouest.org/sp/myzus_persicae_g006/ , accessed on 8 April 2024). (2) The conserved protein domain models ( MpFAS : PF14765; MpELO : PF01151; MpFAD : PF00487; MpTE : PF13279 and PF02338; MpPL : PF12796 and PF09070) were downloaded from the Pfam library ( http://pfam.xfam.org/ , accessed on 12 April 2024), and searches were run with HMMER 3.4 for Windows ( http://hmmer.org/ , accessed on 12 April 2024). (3) The M. persicae database ( https://bipaa.genouest.org/sp/myzus_persicae_g006 , accessed on 13 April 2024) was accessed to retrieve the gene protein sequences, followed by the application of the BLASTP method and the elimination and combination of duplicated genes.
4.4. The Construction of a Phylogenetic Tree and a Protein Domain Analysis of the Lipid Metabolic Enzymes for M. persicae
The multiple-sequence alignments of the six enzyme proteins in M. persicae were analyzed via MEGA 11.0 software using the Muscle algorithm. Then, the neighbor-joining (NJ) method was employed, along with 1000 bootstraps, to construct an evolutionary tree. The remaining parameters were set to their default values [ , , ].
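Returning to the domain-based search in step (2) above, the candidate identification can be sketched programmatically. The following is a minimal, hedged example assuming a local HMMER 3 installation and placeholder file names (the study itself used the Windows build); the Pfam accessions are those listed in the text.

```python
import subprocess

# Pfam domain models per enzyme family, as listed in step (2) above.
PFAM_MODELS = {
    "MpFAS": ["PF14765"],
    "MpELO": ["PF01151"],
    "MpFAD": ["PF00487"],
    "MpTE":  ["PF13279", "PF02338"],
    "MpPL":  ["PF12796", "PF09070"],
}

PROTEOME = "myzus_persicae_proteins.fasta"  # placeholder path

for family, accessions in PFAM_MODELS.items():
    for acc in accessions:
        # Model extracted beforehand, e.g.: hmmfetch Pfam-A.hmm PF14765 > PF14765.hmm
        subprocess.run(
            ["hmmsearch", "--tblout", f"{family}_{acc}.tbl", f"{acc}.hmm", PROTEOME],
            check=True,
        )
# Candidate hits in the .tbl files would then be confirmed with BLASTP and
# deduplicated, as described in step (3).
```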
4.5. The Transcriptome Profiles of the THG-R and FFJ-S Populations
The total RNA from approximately 600 apterous adult aphids per treatment, with a cumulative total of 1800 aphids across three biological replicates, was isolated using the TRIzol reagent (Invitrogen, Carlsbad, CA, USA) in accordance with the manufacturer's specified protocol. The subsequent steps of RNA purification, cDNA synthesis via reverse transcription, library construction, and high-throughput sequencing were conducted at Shanghai Majorbio Bio-pharm Biotechnology Co., Ltd. (Shanghai, China), following the standardized procedures provided by the service provider. The libraries were sequenced on the NovaSeq X Plus platform (PE150) with the corresponding NovaSeq reagent kit. We subsequently performed a comparative analysis of the transcriptomic profiles of the selected genes across the distinct treatment conditions to elucidate differential gene expression patterns. To discern the differential expression of the six lipid metabolic enzyme gene families between the THG-R and FFJ-S populations, we quantified the expression level of each transcript using the FPKM (fragments per kilobase of transcript per million fragments mapped) metric. RSEM (RNA-Seq by Expectation Maximization) was employed for the estimation of gene abundance. A differential expression analysis was conducted using either DESeq2 or DEGseq.
4.6. Quantitative Real-Time PCR Analysis of Different Enzyme Genes in Both THG-R and FFJ-S Populations and Expression Induction of Selected Upregulated Genes in the THG-R Population via Neonicotinoid Exposure
The expression levels of the six enzyme gene families in the treated aphids were quantified using reverse transcription quantitative polymerase chain reaction (RT-qPCR) with SYBR Green Supermix (Thermo Fisher, Waltham, MA, USA) on a qTOWER 2.2 real-time PCR system (Analytik Jena, Jena, Germany). Total RNA extraction and quantification were performed as previously described, using a ScanDrop 100 spectrophotometer (Analytik Jena, Jena, Germany) in accordance with the manufacturer's instructions. RNA was diluted to a concentration of 0.8 μg/μL with diethyl pyrocarbonate (DEPC)-treated water, and 0.8 μg of RNA was reverse-transcribed in a 20 μL reaction volume using a TUREscript 1st Strand cDNA Synthesis Kit (Aidlab, Beijing, China), with the actin gene serving as an internal control (NCBI gene ID: 836110). Each RT-qPCR reaction consisted of a 20 μL mixture comprising 1 μL of sample cDNA, 1 μL of each primer at a concentration of 200 nM, 6 μL of DEPC-treated water, and 10 μL of 2× SYBR Green Supermix. The qPCR cycling conditions were as follows: initial denaturation at 95 °C for 3 min, followed by 39 cycles of denaturation at 95 °C for 10 s and annealing/extension at 58 °C for 30 s. A plate reader was employed for data analysis. A melting curve analysis was conducted from 60 °C to 95 °C. Primers for these genes were designed using Primer Express 3.0 software, based on the target gene sequences available in the NCBI database, and they are provided in . Meanwhile, the genes significantly upregulated in THG-R were selected for expression induction studies. THG-R apterous adults were transferred to pepper leaves treated with LC 50 doses of thiacloprid (2270 mg L−1), imidacloprid (974 mg L−1), and thiamethoxam (36.5 mg L−1) for 2, 12, 24, 48, and 72 h to determine the effects of these insecticides on the expression of the significantly upregulated genes. The experiment included three replicates, and each replicate contained 30 adults. The insects were frozen in liquid nitrogen and stored at −80 °C. The relative gene expression was calculated automatically using qPCRsoft 3.2 software, and relative fold changes were obtained with the 2 −ΔΔCt method.
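As an illustration of the 2 −ΔΔCt (Livak) calculation, here is a minimal sketch with hypothetical Ct values; the actin gene is the reference, as stated above.

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt (Livak) method.

    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control).
    """
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** (-ddct)

# Hypothetical Ct values for illustration only (actin as reference gene):
print(fold_change_ddct(22.1, 18.0, 24.3, 18.1))  # ~4.3-fold up-regulation
```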
4.7. RNA Interference
MpTHEM6a double-stranded RNAs (dsRNAs) were obtained using a T7 high-yield transcription kit (Invitrogen, USA) in accordance with the manufacturer's instructions. The primers utilized for dsRNA synthesis are listed in . A total of 20 ng/μL of the dsRNA targeting the desired gene was injected into apterous adults with a Nanoject III nanoliter injector (Drummond Scientific Company, Broomall, PA, USA). DEPC water and dsRNA-GFP were employed as controls. After the injection, the aphids were transferred to pepper seedlings. Three biological replicates were used in the experiment, and each included thirty aphids. RT-qPCR was used to assess the efficacy of the dsRNA in suppressing the expression of the MpTHEM6a gene after 72 h. To assess the susceptibility of M. persicae to thiacloprid, imidacloprid, and thiamethoxam following RNA interference (RNAi) targeting MpTHEM6a, we administered the LC 50 doses of these three neonicotinoids to the apterous adults 48 h post-treatment. Control groups consisted of adults treated with DEPC water and dsRNA-GFP. The mortality of M. persicae was evaluated 48 h after the neonicotinoid treatment. Furthermore, we monitored the longevity of the adults and the fertility of each female following the RNAi treatment. The injected adults were then placed on pepper seedlings to determine the duration of survival and offspring production until their deaths. Mortality assays were replicated five times, and ninety treated apterous adults were used for the analysis of adult longevity and fecundity.
Through a comparative metabolomics analysis, we found significant differences between THG-R and FFJ-S in terms of lipid metabolites, mainly phospholipids and fatty acids. The metabolites related to the AA metabolic pathway, such as AA and prostaglandin compounds, showed considerable differences. In THG-R, AA, (±)11-HETE, and prostaglandin B1 were significantly upregulated, while prostaglandin A1, tetranor-PGDM, 8,15-diHETE, and (±)11(12)-EET were significantly decreased; these metabolites could all serve as biomarkers. To further clarify the causes of these differences, we selected several major key enzymes involved in fatty acid synthesis and metabolism and used transcriptomic methods to determine the differences in the expression of these enzymes between the THG-R and FFJ-S populations. The results showed that most of the selected metabolic enzymes were not overexpressed in THG-R. However, the MpTHEM6a gene was significantly upregulated in THG-R. Its overexpression induced by neonicotinoid insecticides and the increased neonicotinoid toxicity observed in the RNAi experiments both suggest that MpTHEM6a is associated with the resistance of peach aphids to neonicotinoid insecticides.
The effect of steatosis and fibrosis on blunt force vulnerability of the liver
3d2a5fed-1d8e-47d2-ae78-28c1efccaa5b
7181547
Pathology[mh]
The liver is the most commonly injured abdominal organ in trauma. Traffic accidents account for the majority of liver injuries. The incidence of traumatic liver injury in the general population is 2.95–13.9 per 100,000. Thirteen to 16% of polytrauma patients have liver injuries. A direct frontal blunt impact usually causes injury of the left liver lobe, mostly along the falciform ligament (segments II, III, and IV), while impacts coming from lateral directions mostly affect the right lobe (segments V–VIII). Liver injuries can be caused by acceleration, deceleration, and compression/crush mechanisms. The minimal impact velocities which can lead to liver injuries are predicted to be 5–8 m/s. However, the mechanical vulnerability of tissues can show large individual differences, and these differences influence whether a blunt force results in an injury or not. The possible role of these individual differences has to be assessed in forensic situations many times. Pathological changes of the liver caused by diseases and/or dietary differences are very common. The prevalence of fatty liver (alcoholic and non-alcoholic combined) is around 45%, and it increases with age. It is also common in children and young adults, reaching 17.3% for ages 15 to 19 years. The estimated prevalence of hepatic fibrosis is around 3%. Normal (healthy) liver contains 1–4% fibrous tissue, while cirrhotic liver contains 15–35% fibrous tissue. Normal human liver is estimated to contain approximately 5.5 mg/g of collagen, while cirrhotic liver contains approximately 30 mg/g. Apart from the overall collagen content, the type I/type III collagen ratio increases in cirrhotic liver above 20 mg of collagen/g. Based on theoretical considerations, these structural changes should have a negative impact on the biomechanical properties and, more importantly in forensic aspects, on the vulnerability of the liver. Textbook-based received wisdom suggests that certain diseases (e.g. steatosis) increase the vulnerability of the liver, but no experimental data are available on the possible connection between pathological liver changes and the blunt force vulnerability of the human liver. Biomechanical studies, including compression and strain tests on liver samples, have been performed by previous researchers. These experiments used in vivo aspiration or ex vivo methods to define the mechanical properties of the liver and to create a mechanical resilience model for the human liver. However, none of the previous studies examined the effect of different pathological conditions on the mechanical properties of liver samples. The degree of liver stiffness is determined by elasticity and viscosity. The liver matrix (collagen content) determines elasticity, while fatty infiltration, perfusion pressure, and inflammation determine viscosity. A previous in vivo and ex vivo aspiration test on human liver suggested that increased connective tissue content increases the stiffness of the liver, resulting in an increase of the stiffness index. However, the results were inconclusive, possibly due to the low number of samples. An animal study on rats indicated that chronic liver diseases increase liver stiffness. The forensic pathologist is frequently challenged to evaluate the effect of preexisting liver diseases on the blunt force vulnerability of the liver. The previous studies, mostly aiming to develop better diagnostic procedures, can offer only limited data. Theoretically speaking, multiple factors can influence the blunt force vulnerability and resilience of the liver.
These factors include liver size, liver weight, tissue density, tissue structure, capsule strength, and age. Liver size associated with steatosis may increase vulnerability, and larger organ weight causes larger forces during sudden deceleration, but a previous study suggested that steatosis does not increase the chance of blunt force liver injuries. The aim of our study was to examine the possible effect of liver diseases on the vulnerability of liver tissue. A test system was developed and set up to emulate impact/compression-type blunt force injuries of liver tissue on human liver samples using quasi-static blunt force.
Tissue sample classification
Histological samples were taken from each tissue block, and the histological appearance was evaluated under a microscope using haematoxylin-eosin (HE) staining. Based on the microscopic appearance, six groups were formed using a modified liver steatosis and fibrosis classification (Fig. a–f):
Group 1 (Int): intact liver samples without any visible microstructural change (no steatosis or fibrosis);
Group 2 (Mil): mild steatosis (less than 1/3 of hepatocytes with steatosis);
Group 3 (Med): medium-grade steatosis (1/3 to 2/3 of hepatocytes with steatosis);
Group 4 (Sev): severe steatosis (more than 2/3 of hepatocytes with steatosis);
Group 5 (Fib): perisinusoidal, periportal, or bridging fibrosis, with or without steatosis;
Group 6 (Cir): liver cirrhosis (presence of nodules), with or without steatosis.
Aetiology had no role in the selection process (i.e. alcohol-induced vs other causes). Sixteen samples were excluded from the original study population because of microscopic signs of putrefaction, cellular (cancer or inflammatory cell), or foreign body infiltration. The average age of all cases included in the study was 58.72 years (SD ± 18.79; min–max, 4–100); 90 liver samples were obtained from males and 29 from females. The post-mortem interval (PMI) of the liver samples ranged from 1 to 20 days (mean 7.32, SD ± 4.04).
Samples
One hundred thirty-five liver samples were examined from human autopsy cases of the Department of Forensic Medicine, Medical School, University of Pécs. Prior to the autopsy, the bodies were stored at 4 °C from the onset of death, and no cadavers or samples were frozen, as previous freezing may interfere with the tensile properties and mechanical strength of tissues. Cases with an unknown time of death or showing any macroscopic sign of putrefaction were excluded from the investigation. Cases who had suffered a previous traumatic liver injury or a high-energy impact (e.g. car accidents or falls from heights), or who had poisoning or sepsis at the time of death, were also excluded. The tissue blocks were removed from the anterior surface of the eighth liver segment with a 3.5 × 3.5 × 2-cm-sized rectangular metal frame (Fig. ). The metal frame had a cutting edge, allowing uniformly sized, block-shaped samples to be taken. Considering its role in protecting the liver parenchyma, the liver capsule was not removed from the tissue blocks.
Mechanical tests
The tissue blocks were positioned into a 3.5 × 3.5 × 2-cm-sized sample tray connected to the top of a Mecmesin AFG-500 force gauge (0–500 N measurement range, 0.1 N resolution). The force gauge and the sample tray were incorporated into a test stand (Fig. ). The test stand was equipped with a downward-facing rod with a square-shaped head with a 1-cm 2 flat metal surface. A steadily increasing pushing force was applied to the capsular surface of the liver block.
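Because the indenter head has a flat 1-cm 2 surface, a force reading in newtons converts directly to a nominal contact pressure. The short sketch below illustrates the conversion with a hypothetical force value; it assumes uniform contact over the full head area.

```python
# Nominal contact pressure under the 1 cm^2 indenter head.
# Assumption: uniform contact over the whole head; the force value is illustrative.
AREA_M2 = 1e-4                      # 1 cm^2 expressed in m^2
force_n = 50.0                      # hypothetical peak force reading (N)
pressure_pa = force_n / AREA_M2     # p = F / A
print(f"{pressure_pa:.0f} Pa = {pressure_pa / 1000:.0f} kPa")  # 500000 Pa = 500 kPa
```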
The breakthrough force resulting in the rupture of the capsule and laceration of the liver parenchyma was electronically registered by the force gauge as the peak value (Pmax). Statistical analysis was performed with the SPSS 21 statistical suite (IBM). The Kruskal-Wallis test was used for the comparison of maximum force between groups. Where a statistically significant difference ( p < 0.05) was found, pairwise comparisons were performed to determine differences between the relevant groups. In the pairwise comparisons, significance levels were adjusted for multiple comparisons. The relation between maximum force and age was tested with linear correlation, and R 2 was calculated. The level of significance was 0.05. The groups (1–6) were proven comparable by age and sex. Forty-one liver samples showed no microscopic sign of structural change (group 1), 33 samples showed mild steatosis (group 2), 12 samples showed medium-grade steatosis (group 3), 6 samples showed severe steatosis (group 4), 11 samples showed fibrosis (group 5), and 16 showed definite cirrhosis (group 6). Most of the fibrotic and cirrhotic samples also showed some level of fatty infiltration. The registered Pmax values ranged from 18.1 to 162.7 N (average 50.41 N, SD ± 23.63). The possible correlation between PMI and Pmax was analysed to assess the possible effect of PMI on the blunt force vulnerability of liver tissue. No correlation was found between the PMI and the measured Pmax values ( p = 0.630) (Fig. ). No correlation was found between the PMI and Pmax in the intact liver group ( R 2 = 0.002, p = 0.592) (Fig. ). The effect of age on liver vulnerability was also assessed, and the age of the deceased in the intact group showed a weak correlation with the Pmax values ( R 2 = 0.122, p = 0.025) (Fig. ). Multivariate regression analysis of the complete dataset did not reveal previously unidentified correlations with regard to the parameters evaluated. The histological feature-based classification strongly correlates with Pmax ( p < 0.001), while age and PMI have no significant effect on Pmax. Age and PMI comparisons of the different histological groups showed no significant differences. The average Pmax value was 34.1 N in intact liver samples, 45.1 N in mild steatosis, 55.4 N in moderate steatosis, 57.6 N in severe steatosis, 63.7 N in fibrosis, and 87.1 N in the case of definite cirrhosis (Table ). The Pmax values were significantly higher in samples with microscopic structural changes than in intact liver samples ( p = 0.023, 0.001, 0.009, 0.0001, 0.0001 between group 1 and groups 2 to 6, respectively). A significant difference was found between mild steatosis (group 2) and cirrhosis (group 6) ( p = 0.0001). The difference between mild, moderate, and severe steatosis (groups 2–4) was not significant (Fig. ). Our study showed that steatosis, fibrosis, and cirrhosis decrease the blunt force vulnerability of liver tissue. There is a clear-cut gradual increase in Pmax with the progression of the degree of pathology. The mechanical properties of the fat tissue content and the increase of fibrotic tissue (collagen) explain the increased stiffness, as well as the increased tissue resistance, in liver diseases with these structural changes. The experimental data also suggest that the vulnerability of liver tissue increases slightly with age, but the underlying histological condition is much more important in determining the resistance to blunt force injury. The data presented support statistical data from a previous study.
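For illustration, the group comparison described in the statistics paragraph above can be sketched as follows. This is a hedged example with placeholder Pmax values, not study data; it uses Bonferroni-adjusted Mann-Whitney tests as one common pairwise follow-up, whereas SPSS offers its own adjusted pairwise procedure.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Placeholder peak-force values (N) per histological group; not study data.
pmax_by_group = {
    "Int": [30.2, 35.8, 33.1, 36.0], "Mil": [44.0, 46.9, 45.5, 43.8],
    "Med": [54.2, 57.0, 55.1],       "Sev": [56.1, 59.3, 57.4],
    "Fib": [62.5, 65.0, 63.2],       "Cir": [85.9, 88.2, 87.0],
}

h_stat, p_value = kruskal(*pmax_by_group.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    pairs = list(combinations(pmax_by_group, 2))
    alpha = 0.05 / len(pairs)  # Bonferroni adjustment over 15 comparisons
    for a, b in pairs:
        _, p_ab = mannwhitneyu(pmax_by_group[a], pmax_by_group[b])
        flag = "significant" if p_ab < alpha else ""
        print(f"{a} vs {b}: p = {p_ab:.4f} {flag}")
```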
Steatosis or fibrosis may increase the chance of liver injury due to increased organ size and weight, but they also increase the mechanical strength of the liver tissue. The individual differences among samples with a similar histological appearance can be explained by multiple factors. The collagen content of tissues can differ slightly even when the histological appearance is similar, and the mechanical stiffness of the parenchyma can be masked by the stiffness of the capsule. Diseases with microstructural changes affect the parenchyma to a larger extent than the capsule. Previous experiments proved that the capsule plays an important role in the mechanical strength of the liver, and the thickness and collagen content of the capsule can affect its mechanical stiffness. The orientation of collagen fibres also has a large effect on the biomechanical properties of the capsule, and therefore, it can be presumed that a similar effect is present in the parenchyma. In a living individual, the actual blood perfusion can also affect the mechanical properties due to its effect on viscosity. Theoretically, the impact speed might also influence the vulnerability of a liver affected by different parenchymal diseases. The evaluation of the role of further factors like capsule structure, collagen content, impact velocity, and angle was beyond the scope of the present study; these are matters of ongoing research awaiting publication. Our study, using a compression-type blunt force in a quasi-static setting, demonstrated that certain diseases of the parenchyma (steatosis, fibrosis) decrease the blunt force vulnerability of liver tissue. The data contradict the canonized teaching based on theoretical considerations, by which one may expect a more fragile, less mechanical-stress-resistant liver in a cirrhotic patient. Our data support previous statistical and experimental findings. Our study provides useful data when the effect of structural diseases on liver vulnerability has to be assessed, but due to the limitations explained above, the effects cannot be quantified precisely. Further experiments assessing the role of the overall collagen content of the liver and the role of capsule thickness and capsule strength, as well as the use of dynamic forces, can further detail, explain, and quantify the effect of pathological conditions on liver vulnerability. In general terms, it can be stated with reasonably good certainty during the forensic evaluation of a blunt force liver injury that a given blunt liver rupture is not negatively related to pre-existing hepatic steatosis or cirrhosis in the victim.
Patients’ perspectives on the quality of care of a new complex psycho-oncological care programme in Germany – external mixed methods evaluation results
bb54ce87-733d-4115-9558-96a256699e4c
10349427
Internal Medicine[mh]
Many cancer patients suffer from distress, fatigue, anxiety, depression or posttraumatic stress. Emotional distress is recognised as 'The 6th Vital Sign' in cancer care and has led to the implementation of screening instruments and evidence-based psycho-oncological interventions worldwide. Psycho-oncological care includes a multidisciplinary approach, entailing psychological, social, behavioural, and ethical aspects. Although many psycho-oncological interventions have been developed, implementing them into practice still remains a challenge. Hence, research is needed that considers the clinical, social and cultural context of cancer, including research on the dissemination and evaluation of interventions in different countries.
Psycho-oncological care in Germany
In Germany, only a fraction of cancer patients receives adequate psycho-oncological (PO) care, despite one in two cancer patients experiencing significant distress. Guideline-compliant provision and implementation of PO care is still considered challenging. First, there is currently no legal basis for uniform, area- and cost-covering financing. Furthermore, psycho-oncology is not offered nationwide in Germany; rural areas are especially underserved. Moreover, there is a strong sectoral separation of PO care structures between and within health and social services. There is a lack of nationwide expansion of cancer counselling centres regarding psychosocial area coverage, qualified counsellors, and equally secure funding. For these reasons, the national cancer plan calls for cross-sectoral and needs-oriented integration of PO care into oncological care.
The isPO intervention programme and its evaluation
The new form of care, named 'integrated cross-sectoral psycho-oncology' (isPO), aims to follow the national cancer plan's call by integrating such structures to reduce the described challenges in the future. The isPO intervention programme was developed, implemented, and externally evaluated between 2018 and 2022. At the patient level, the programme aims to reduce anxiety and depression in newly diagnosed cancer patients based on their individual needs within a 1-year care period (stepped-care approach). At the health-system level, the isPO project aims to develop a high-quality PO programme that may be available as an integrated, cross-sectoral form of care for cancer patients for possible adoption into standard nationwide care. For this, multiple programme components (Table ) were developed that relate to different aspects of care (structural, processual, clinical, and legal): a stepped care concept, including new care pathways; newly established PO care networks and care process organisation; a newly developed information technology-supported care documentation and assistance system called 'CAPSYS 2020', which supports PO service providers with billing, care coordination, and documentation; and isPO-specific quality assurance and improvement structures. According to definitions of complex interventions, isPO can be considered a complex care programme. The two components 'care concept' and 'care pathways' represent the clinical care aspects, whereas the other components represent formal administrative aspects of PO care that aim to meet legal requirements for care in the German healthcare system. In 2019, programme implementation began in four newly established PO care networks in North Rhine-Westphalia.
They each consisted of a cooperation between at least one certified oncological cancer centre hospital and local oncological practices (see Table ). Physicians referred patients who received their cancer diagnosis in the care networks to the isPO programme. During programme enrolment, patients' degree of distress in terms of anxiety, depression, and psychosocial risk factors was screened to allocate them to a care level designed to meet their individual needs. Depending on the assigned care level, various professions were involved in isPO care provision: licensed psychotherapists, psychosocial professionals, case managers, and specially trained cancer survivors called 'isPO onco-guides'. Figure illustrates the core isPO care pathways of the care concept. The isPO programme was externally evaluated. The evaluation process is based on the Medical Research Council framework for the analysis and assessment of complex interventions. A comprehensive study was interlinked with the programme to evaluate its effectiveness and quality of care. Quality of care is defined as the extent to which care is provided to patients in a manner that achieves the desired health-related outcomes and is consistent with current knowledge. In this regard, quality of care may be divided into structural, processual, and outcome quality. Quality of care can be assessed with a mixed-methods design.
Objective
In this article, we report on the assessment of the quality of care of the isPO programme from patients' perspectives, which is part of the external evaluation of isPO. We aimed to gain deeper insight into patients' individual isPO programme experiences, how they assessed the programme, and whether this assessment affected relevant patient-reported outcomes.
An explanatory sequential mixed methods design with qualitative and quantitative methods was applied to assess patients' perspectives on quality of care in isPO (Fig. ). During enrolment, patients could consent (1) to participate in the isPO programme, (2) to share their data (e.g. documentation data or statutory health data) with the interlinked evaluation study, and (3) to be contacted for evaluation surveys or interviews. Patients who enrolled in the programme were allocated to a care level based on their individual needs, which were assessed with screening instruments. Screening results and any care documentation were saved in CAPSYS 2020 by the isPO service providers. Using multiple pseudonyms, CAPSYS 2020 data were linked with the primary quantitative data collected via the external evaluation whilst conforming to German data protection laws. Primary data collection included a patient survey with two measurement times. Furthermore, a sample of patients who had finished their 1-year care in the isPO programme was interviewed about their individual care experiences. To gain differentiated insight into the care experiences, we aimed to consider the different periods of implementation, e.g. the first year of implementation, the period after programme optimisations had been made, and the period when a routinisation of care delivery had presumably occurred. Additionally, the coronavirus disease 2019 (COVID-19) pandemic began at the beginning of the second year of implementation, which we also considered in the evaluation. For this, three patient interview waves were conducted. The study procedure was approved by the ethics committee of the Medical Faculty of the University of Cologne.
Patient survey
isPO patients who enrolled and consented to being contacted for a survey or an interview were contacted by the isPO Trust Center. They were questioned twice during their 12-month-long care: 3 months into their care and at the end of their care (12 months). The isPO Trust Center contacted 1599 enrolled patients with a consent form and questionnaire by post 3 months after enrolment. Patients who wished to participate could return their completed questionnaire and consent form in two pre-stamped envelopes. Dillman's Total Design Method was applied to achieve the highest possible response rate. For this, patients were contacted twice after the initial send-out of the questionnaire. This means that after 2 weeks, patients received a postcard with a survey reminder, and 3 weeks after the postcard, they received a reminder with a new questionnaire and consent form. The Trust Center allocated a survey pseudonym (SP) to each patient so that patients who participated in the first survey could be contacted for the second one. The survey data were imported into SPSS for data analysis.
Measurements
To measure quality of care, we considered variables regarding patients' satisfaction with the different isPO service providers (case management, isPO onco-guides, psychosocial professionals, and psychotherapists) and their general assessment of patient care in isPO (subjective effectiveness, satisfaction and needs orientation, frequency and duration of appointments). Furthermore, patient-reported outcomes (global health status, work ability, anxiety and depression) and sociodemographic variables were reported. Table provides an overview of the measurements used. For sample characteristics, the following variables were linked from the CAPSYS 2020 data set: age, sex, ISCED index, care network (pseudonymised via numbers 1 to 4), and anxiety and depression.
Statistical analysis
Descriptive data analysis (frequencies, mean, standard deviation, minimum and maximum) was first conducted for patients' satisfaction with their respective isPO service providers and the scales concerning their care in isPO in general. Next, to determine the differences in global health status, anxiety and depression, and work ability between the survey time points, t-tests for dependent samples were calculated. Finally, linear regression analyses were conducted to assess whether quality of care predicted or was associated with global health status, anxiety, depression, and work ability. Items concerning the frequency and duration of appointments were initially categorical; however, the third category ('too often' and 'too long') was empty or chosen by only one person. Therefore, it was possible to dummy code them as 1 'suitable' and 0 'not suitable'.
Patient interviews
Qualitative analysis
Two members of the evaluation team were involved in data analysis using the software programme MAXQDA 2018. Qualitative content analysis was applied, which is a research-question-oriented, stepped, systematic approach. In the first interview wave, a coding system with core and sub-categories was developed deductively based on the interview lead questions. The two analysers then coded the transcripts independently using MAXQDA. In addition, inductive categories were derived from the material. The new categories were discussed to achieve a profound understanding of patients' experiences, and the final category system was agreed upon. Then, new coding was carried out using the final category system.
The content of the statements belonging to a category was condensed. For the second interview wave, the coding system of the first wave was used to analyse the material. In addition, inductive categories were derived. Again, the categories were discussed, and a consensus was reached on a final category system, which was used for the final coding. The same procedure was applied for the third interview wave. Data collection was conducted until rich data on patients' experiences were obtained. As we focus on subjective meaning in the data analysis, we refrained from quantifying the results. However, we use phrasings like 'some', 'all', or 'one patient' to differentiate individual opinions and experiences from the opinions of several or all interviewed patients.
Data collection
Due to the different stages of implementation and the COVID-19 pandemic, three interview waves were established. The first interview wave was conducted between April and July 2020 to cover care experiences from the first year of programme implementation, during which programme optimisations were conducted. The second interview wave was conducted between November 2020 and March 2021 to cover the second year of implementation, which showed more routinisation in care delivery. Lastly, the third interview wave was conducted from April to June 2021 to cover care experiences under pandemic conditions (COVID-19 pandemic). In all, we conducted 23 telephone interviews, of which 9 were in the first, 10 in the second, and 4 in the third interview wave. Because the pandemic started around March 2020 in Germany, all semi-structured interviews needed to be conducted via telephone. Purposeful sampling was applied. Patients were recruited from all care networks, with different cancer entities and according to gender, age, and intensity of isPO care (e.g. number of appointments or isPO care stage). Patients were first approached by their care service provider, e.g. their psychotherapist, during a face-to-face or telephone appointment and asked if they wanted to share their care experiences with the external evaluation team. It was explained to them that the isPO programme was being evaluated on its quality of care and that the interviews were part of this evaluation process. If patients agreed, the isPO Trust Center organised a date for a telephone interview. The interviewer had not met the interviewee before the interview. No other persons were present during the interviews. To begin the interview, an initial narrative question was asked: 'Could you describe how you perceived the moment of receiving the diagnosis?'. After that, impulse-giving guiding questions concerning quality of care were asked, followed by deepening questions where necessary. See Additional File for the overarching guiding questions (e.g. 'To what extent has isPO met your individual needs?') that were included in the interview guidelines to gain insight into patients' care experiences. The interview guideline was developed by the external evaluation team. It was piloted with three cancer survivors from the project partner, the House of the Cancer Patient Support Associations of Germany (HKSH-BV). Data collection was conducted by the all-female evaluation team, whose professional backgrounds included experience and qualifications in sociology, psychology, psychotherapy, nursing, public health, and health services research. Two team members hold a PhD degree, whereas the other two have a Master of Science degree and were in the process of obtaining their doctorate.
Interviews were audio recorded and transcribed. Notes were taken during the interviews.
Quantitative results
In total, we contacted 1599 isPO patients from all four isPO care networks to participate in the first patient survey (T1), of whom 62.2% ( n = 994) completed and returned their questionnaire. All patients who finished their isPO care by the end of April 2021 and participated in the first survey ( n = 867) were contacted for the second survey; 59.3% ( n = 514) of these patients participated in the second survey (T2). Patients' ages ranged from 18 to 93 years, with an average age of 56.88 years (T1). There were more female than male patients in the samples (T1: 64.7%, n = 637; T2: 67.8%, n = 345). Most patients were employed in the first survey sample (54.4%, n = 526), whereas slightly more patients were unemployed or retired in the second survey (50.8%, n = 248). Most patients were married or in a relationship (T1: 74.0%, n = 741; T2: 72.9%, n = 373). In the T1 sample, 58.05% of patients received care in Care Network 1 ( n = 577), while patients from the other three care networks were represented at rates of 10.16% ( n = 100) to 17.76% ( n = 176). In the second survey (T2), 53.50% of patients ( n = 275) received care in Care Network 1. Table presents descriptive results on the predictors and outcome variables (please see Additional File for the frequencies of single items regarding satisfaction with care). On average, patients rated their satisfaction with the different isPO service providers positively (Table ). Satisfaction with care and orientation to needs was also rated positively, whereas the rating of subjective effectiveness was slightly less positive (neutral to less satisfied). On average, the frequency and duration of appointments were perceived as suitable. For 423 isPO participants, data on global health status were available for both survey points. The mean value of the global health status was higher at T2 than at T1. The paired difference is significant according to the t-test for dependent samples: t(422) = -7.353, p < 0.001, 95% CI [-9.36, -5.41]. Linear regression analyses show that, except for therapeutic alliance and the items regarding the temporal framework of care, all other predictors significantly and positively predict or are associated with global health status: higher satisfaction with the isPO onco-guides and the psychosocial professionals is associated with higher global health status at the end of care (T2). Furthermore, higher satisfaction on the scales 'subjective effectiveness' and 'satisfaction and orientation to needs' is associated with higher global health status (T2). For 403 patients, data on work ability were available for both survey points. The mean value of work ability was higher at T2 than at T1. The paired difference is significant according to the t-test for dependent samples: t(402) = -8.11, p < 0.001, 95% CI [-1.48, -0.90]. Linear regression analyses show that, except for satisfaction with the psychosocial professionals, therapeutic alliance, and duration of appointments, all other predictors significantly and positively influence work ability (Table ). Higher satisfaction with case management (regarding health literacy-sensitive communication) and with the isPO onco-guides is associated with higher work ability at the end of care (T2). Furthermore, higher satisfaction with 'subjective effectiveness', 'satisfaction and orientation to needs', and the frequency of appointments is associated with higher work ability (T2).
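As a technical aside, the dependent-samples t-tests and linear regressions reported in this section can be sketched as follows; the analyses themselves were run in SPSS, and the data below are placeholders, not study values.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Placeholder values: one entry per patient with data at both time points.
ghs_t1 = np.array([50.0, 58.3, 41.7, 66.7, 33.3])  # global health status, T1
ghs_t2 = np.array([58.3, 66.7, 50.0, 75.0, 41.7])  # global health status, T2

# Dependent-samples t-test between the two survey time points.
t_stat, p_value = stats.ttest_rel(ghs_t1, ghs_t2)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Linear regression: does a satisfaction scale (placeholder predictor) relate
# to the T2 outcome? Dummy-coded items (1 'suitable', 0 'not suitable') would
# enter the model in the same way.
satisfaction = np.array([3.1, 4.2, 2.8, 4.6, 2.5])
X = sm.add_constant(satisfaction)
print(sm.OLS(ghs_t2, X).fit().summary())
```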
For 722 patients, data on anxiety and depression were available for all survey points. The mean value of the HADS (Hospital Anxiety and Depression Scale) significantly decreased across the survey points over time (T0 to T1: t(681) = 6.96, p < 0.001; T1 to T2: t(681) = 2.83, p = 0.005; T0 to T2: t(721) = 8.37, p < 0.001). Linear regression analyses show that, except for satisfaction with therapeutic alliance, subjective effectiveness, and duration of appointments, all other predictors significantly and negatively influence anxiety and depression: higher satisfaction is associated with lower anxiety and depression at the end of care (T2). Furthermore, higher satisfaction with 'satisfaction and orientation to needs' and the frequency of appointments is associated with lower anxiety and depression (T2).
isPO in routine care
All interviewed patients expressed a desire for the 'availability of isPO after the end' of the project phase and the expansion of the programme 'for all cancer patients'. They emphasised that this would require a cultural change in the perception of psycho-oncology in Germany. A lack of knowledge and stigma might lead to aversion towards PO care. One patient described it as a 'rethinking in society' that needs to take place. For this aim, 'comprehensive and continuous education' and marketing (e.g. through display walls, posters, radio, or advertisements) could be used. From the patients' point of view, local and nationwide implementation of PO care is only possible with a constant expansion of personnel resources, sufficient qualification opportunities for staff, and balanced financial resources. It is therefore important that sufficiently qualified psycho-oncologists are available for needs-based PO care (i.e. that staff positions are created). One patient understood the problem like this: 'It's no use, if I offer this and don't have the staff to take care of it in peace. […] and you only have one contact person. So he is hopelessly overworked and that doesn't help either.' Patients emphasised that long-term and continuous PO support provided stability and was therefore highly relevant. An 'abrupt end' to PO counselling after inpatient medical treatment should be avoided, and the continuation of needs-oriented care, such as in isPO for 12 months, is desirable. Several of the interviewed patients identified the need for sustainable structures to achieve this purpose. They expressed the need for 'every hospital that treats oncological patients' to offer PO services in a way that is financed and can therefore be expanded and/or maintained. Furthermore, some patients called for good interdisciplinary cooperation between oncology and psycho-oncology to enable comprehensive cancer treatment. A few patients expressed the need for better visibility of and knowledge about PO care within the health system, for example, among general practitioners. In their opinion, communication about available support programmes should be enhanced through professional articles in journals and through presentations of the study results of programmes like isPO. The interviewed patients showed awareness that the described aspects are a societal and cultural task that can only be implemented through recognition at the political and societal levels, through measures (implementation of guidelines), and through the commitment of all those involved. From the patients' point of view, different but focused actions (e.g.
information week for psycho-oncology, advertisements) might be necessary.
Qualitative results
Thirty-eight patients were approached for data collection, of whom twenty-three agreed to participate. Reasons for not participating included feelings of emotional instability, ongoing cancer treatment, or suffering from physical strains. Sixteen patients identified as female and seven as male, and their ages ranged between 32 and 65 years. Seventeen patients were employed. The number and type of isPO care ranged from low-intensity to high-intensity care (see Additional File for more details). All four isPO care networks and thirteen different cancer entities are represented in the data. The interviews took between 31 and 85 min. In all, the interview material comprises 21 h and 40 min. The final coding system consists of four levels. The head codings (first two levels) concerning quality of care are presented in Fig. . Condensed results are first presented according to positive experiences and perceptions of the isPO programme, followed by negative perceptions and optimisation needs. Finally, patients' attitudes towards isPO in routine care are described (see Additional File for quotes).
Positive experiences and perceptions of the isPO programme
All interviewed patients perceived the isPO programme as useful for their individual recovery. Receiving PO care as a fixed care component parallel to medical oncological therapy appeared meaningful to them. The main goals of the isPO programme were identified by the patients as 'professional support' and the patients being 'closely supported', which was considered important. Patients' access to the programme facilitated their programme acceptance. They found that 'being approached' was especially promising for enrolment. Most found it crucial that their treating physician (e.g. oncologist) approached them and recommended programme enrolment and that isPO was offered in the same institution where the medical treatment took place (e.g. hospital). Timely access to care was considered necessary because it gave patients a feeling of security. Moreover, it had a supportive effect, as they had an obvious contact person for potentially stressful experiences. However, it was argued that deciding to participate in the programme later should be 'handled flexibly'. Offering information and education on the programme at different levels (e.g. flyers, posters, in conversation with the case manager) with visual aids was considered motivating and necessary because 'a new programme like isPO is not self-explanatory' and PO is 'unknown to many patients and possibly has negative connotations'. Overall, patients felt that isPO supported them individually and that the content of the counselling, the type of support, and the interprofessional work of the isPO service providers were valuable. The opportunity for outpatient care especially was perceived as crucial for securing their PO care. The timeframe of care (up to 12 months) and the flexible intensity of care, which is oriented towards the programme's stepped care concept, were perceived as needs-oriented. It was perceived as valuable that sudden support needs could be flexibly addressed on an individual basis. Furthermore, some patients rated it positively 'that there were hardly any waiting times' and that appointments were 'possible at short notice'. They described the cross-sectoral continuity of care as a 'safety anchor'. Fears often only arose after discharge from the hospital.
Therefore, the continuity of care experienced in isPO from the inpatient to the outpatient setting, and having a fixed contact person, were considered helpful. This implied that patients knew the isPO service providers were familiar with their individual case and that there was therefore no information gap. Within professional PO support, the formation of a good therapeutic alliance was essential, leading to a feeling of being 'understood, supported and cared for'. The provision of information by the psycho-oncologists on further support options, and the involvement of relatives, was also positively received. Overall, patients described the flexible handling of the care setting as helpful. Over the course of the project, the way in which appointments were conducted became more flexible and was adapted to each patient's contextual circumstances. isPO was offered in different ways: (1) face-to-face in the rooms of the hospital itself, which was in line with the basic isPO idea, but also increasingly (2) by telephone, (3) online, or (4) via e-mail due to pandemic-related contact restrictions. However, it was emphasised that the first conversation should take place face-to-face where possible. Patients who utilised the low-threshold support offered by the isPO onco-guides perceived it as 'very helpful' and complementary to their professional care in isPO (e.g. psychosocial or psychotherapeutic support). They appreciated that the programme enabled 'an encounter on a peer level' at a time when patients 'would most likely not have sought contact with self-help'. Often, it was remarked that it 'felt good' to talk to a person who had 'gone through the same things'. The interaction and communication on an 'equal footing' felt 'liberating' and provided 'confidence and courage'. Young patients especially found it helpful to be able to exchange experiences 'authentically'. The isPO service providers (e.g. isPO case management or psychologists) were perceived as very professional. Furthermore, the organisational professionalism (e.g. the enrolment process with the health insurance companies, communication with the general practitioner at discharge) was highlighted. Here, the isPO providers' reachability was an important aspect, especially during the COVID-19 pandemic. The patients considered it reassuring that 'if there was a need to talk, you could always call, and someone would answer'. Patients articulated that the implementation of isPO was inhibited by the stigma attached to PO care in Germany. In their opinion, there has not yet been a sufficient cultural change in society, and this is especially true for the older population. Directly related to this is the obstacle of explaining the programme coherently. As the isPO programme is complex, most patients found it difficult to understand in detail, which may lead to programme rejection. This was especially noticeable in the interview data of the first two interview waves (early implementation phase). It improved as the programme progressed, presumably when the care networks began to use the optimised patient information materials (PIMs). In the third wave of interviews, after the optimised PIMs were utilised in all networks, patients no longer described any obstacles in this regard. Similarly challenging was the comprehensibility of the isPO onco-guide concept. Especially at the beginning of the implementation (first-wave data), it was noticeable that patients expressed little need to make use of the isPO onco-guides.
It became clear that this was mostly because the isPO service providers did not provide accurate information about the onco-guides' role and duties, which in turn led to misunderstandings. Furthermore, some patients were reluctant to meet with an onco-guide out of 'fear of being overloaded with other bad stories' or a 'desire for peace'. However, some patients developed an openness to meeting with an isPO onco-guide later in the course of their trajectory. During the pandemic, some patients refused the offer due to fears of infection with COVID-19 through face-to-face contact. Furthermore, the resource structure of the isPO onco-guides was perceived as partially hindering. In some settings, rooms for a 'sensitive conversation' were not available, and they unfortunately 'had to move to the cafeteria'. At the time of the COVID-19 pandemic, there were also 'bottlenecks in terms of staff', as isPO onco-guides also feared infection. Patients experienced that staff (e.g. physicians, nurses) on the oncological wards knew little about the isPO care programme. Furthermore, the external marketing for the isPO programme was described by some patients as 'insufficient' and 'hardly available'. Overall, many of the interviewed patients perceived the care period as not flexible enough. The abrupt end after 12 months was seen by some patients as 'questionable' because many patients were still undergoing medical therapy at this time; therefore, many might still have needed the support. They desired structured PO aftercare. Some patients had difficulty differentiating between the care options provided by isPO and other outpatient psychotherapeutic treatment services. This may have led to differing expectations between patients and service providers in relation to the content of care. In addition, some patients confused the programme and study contents, which is probably due to the interconnection of the programme and the study during the project period. Some described this as 'frustration' because the meaning of the different questionnaires was not clear to them. With the new PIMs (e.g. the timeline), this perception was minimised and was no longer described in the final interviews. Further programme optimisation included the desire for the expansion of psychosocial and family support because 'cancer is a "we" disease'.
Quality of care in isPO
Implications for quality of care in psycho-oncology
Methodological strengths and limitations
Within both methodological approaches (quantitative and qualitative), patients reported medium-to-high satisfaction with the care they received in isPO. Furthermore, we found significant improvements in patient-related outcomes (health status, work ability, and anxiety and depression) over time, and regression analyses indicate that satisfaction with quality of care influenced those outcomes. Even low-threshold care provision (e.g. via isPO case management, isPO onco-guides, and/or psychosocial care) significantly and positively affects these patient outcomes, in line with guidelines that recommend stepped and needs-oriented PO care . However, in our analyses, it is noticeable that therapeutic alliance (regarding psychotherapeutic care) does not show a significant association with the outcome variables. We presume an indirect effect, with therapeutic alliance significantly affecting general care satisfaction (subjective effectiveness and needs orientation), which in turn affects important patient-related outcomes.
Furthermore, satisfaction with the frequency of appointments had a positive effect on patients' work ability, anxiety, and depression. This underscores the importance of care continuity and outpatient care and might be of interest to health insurance companies for economic reasons. Patients who return to work earlier might financially relieve health insurance companies in the long term. Research based on German health insurance claims data revealed that psychotherapy significantly reduced care costs and days of incapacity to work (i.e. sick days). Wittmann et al. even suggested that 'every euro invested in outpatient psychotherapy' pays off threefold for society, under the premise that the therapy effect lasts for 1 year after the end of treatment. In addition, patients have reported that returning to work is an important aspect of full recovery; it positively affects quality of life and provides financial security and a sense of control. Therefore, needs-oriented, continuous outpatient PO care may help patients reach a level of functioning that enables them to return to work and improve their quality of life. The qualitative results allowed us to gain deep insights into patients' experiences with isPO. Moreover, they reveal specific enablers of and barriers to programme implementation, and therefore also to quality of care. Patients found outpatient care with a fixed contact person crucial for care continuity; this was also promoted by Fann et al. . The patients' experiences highlight that needs-oriented care was not achieved merely by allocating patients to the 'right' care level based on the sole occurrence of symptoms; it was also connected to increased flexibility in care that took patients' individual needs into consideration. For example, this was achieved by flexibly choosing the care delivery mode (e.g. phone, face-to-face, or virtual) and the appointment frequency. Our findings on satisfaction with appointment frequency align with further external isPO evaluation results, indicating that the number of appointments with psychotherapists (utilised by patients with higher care needs) significantly influences changes in anxiety and depression over time. Therefore, needs orientation seems to be a key component of PO care. Patients identified other aspects that could be handled more flexibly in the care programme: the time of programme enrolment, the period of care, and the extension of care to relatives. The care needs of patients' relatives are often neglected, even though they may also suffer from emotional and social impairments. Therefore, from a patient's perspective, it may be advisable to augment isPO with a component that aims to flexibly address relatives' support needs. Measuring patient-reported outcomes may enhance patient-centred care and be beneficial for clinical outcomes. The isPO programme endeavoured not only to close the healthcare gap of needs-oriented cross-sectoral PO care, but also to establish a structured, sustainable healthcare programme that includes adequate quality management. Through the external evaluation of the isPO programme, it became evident that it is crucial to include patients' perspectives on quality of care. Investing in gathering patients' perspectives offered the opportunity to gain specific and practice-relevant feedback on important optimisation needs and on implementation enablers and barriers, as also experienced by other researchers. Patients provided feedback specific to the implementation site (e.g. medical personnel not knowing about the isPO intervention programme) in addition to general feedback (e.g. that the care period needs to be handled flexibly according to the patients' needs).
Therefore, structured quality management at each care site and across sites (e.g. benchmarking or quality workshops) should be implemented and maintained to facilitate patient engagement in their care reality. Perceiving and considering patients' perspectives should be acknowledged as an important quality indicator for needs-oriented interventions. For the daily routine of quality management, specific PO quality indicators are required to sufficiently assess, monitor, and improve quality of care. Breidenbach et al. came to a similar conclusion; they exploratorily analysed audit data of cancer centres regarding the challenges of providing PO care and found diverse care barriers at both the patient and organisational levels. They called for the identification and integration of processual measures that especially promote integrated PO care in routine oncological care. This aligns with the advantages of integrating process measures of quality reported by Rubin et al., who highlighted that implementing such quality indicators may empower service providers and clinicians to proactively influence patient-reported outcomes. Based on patients' experiences in isPO, we formulated recommendations for the programme, especially regarding its adaptability to routine care (Table ). Furthermore, patients expressed diverse attitudes and recommendations that apply to PO care in general. Applying a mixed methods design is characterised as key to contextualising patient experiences in health care. It allowed us to use the strengths of both quantitative and qualitative methodologies and to offset their respective weaknesses. Further, it provided a deep understanding of patients' perspectives on the complex isPO intervention programme and helped to include their perspectives early and continuously in the programme's optimisation loops. Participative elements in a programme's development have been considered helpful by other researchers as well. Programme designers in isPO predominantly used a top-down approach during the development phase. However, by feeding back our results ('acute' results were communicated immediately) to the programme designers at least once a year, programme optimisations could be initiated by the designers according to the end-users' needs. This may make the programme more adaptable and better tailored for routine care. Therefore, formative evaluation data on quality of care may function as an indicator of the maturity level of a complex intervention and may aid the early identification of strengths and weaknesses that are specific to the implementation setting. In addition, it is important to consider that a comprehensive mixed methods design needs sufficient resources and adequate funding. Even though an exploratory sequential design might have been helpful for broader generalisation of the results, the explanatory design allowed us to explain the quantitative results and gather in-depth information on quality of care. Moreover, the qualitative approach may address some of the limitations of patient-reported experiences, such as confounding by health outcomes or measuring expectations rather than actual experiences. Our quantitative data suggest that processual quality-of-care measurements affect patient-reported outcomes, including the primary outcome of isPO (anxiety and depression).
Therefore, we propose that, during the evaluation of a new complex intervention, quality of care should be considered an important study outcome, as others have also emphasised. Furthermore, when assessing patient-reported outcomes, there is a risk of biases such as social desirability, common method, or recall bias. At the same time, however, patient-reported outcomes are valuable indicators of quality of care. Given the absence of a control group, conclusions regarding causality cannot be drawn. The synthesis of results on quality of care with results on a programme's effectiveness is therefore important when considering the complexity of a new form of care like isPO (effectiveness results will be published elsewhere). Interviewing patients after they had finished 1 year of care in isPO was additionally helpful, as they were able to reflect on the entire isPO care trajectory. Considering different moments (e.g. early implementation) within the implementation process allowed us to observe the programme normalisation process in the different care networks, as also promoted by May and colleagues. However, as the patient recruitment process for the interviews was initiated by service providers, this should be considered when interpreting the results. We might have an underrepresentation of patients who were (1) not satisfied with isPO, (2) timid, or (3) critically affected by oncological treatment. However, most patients we interviewed did not refrain from giving feedback on possibilities for optimisation. Finally, only patients who were mostly fluent in German were recruited for the interviews. Further studies exploring the needs of patients with limited language proficiency are indicated. Patients assess the isPO programme's quality of care positively. Likewise, patients' perspectives were crucial for identifying implementation enablers and barriers of this new form of PO care, reflecting the programme's feasibility and possible fit for routine care. Our results suggest a positive relationship between patients' satisfaction with quality of care and important patient-related outcomes (health status, work ability, anxiety, and depression). Therefore, investing in gathering data on patients' perspectives within a mixed methods design can be helpful for conducting comprehensive evaluations of complex interventions, in order to assess their quality of care and thereby their maturity. For designers, these data may support necessary programme optimisations, especially if participative elements were not considered in the project's design phase. Even though the isPO programme is highly complex, with its various interacting programme components, patients' experiences with the stepped and needs-centred care approach indicate that it is recommendable for routine care. However, persistent programme optimisation should be conducted and integrated within structured quality management.
Additional file 1: Table A. Impulse-giving guiding questions of the interview guidelines concerning quality of care.
Additional file 2: Table B. Frequencies of variables assessing patients' satisfaction with their respective service providers and isPO care in general.
Additional file 3: Table C. Characteristics of interviewed patients (table content first published in Krieger et al. 2022).
Additional file 4: Table D. Exemplary quotations for the coding system.
Magnetic steering continuum robot for transluminal procedures with programmable shape and functionalities
9f2bb373-7709-44bd-994d-68885931bd14
11069526
Microsurgery[mh]
The lumina of the digestive, respiratory, and urogenital organs or vessels of the body transport air, blood, fluids, food, and other substances inside the body or between the body and the exterior, potentially allowing invasive access to various target tissues. Exploiting the patency of these lumina, transluminal procedures have been proposed, reshaping medical treatment through less invasive operations. Small-scale medical devices can navigate through the lumen of an organ or a vessel to surgical sites for the diagnosis and treatment of prominent abnormalities and diseases, reducing postoperative pain and the risk of incisional complications. In particular, characterized by safety and adaptability owing to their passive compliance, millimeter-scale soft continuum robots offer a promising approach to less traumatic and more flexible transluminal procedures, thanks to innovative developments in microfabrication techniques, propulsion strategies, distal actuation methods, and mechanics-based kinematic models. However, even state-of-the-art small-scale soft continuum robots with actively steering tips necessitate interactions with the surrounding lumina. Their bodies can only be passively flexed, relying on the forces generated by interactions with the luminal walls. This results in a range of challenges, from potential medical risks to restricted mobility. One pressing concern is potential damage to fragile tissues in tortuous lumina due to the robot's interactions with the luminal walls, especially in complex, sharply curved pathways. Another issue is the accumulated frictional resistance created by the forces described above, especially in tortuous vessels, which can inhibit movement and result in the sudden, catastrophic release of stored elastic energy. Thermally responsive variable-stiffness magnetic continuum robots have the advantages of small scale, flexibility, and accessibility. They can reduce their stiffness during deployment to minimize forces on tissues, but the forces still accumulate as the robot advances deeper into the lumen. This not only entails the risks described above but also leads to localized buckling, preventing the transfer of thrust from the proximal to the distal end. A further challenge arises when the robot navigates into open areas such as the heart, stomach, bladder, or vascular junctions, where its mobility becomes severely restricted due to the absence of anatomical structures to interact with. Therefore, continuum robots should be able to actively comply with the features of the lumina and reach the target independently of tissue interaction. Robots exhibiting 'follow-the-leader' (FTL) behavior offer a promising solution, as they can operate without relying on environmental interactions. This behavior enables FTL continuum robots to navigate adeptly through the patient's anatomy while avoiding delicate regions. FTL behavior, as defined herein, denotes the movement of a robot body along a trajectory guided by its tip, which can be realized by integrating multiple independent active deformation segments to enhance the degrees of freedom (DOF). However, the challenge lies in arranging these deformation segments as efficiently as biological muscles, which has consequently limited FTL deployment to specific or pre-defined trajectories. An alternative approach to FTL continuum robotics involves the deployment of two concentrically arranged snake robots with a shape-lock mechanism.
Nonetheless, the intricate locking structure results in increased cross-sectional dimensions and a restricted angulation range. Here, we put forth a millimeter-scale continuum robot with FTL behavior, capable of apical extension and secure structural stability (Fig. ). Utilizing a dual-component system based on phase transitions, the robot performs periodic, tip-based elongation steered by a programmable magnetic field. Each motion cycle integrates a stable, solid-like backbone with a liquid-like component for forward advancement, allowing the robot's shape to be actively programmed or reprogrammed through trajectory planning of its tip, independent of environmental interactions. When integrated with advanced imaging technologies, our robot is capable of precise, magnetically guided navigation, akin to a bine threading through narrow and intricate lumina, while significantly reducing tissue damage and friction (Fig. ). Its mobility characteristics, including accessibility and dexterity, are unrestricted in open spaces (Fig. ). In addition to executing surgical functions through the incorporation of microsurgical tools (Fig. ), our robot is not merely a tool carrier. Upon reaching expansive anatomical areas such as the stomach, it can morph in situ into complex 3D structures that serve as either surgical instruments or sensing units. This overcomes the limitations that narrow natural lumina impose on surgical tools' geometries and functionalities. Uniquely, the robot can also sense external factors such as pressure and self-form knots to enhance its sensitivity (Fig. ). We substantiate these capabilities through both ex vivo and in vivo studies, demonstrating the robot's mobility, functionalities, and compatibility with existing medical technologies. Table compares the performance of our robot with existing continuum robots, including commercial endoscopes.
Working principle
Thermal characterization and management
Periodic numerical control of the robot for deployment
Magnetic navigation along planned paths to the destination
Formation of functional structures in situ
Potential clinical applications assisted by ultrasound imaging
Potential clinical applications assisted by endoscopic imaging
Clinical applications in/ex vivo assisted by X-ray imaging
Clinical safety and biocompatibility assessment
Although we initially assessed tissue damage qualitatively in the porcine esophagus, we further conducted in vitro experiments to quantitatively compare our robot's lateral force with that of a standard commercial catheter (ID = 3.5 mm, OD = 4.9 mm, Fresenius Kabi India Pvt. Ltd.) in a large angular position (Fig. and Supplementary Movie ). While our continuum robot can navigate a 180° bend without sidewall contact under precise positional control, we specifically tested the worst-case scenario in which the robot remains close to the sidewall. The results show that the force exerted by our robot on the sidewall is nearly 100 times lower than that of the commercial catheter. Furthermore, we compared our robot's sidewall force with that of an existing LMPA-based variable-stiffness continuum robot (Fig. ), for which both PTCs were simultaneously softened and advanced while the sidewall forces were recorded. Our robot exerted 10–15 times less force than the conventional variable-stiffness robot. Additionally, the phase transition minimizes the pressure of the softened PTC on the support PTC, hence reducing friction, further aided by the hydrogel lubrication layer.
The continuum robot used in our study is 400 mm long, and throughout all experiments we observed no issues with deployment due to frictional resistance. This length satisfies certain medical requirements. Moreover, because the Guider becomes flexible before extending in each motion cycle, its minimal flexural strength causes it to buckle under resistance, ensuring that tissue damage from excessive pressure or sudden forward popping is avoided. Our experiments provide substantial evidence of the biocompatibility of our robot. Initially, as demonstrated in Fig. , the PTC with its lubrication layer has been verified to be biocompatible. In the unlikely event of the lubrication layer being compromised, the PTC itself does not exhibit cytotoxic properties. Moreover, even in scenarios of potential leakage, the LMPA used in our robot does not show significant cytotoxicity. An additional layer of silicone tubing can be employed to ensure that any leaking LMPA remains isolated from the environment. Furthermore, any leakage would result in an increase in the electrical resistance of the PTC, as the cross-sectional area of the LMPA decreases. This change in resistance can be monitored to assess the state of the continuum robot (as shown in Fig. ; a minimal monitoring sketch is given after this passage). It is also noteworthy that the design of our robot minimizes the presence of sharp edges that could potentially damage luminal structures. Additionally, the PTC's thermal management capability ensures that the surface temperature remains below 42 °C in air (refer to Fig. and Fig. ) and is even lower in liquid environments, keeping within the biocompatible temperature range. In summary, these features enhance the clinical safety of our robot, making it a promising tool for medical applications. The detailed structure of our continuum robot is depicted in Fig. . We achieved the FTL behavior of the proposed continuum robot by alternately advancing a pair of phase transition components (PTCs) that are constrained to move only axially relative to each other (Fig. ). These PTCs can transition between solid and liquid states, altering their stiffness to switch roles between support and movement. The PTC capable of active steering and navigation under the applied magnetic field is named the Guider, and the other, which follows the trajectory of the Guider, is called the Follower. A low melting point alloy (LMPA) was selected as the phase transition material, and magnetic actuation was used to navigate the robot. The LMPA and the heating circuit (Fig. ) were encapsulated in silicone tubes to obtain the primary PTC, which can be fabricated with diameters of a few millimeters and lengths of several meters (Fig. ). The internal pressure adds self-healing properties to the PTC (Fig. ). Figure illustrates the preparation process of the robot (see the detailed preparation process in Materials and Methods). The Guider is obtained by embedding a tiny permanent magnet in the tip of a PTC, and the Follower by bonding a silicone tube along the axial direction of a PTC (Fig. ). The Guider was mounted coaxially within the silicone tube of the Follower, allowing the two PTCs to slide only axially against each other. The friction that prevents relative sliding can be significantly reduced by curing hydrogel on the surfaces of the Guider and the Follower (Fig. ).
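The resistance-based state monitoring mentioned above can be sketched in a few lines. This is an illustrative sketch only: the resistivity, length, and cross-section values are placeholders, and the 5% threshold is our assumption, not a value from the paper.

```python
# Sketch of LMPA-core integrity monitoring via R = rho * L / A: a leak
# shrinks the effective cross-section A and therefore raises the measured
# resistance. All numeric values are illustrative placeholders.
RHO_LMPA = 1.2e-6       # Ohm*m, placeholder resistivity of the alloy core
LENGTH = 0.4            # m, robot length used in the study
AREA_NOMINAL = 7.85e-7  # m^2, nominal core cross-section (d = 1 mm)

def core_resistance(area_m2):
    """Resistance of the LMPA core for a given cross-sectional area."""
    return RHO_LMPA * LENGTH / area_m2

def leak_suspected(r_measured, rel_threshold=0.05):
    """Flag a possible leak if resistance rose >5% above nominal."""
    return r_measured > core_resistance(AREA_NOMINAL) * (1 + rel_threshold)

print(f"nominal core resistance: {core_resistance(AREA_NOMINAL):.3f} Ohm")
print(leak_suspected(core_resistance(AREA_NOMINAL * 0.9)))  # 10% area loss -> True
```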
The Guider and Follower operate in an alternating manner, forming a cooperative relationship as follows: (1) Initially, the Guider, heated to the phase-transition temperature, becomes flexible for forward movement, while the Follower remains rigid to provide support. With multiple orders of magnitude between their flexural strengths (Fig. ; a detailed structural analysis is provided in Materials and Methods), the rigid Follower maintains the shape of the deployed segment of the continuum robot, serving as a conduit that delivers the flexible Guider forward under proximal thrust. The flexible outstretched segment of the Guider is deflected toward the direction of the magnetic field. (2) Subsequently, the Guider and Follower switch states, transitioning their roles within the motion cycle of the robot. The now rigid Guider stabilizes the deployed shape, while its protruding segment forms a new conduit, constraining the shape of the propelled flexible Follower. The maximum cross-sectional size of the robot is ~3–4 mm, with a potential length extending up to meters. For demonstration purposes, we employ a robot with a length of ~400 mm (Fig. ).
In the process of thermal management optimization, to obtain the minimum current required to heat the LMPA to the phase-transition temperature in a fluid environment (water and air) at 37 °C, we simulated the stabilization temperature and the time to reach the melting point as functions of the magnitude of the current flowing through the resistive heater (Fig. ). The results show that, due to the high thermal conductivity of water, a current of at least 0.9 A is required to change the stiffness of PTCs in water, while only 0.3 A is required in air (Fig. ). The LMPA requires a response time of a few seconds in both environments. Although an increase in current directly reduces the response time, the compact distance between the two PTCs causes the Joule heat to rapidly soften the other, rigid PTC as well through heat transfer, resulting in the collapse of the entire shape. We simulated the correspondence between the average temperatures of the Guider and Follower when they were heated separately in a fluid environment at 37 °C (Fig. ) and obtained the maximum temperature thresholds of the two PTCs that keep the entire structure intact (Fig. and Fig. ). Proportional-integral-derivative (PID) control was applied to accurately control the PTC temperature based on the mapping of temperature T to the PTC total resistance change ΔR_tot (the change in total resistance relative to 37 °C). A sourcemeter was used to heat the circuit and detect the resistance. To investigate the relationship between the temperature and the change in total resistance, a PTC without electrical heating was placed in a thermostatic water bath, where the temperature of the PTC could be set accurately by changing the temperature of the water. The total resistance of the PTC corresponding to each temperature was captured by the sourcemeter, and the resistance at 37 °C was subtracted to obtain Fig. . We considered only the completely rigid and completely flexible PTC cases and adopted 4.3 mΩ and 31.8 mΩ as the bounds of complete phase transition. The PTC can transform from a rigid into a fully flexible state in ~5 s in air and 10 s in water. Stiffening ends about 15 s and 10 s after the start of cooling in air and water, respectively (Fig. and Supplementary Movie ). As can be seen from the heat transfer simulation (Fig. ), a completely flexible PTC can theoretically cool down to a completely rigid state quickly; the difference between experiment and simulation may stem from deviations in robot fabrication as well as differences in the experimental environment. The surface temperature of the flexible PTC can be stabilized at 42 °C (Fig. ), within the biocompatible temperature range (i.e., below 50 °C). Moreover, axial deformation does not affect the total resistance, which means that bending of the PTC due to magnetic torque does not disturb the relationship between temperature and the change in total resistance.
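The PID loop described above can be sketched compactly. This is a minimal illustration under stated assumptions, not the authors' implementation: the sourcemeter wrapper (read_resistance/set_current), the gains, and the calibration points mapping ΔR_tot to temperature are all placeholders.

```python
# Minimal sketch of resistance-feedback PID temperature control for one PTC.
# The sourcemeter interface, PID gains, and calibration points are assumed
# for illustration; the real curve would come from the thermostatic
# water-bath calibration described in the text.
import time
import numpy as np

CAL_DR = np.array([0.0, 4.3, 15.0, 31.8])    # mOhm, example calibration points
CAL_T = np.array([37.0, 42.0, 47.0, 52.0])   # degC, example temperatures

def resistance_to_temp(delta_r_mohm):
    """Interpolate the temperature from the total-resistance change."""
    return np.interp(delta_r_mohm, CAL_DR, CAL_T)

def pid_hold(sourcemeter, target_temp, kp=0.05, ki=0.01, kd=0.0,
             i_max=0.9, dt=0.1, duration=10.0):
    """Hold the PTC at target_temp using total-resistance feedback."""
    integral, prev_err = 0.0, 0.0
    r0 = sourcemeter.read_resistance()  # baseline resistance at 37 degC
    for _ in range(int(duration / dt)):
        delta_r = (sourcemeter.read_resistance() - r0) * 1e3  # Ohm -> mOhm
        err = target_temp - resistance_to_temp(delta_r)
        integral += err * dt
        current = kp * err + ki * integral + kd * (err - prev_err) / dt
        sourcemeter.set_current(min(max(current, 0.0), i_max))  # clamp to safe range
        prev_err = err
        time.sleep(dt)
```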
The experimental equipment includes the magnetic actuation system, the advancement unit, and the sourcemeter (Fig. ). The Guider is first pushed out of the silicone tube on the Follower by the advancement unit, then heated and softened by the sourcemeter, and finally deflected by the magnetic field generated by the magnetic actuation system. Under the constant-curvature assumption, the deformation of the Guider's tip under a magnetic field can be quickly calculated (see kinematic modeling in Materials and Methods for details). To minimize the effect of gravity, the Guider should not advance more than 40 mm at a time (Fig. ). To maintain movement efficiency, the advancing distance in each step should be no less than 20 mm. Consequently, the accessible area and inclination angle of the Guider's tip in a motion cycle can be obtained (Fig. ). Since the entire shape of the robot follows its tip trajectory, once the length of advance l and the magnetic flux density B in each motion cycle have been calculated, the robot can be deployed along the planned path. The motion of the robot in one cycle is shown in Fig. . The robot navigated under a series of pre-planned magnetic fields and formed a predetermined body shape (Supplementary Movie ). The time sequence of the magnetic actuation system, the advancement unit, and the sourcemeter was planned ahead of time. In the motion cycle, the Guider first undergoes a controlled heating process to achieve flexibility under PID control. Once its total resistance change stabilizes, indicating attainment of the flexible state, the Guider is propelled forward by the advancement unit and steered in response to the applied magnetic field. Subsequently, both the advancement unit and the heating circuit are deactivated, while the magnetic field is maintained until the Guider transitions from flexibility to rigidity. Following this, the Follower is heated to flexibility and advanced along the trajectory of the Guider by the advancement unit. Acknowledging potential errors in the kinematic model and practical contingencies, priority is accorded to operator inputs to the experimental equipment during pre-programmed movements of the robot.
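The timed sequence of one motion cycle can be summarized as a short control routine. Every device wrapper below (heat_until_flexible, push, apply, and so on) is a hypothetical abstraction over the sourcemeter, advancement unit, and magnetic actuation system; this is a sketch of the control logic, not the actual control software.

```python
# Sketch of one follow-the-leader motion cycle; all device wrappers are
# hypothetical abstractions of the hardware described in the text.
def motion_cycle(guider, follower, advancer, field, l_step, B, gamma):
    """Extend the robot by l_step (20-40 mm) along one planned segment."""
    guider.heat_until_flexible()     # PID-hold above the melting point
    field.apply(B, gamma)            # steer the flexible protruding tip
    advancer.push(guider, l_step)    # rigid Follower acts as the conduit
    guider.stop_heating()
    guider.wait_until_rigid()        # field held; ~15 s in air, ~10 s in water
    field.off()
    follower.heat_until_flexible()   # roles switch for the second half-cycle
    advancer.push(follower, l_step)  # Follower tracks the Guider's trajectory
    follower.stop_heating()
    follower.wait_until_rigid()
```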
Ultimately, our robot demonstrates the capability of follow-the-leader (FTL) deployment along arbitrary paths, exhibiting accessibility and dexterity reminiscent of a bine. It can even navigate complex terrains, including climbing and curling around branches (Fig. and Supplementary Movie ). As can be seen in the video, the forces between the Guider and Follower increase because of the twisting caused by the asymmetric extension of the effective cross-section when curling; however, this does not hinder the relative sliding of the two PTCs, since the hydrogel considerably reduces the coefficient of friction.
Under magnetic steering, the proposed continuum robot, capable of FTL behavior, can navigate along planned paths through highly unstructured environments with little or no interaction (Fig. ). The working environment was divided into two layers: water and air. The robot needed to pass through the barriers in water to enter the second layer and then navigate through a channel in path 1 and a set of rings in path 2 to reach the destination twice in air (Fig. ). The applied magnetic field and the forward distance in each motion cycle were calculated in advance. The forward distance was adjusted to the complexity of the environment to balance the accuracy and efficiency of movement. The experimental demonstration of the fabricated prototype is shown in Fig. and Supplementary Movie . The magnetic steering continuum robot was able to follow a planned trajectory through the constrained environment in water to the top entrance. After entering the second layer in air, the robot followed path 1 to the destination and retracted to the top entrance. The robot then followed path 2 through a series of loosely arranged rings to the destination again. The large angle between adjacent small rings makes it hard for existing small-scale soft continuum robots to pass through without exerting pressure on the rings (Fig. , left). Finally, the advancement unit withdrew the deployed robot along the deployment path. Sometimes the robot has to make localized contact with the environment to resist gravity-induced deformation due to long cantilevers, but the weight of the millimeter-scale robot does not result in tissue damage or non-negligible frictional resistance.
Our focus extends beyond the inherent challenges and requirements faced by traditional soft continuum robots. We also explore the unique capability of our robot to actively and dynamically program and reprogram its entire body in situ, transforming into a functional instrument. Upon reaching the target site safely, such as the bladder, ventricles, or abdominal cavity, the robot can continue to progress along a predetermined path, morphing into complex, functional structures suitable for various surgical tasks or sensing applications. This capability allows the robot to overcome the limitations that the small inner diameter of natural lumina or access ports imposes on the geometries and functionalities of surgical tools (Fig. ). Compared with structures employing strategies based on elastic deformation or origami, our robot can form and re-form multiple functional structures in situ as needed without prior programming. Moreover, it has the rigidity to withstand disturbances and can thus perform reliable surgical procedures that require large forces and high accuracy. Coming out of the lumen, the magnetic steering continuum robot can form various simple structures in situ, as demonstrated by letters ('B', 'M', 'C', 'R'), a lasso, and an antenna (Fig. and Supplementary Movie ). To achieve the functionality of the lasso, we added the capability of segmental variable stiffness to the robot. The internal heating circuit of the PTC is shown in Fig. : an additional resistive heater was added to the circuit.
By selecting the resistive heater, the length of the variable-stiffness segment of the PTC can be adjusted. Thus, the robot maintained the shape of its rigid anterior segment to capture the object when withdrawing (Fig. and Supplementary Movie ). Moreover, the initial resistive heater had a segment folded back at the tip of the PTC, which allowed segment 1 to be heated with twice the power of segment 2 (Fig. ). The current can be adjusted to make segment 1 flexible while segment 2 remains rigid (Fig. and Supplementary Movie ). We also demonstrated the ability of the robot to form relatively complex functional structures (Fig. and Supplementary Movie ): the robot first moved along a planned trajectory to form a loose knotted structure, and then its entire body became flexible and was pulled taut by the force generated by the advancement unit and the gradient magnetic field. We used a 50 × 50 × 50 mm permanent magnet (N52) (Fig. ) to generate the gradient field. The magnetic flux density and magnetic flux density gradient around the permanent magnet were measured (Fig. ). The magnetic flux density gradient was a minimum of 1.5 T/m and a maximum of 23.1 T/m along the axis within 50 mm of the surface of the permanent magnet. We found through simulation that, within 50 mm, sufficient force could be generated to tie a knot (Fig. ). The knot increases the sensitivity of the total resistance to radial deformation from pressure (Fig. ), which gives the robot the potential to act in situ as a pressure transducer. We measured the sensitivity of the knot to radial force (Fig. ). The applied force and the resistance change were linear within the range of 0–0.4 N (k = 41.24 mΩ/N, R² > 0.985) (Fig. ). It is, to the best of our knowledge, the only continuum robot so far that can actively tie itself into a knot.
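Using the reported calibration, the knot's resistance change converts directly into a radial force estimate. The sketch below simply applies the published sensitivity and validity range; the function name is our own.

```python
# Sketch: radial force from the knot's resistance change, using the reported
# linear sensitivity k = 41.24 mOhm/N (valid for 0-0.4 N, R^2 > 0.985).
K_MOHM_PER_N = 41.24

def radial_force(delta_r_mohm):
    """Estimate the radial force (N) from a resistance change in mOhm."""
    force = delta_r_mohm / K_MOHM_PER_N
    if not 0.0 <= force <= 0.4:
        raise ValueError("outside the calibrated 0-0.4 N range")
    return force

print(f"{radial_force(8.25):.2f} N")  # an 8.25 mOhm change -> ~0.20 N
```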
Active deformation without relying on interactions with the vessel prevents the distal end of the robot from popping forward due to the unpredictable release of built-up forward pressure accumulated in frictional contact at multiple vascular bends, making the robot suitable for interventional procedures, such as cardiac radiofrequency ablation, thrombus removal, and aneurysmal embolization. We also explored the functionalities of the robot for clinical applications. This robot can actively form a hook with a size larger than the entrance size to grasp foreign objects in the gastric model (Fig. and Supplementary Movie ). The segmental variable stiffness kept the shape of the functional unit unchanged during withdrawal. Meanwhile, we characterized the precision and accuracy of the continuum robot by testing its localization accuracy for reaching the target position several times ( n = 5) in a row (Fig. and Supplementary Movie ). Figure demonstrated that with good visual feedback, the robot’s localization accuracy (Error + SD) was 0.791 ± 0.291 mm and the motion trajectory remained essentially unchanged over multiple deployments. As an example relevant to potential medical applications, we equipped the robot with a working channel (ID = 0.5 mm) and a minicamera with built-in illumination (OD = 1.2 mm) to demonstrate the concept of diagnosis and treatment of gastric lesions (Fig. and Fig. ). The supplementary tools are loaded in the Guider’s tip, which keeps the overall size from increasing (Fig. ). Figure is the experimental equipment for the demonstration and the practical environment containing the porcine stomach and esophagus (see Supplementary Materials). A minicamera was pre-fixed in the stomach to provide global images of the experiment. In the clinic, another robot can be used to arrange the minicamera in a suitable stance at the desired location to provide global images (Fig. ), because the magnetic field does not affect the rigid robot’s shape in the confined environment in vivo (Fig. ). Aided by on-board minicamera imaging, the robot guided by a magnetic field was able to easily slide through the esophagus into the relatively open stomach (Fig. and Supplementary Movie ). After finding the lesions (indicated by the red quadrilateral and triangle) in the stomach, the distal end was sequentially deployed close to the lesions, and the drug (indicated by blue and red ink) was sprayed onto the lesions through the manipulation channel. In practical medical applications flow control devices can be used to precisely control the amount of drug. The distal end finally moved toward the stomach antrum region to examine tissue near the pylorus. These tasks demonstrate that the distal end of the robot can be repositioned at different angles to accommodate further surgery , or minimally invasive bioprinting , . We further validated the potential for clinical applications of the magnetic steering continuum robot with the aid of X-ray imaging. The robot entered the porcine cadaver (50 kg; purchased from Harbin Veterinary Research Institute, China) through a vascular incision and passed through the inferior vena cava into the right atrium, which could potentially be applied in cardiac radiofrequency ablation (Fig. ). First, the vessels were perfused with iohexol contrast agent and imaged by X-ray. The robot was then inserted into the vessels and passed through the inferior vena cava into the thoracic cavity. 
Positioned by X-ray imaging, the robot can advance through the superior vena cava for neurointervention or can be deflected into the right atrium for cardiac interventions under the influence of a magnetic field generated by a permanent magnet (Fig. and Supplementary Movie ). As seen in the radiographic images, the robot easily passed through a large angular turn into the right atrium, reducing the intervention's difficulty and allowing the interventionalist to focus on subsequent sophisticated surgical maneuvers. Using X-ray imaging, we also verified the capability of the robot to form functional units in the stomach of a live porcine model in vivo. Although Pronase Granules were swallowed preoperatively to inhibit the digestive juices secreted in the stomach, food residues still resulted in an impaired field of view. Therefore, we did not equip the minicamera for the in vivo experiment but used X-ray imaging instead. As shown in Fig. , the pig was anesthetized, with vital signs maintained by infusion. The pig was then transferred to the X-ray imaging equipment. The operator first inserted a silicone tube coated with hydrogel into the porcine esophagus using a laryngoscope. The robot was then inserted through the silicone tube into the gastrointestinal tract and passed through the cardia into the stomach. As shown in Fig. and Supplementary Movie , the robot was guided by the magnetic field generated by a permanent magnet to form a hook in situ in the porcine stomach with dimensions larger than the diameter of the cardia, which can be used as a surgical tool. Following the in vivo experiments, the upper gastrointestinal tract of the live porcine model was examined using a commercial gastroscope (EV-230, Shenda Endoscope Co.) to evaluate any damage caused by the robot (Fig. ). No obvious tissue or mucosal damage was found. The ability of small-scale soft continuum robots to autonomously extend apically while maintaining structural stability, without relying on forces generated by environmental contact, is paramount for enhancing surgical safety, reducing complexity, and expanding functional capabilities. In this study, we introduced a magnetic steering continuum robot exhibiting follow-the-leader (FTL) behavior to address this critical requirement. Our robot consists of two PTCs that can move only axially relative to each other, alternately changing their stiffness and advancing periodically. Throughout each motion cycle, these components alternate between serving as a stable, solid-like backbone and a liquid-like element for progressive advancement. This design enables the robot's shape to be actively and dynamically programmed or reprogrammed through trajectory planning of its tip, thereby enabling operation independent of environmental interactions. The robot's structure, lubricating layer, working temperature, and control model have been optimized to satisfy the movement requirements. Combined with clinical imaging technologies, it can navigate minimally invasively along a planned path through a tortuous body lumen into a relatively open space of an organ, and then morph into functional structures in situ, which cannot be accomplished using existing small-scale soft continuum robots. The demonstrated capabilities in realistic, clinically relevant ex vivo and in vivo environments illustrate the potential of the robot for medical applications. Furthermore, integrating the robot with existing or cutting-edge medical technologies will promote clinical translation in the future.
US- and X-ray-based imaging are well-established techniques regularly used for clinical diagnostic purposes. We integrated ultrasound imaging into our system, and the experiment in the phantom vasculature showed that US, with its high spatial and temporal resolution, provides real-time feedback on the overall position of the robot in deep tissue. Moreover, the experiments in the heart of a porcine cadaver and the stomach of a live porcine model showed that radiographic imaging can visually determine the in vivo environment as well as the location of the robot, so we will introduce X-ray imaging into our experimental system in future work. For more precise surgery in the complex in vivo environment, magnetic localization will also be incorporated to obtain the tip's real-time attitude. The miniaturization of surgical instruments is also a proven technique for transluminal procedures. Depending on the working environment and the target, several combinations of functional micro-units can be loaded onto the robot. With images captured by a miniature camera, surgeons can operate microsurgical instruments to biopsy and treat a lesion. However, it should be noted that the large length-to-diameter ratio of the structure and the small strength-to-density ratio of the material result in a low flexural strength of our robot. Gravity deforms the robot, but this does not affect the robot's ability to perform transluminal procedures. First, although the robot is in localized contact with the environment due to gravity during navigation, the small pressure does not result in tissue damage or non-negligible frictional resistance. Second, although the functional structures formed by the robot are also affected by gravity, their small size makes the effect negligible. Moreover, magnetic force can be used to reduce the effect of gravity for high-precision operations. In future clinical procedures, surgeons will use movable electromagnets or commercially available magnetic navigation systems to operate the magnetic steering continuum robot. Although an advanced hybrid control strategy will reduce the surgery time and lower the working temperature below the ambient temperature, adding a miniature cooling structure remains challenging. In future work, we will develop corresponding preparation methods. We envisage a future where our improved robot can safely and rapidly pass through any lumen, carrying or forming in situ multiple surgical tools to accomplish clinical missions. This robot will enable a broader range of applications and a higher level of safety in transluminal robotic surgery. It will also provide more functional and intelligent tools for minimally invasive surgery.
Fabrication and structure analysis
PTC heating circuit
Calculation of PTC dimension
Heat transfer simulation
Kinematic modeling
Experimental equipment
Magnetic navigation
Experiment in the stomach of a live porcine model
Ex vivo cytotoxicity test
The PTC was fabricated using commercially available materials, including Cerrolow 117 (Bolton), silicone tubes (Guofengyuan), a tiny permanent magnet, a resistive heater, silver wire, enameled wire, and polyvinylpyrrolidone hydrophilic coating solution (MediCoat, BioNational Biomaterials). The silicone tube's wall thickness was 0.15 mm. Two sizes of silicone tubes (ID = 1 mm, OD = 1.3 mm, and ID = 1.7 mm, OD = 2 mm) were used for encapsulation and assembly, respectively.
The cylindrical permanent magnets, with a diameter of 1.5 mm and a height of 8 mm, had a magnetization of ~0.8 MA/m. The resistivities of the resistive heater, silver wire, and enameled wire, all with a diameter of 0.1 mm, were 0.48 × 10⁻⁶, 1.586 × 10⁻⁸, and 1.7 × 10⁻⁸ Ω·m at 20 °C, respectively. A low melting point alloy (LMPA) was selected as the PTC's phase transition material because of its maximum elastic modulus of several GPa and the largest modulus ratio (maximum to minimum) among the reported and analyzed state-of-the-art variable-stiffness materials (Table , Fig. ). Among them, Cerrolow 117 is used in medical applications owing to its suitable phase transition temperature (47 °C). Moreover, interface phenomena (e.g. oxidation, wettability) of the liquid alloy hamper re-soldering between fractured surfaces. During fabrication, the silicone tube, propped up by the additionally injected LMPA, maintained a continuous pressure on the LMPA after encapsulation. This internal pressure adds self-healing properties to the PTC, rejoining the two sides of a fracture when the PTC is heated (Fig. ). We describe the preparation process in detail in the Supplementary Materials. A universal tensile testing machine (strain rate 0.05%/s, ambient temperature 25 °C) was used to test the mechanical properties of the LMPA used in our robots (Fig. ). Although the ultimate stress is about 35 MPa, a strain of 40% was reached before fracture occurred. This high level of ductility suggests that a continuum robot made of this material, when properly handled, is highly unlikely to fracture during clinical applications (Fig. ). To lock the 3D shape of the already deployed segment of the robot, the rigid PTC must have sufficient flexural strength to resist gravity and restrain the other, flexible PTC, which constrains the dimensions of the PTC. Two metrics were calculated to represent the ability of a rigid PTC to maintain its shape (see the calculation of PTC dimension in Materials and Methods for the exact procedure): (1) the maximum deflection of an 80 mm long robot supported by only one rigid PTC as a cantilever beam under gravity; and (2) the ratio of the flexural strength of a rigid PTC to that of a flexible one (RFS). The calculation shows that, although the flexural strength of rigid PTCs of all diameters is much greater than that of flexible ones, the diameter still needs to exceed 1 mm to reduce the impact of gravity on the shape (Fig. ). However, if the effect of gravity on shape is not a concern, the radial dimensions of the PTCs can be further reduced for more minuscule and tortuous body lumina, such as the cerebral vasculature. Due to the thickness of the silicone tubing encapsulating the LMPA, the ratio of the flexural strength of the PTC between the rigid and flexible states decreases as the diameter decreases. To ensure the interlocking effect of the Follower and Guider, as well as the space required for the heating circuit, the diameter of the PTC should be larger than 0.5 mm. As shown in Fig. , the heating circuit contains a resistive heater, silver wire, LMPA, enameled wire, and a sourcemeter. The resistive heater has a high resistivity and a low temperature coefficient. When a current is applied to the heating circuit, the energy is concentrated mainly in the resistive heater, and the temperature of the LMPA increases through conduction of the Joule heat released from the heating wire. Silver wire is used to ensure the robustness of the heating circuit.
When the rigid LMPA breaks, the contact of the fractured surfaces is so unstable that it would prevent the heating circuit from melting the LMPA. Therefore, the silver wire is connected in parallel with the LMPA to keep the whole heating circuit closed at all times. Silver wire with a small cross-sectional area has a much higher resistance than the LMPA core, so the total resistance R_tot can be approximated as

$$R_{\mathrm{tot}} = R_{\mathrm{Ew}} + R_{\mathrm{Rh}} + \frac{R_{\mathrm{LMPA}}\, R_{\mathrm{Aw}}}{R_{\mathrm{LMPA}} + R_{\mathrm{Aw}}} \approx R_{\mathrm{Ew}} + R_{\mathrm{Rh}} + R_{\mathrm{LMPA}}$$

where R_Ew, R_Rh, R_LMPA, and R_Aw represent the resistance of the enameled wire, resistive heater, LMPA, and silver wire, respectively. The terminals of the sourcemeter were connected to the enameled wire, and the total resistance was detected while power was supplied. The mapping between temperature T and the change in total resistance R_tot allows indirect detection of temperature. In addition, the change in the LMPA's resistance in response to stimulation can be detected. The density, modulus of elasticity, thermal conductivity, and constant-pressure heat capacity of the PTC's preparation materials were measured (Fig. ). We considered the following scenario to evaluate the effect of gravity: a horizontally placed robot (l = 80 mm) with one end fixed, the Guider in a rigid state, and the Follower in a flexible state. Since only the rigid Guider maintains the shape of the robot, and the solid LMPA's elastic modulus E_LMPA is much larger than that of the silicone tube, the maximum deflection of the robot Y_g can be approximated as

$$Y_{\mathrm{g}} = \frac{(m_{\mathrm{G}} + m_{\mathrm{F}})\, g\, l^{3}}{8\, E_{\mathrm{LMPA}}\, I_{\mathrm{LMPA\text{-}G}}}$$

where m_G and m_F are the masses of the Guider and Follower, and the area moment of inertia of the Guider's LMPA core with diameter d can be calculated by I_LMPA-G = πd⁴/64. We calculated the ratio of flexural strength RFS in this scenario as well. The rigid Guider calculation ignores the effect of the silicone tube, while the flexible Follower calculation ignores the contribution of the LMPA and treats the liquid LMPA as an empty space:

$$RFS = \frac{E_{\mathrm{LMPA}}\, I_{\mathrm{LMPA\text{-}G}}}{E_{\mathrm{sili}}\, I_{\mathrm{sili\text{-}F}}}$$

where E_sili is the elastic modulus of the silicone tube and I_sili-F is the area moment of inertia of the Follower's cross-section about its short axis. The heat transfer simulation was implemented in the commercial finite element analysis software COMSOL. We performed steady-state and transient analyses using the 2D model in Fig. to obtain the current-to-steady-temperature and current-to-response-time relationships. The LMPA was encapsulated by the silicone tube and placed in a fluid environment. Joule heat generated by the current flowing through a resistive heater heats the LMPA. Figure shows the cross-section of the 3D model in the steady-state analysis. The areas highlighted in green and red represent the Guider and Follower, respectively, surrounded by the fluid environment. After setting the temperature of the Guider or Follower, the other is heated due to heat conduction.
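A quick numerical reading of the two sizing metrics defined above can be sketched as follows. The material constants are rough placeholders rather than the measured values from the paper, so the printed numbers only illustrate the trend: deflection shrinks rapidly and RFS stays large as the core diameter grows.

```python
# Sketch: evaluating the PTC-sizing metrics Y_g and RFS from the equations
# above for candidate LMPA core diameters. All material constants are
# placeholder values, not the paper's measured properties.
import math

def sizing_metrics(d_core, l=0.08, mass_total=2e-3,
                   E_lmpa=3e9, E_sili=1e6, I_sili_F=5e-13):
    """d_core in metres; returns (max deflection Y_g in m, ratio RFS)."""
    I_lmpa_g = math.pi * d_core**4 / 64  # area moment of the solid core
    y_g = mass_total * 9.81 * l**3 / (8 * E_lmpa * I_lmpa_g)
    rfs = (E_lmpa * I_lmpa_g) / (E_sili * I_sili_F)
    return y_g, rfs

for d_mm in (0.5, 1.0, 1.5):
    y, r = sizing_metrics(d_mm * 1e-3)
    print(f"d = {d_mm} mm: deflection {y * 1e3:.1f} mm, RFS {r:.0f}")
```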
Based on the Euler–Bernoulli beam model and assuming a constant curvature, we can obtain the following analytical expression for the inclination angle θ of the Guider's tip with protruding length l (see Supplementary Materials for detailed calculations):

$$\theta = \frac{l B m}{E_{\mathrm{G}}^{\mathrm{f}} I_{\mathrm{G}}} \sin(\gamma - \theta)$$

where B and m are the norms of the magnetic flux density and the magnetic moment, γ represents the magnetic field inclination angle, $E_{\mathrm{G}}^{\mathrm{f}}$ is the equivalent Young's modulus of the flexible Guider, and $I_{\mathrm{G}}$ is the area moment of inertia of the Guider. The equation ignores the effect of gravity, but the long advance distance of the Guider in a single motion cycle leads to a non-negligible effect of gravity, and the parameter interval in which the effect of gravity can be neglected needs to be determined to control tip deformation accurately. A scenario was analyzed in which the influence of gravity was most prominent: the flexible Guider with diverse protruding lengths was placed horizontally as a cantilever beam under both a horizontal magnetic field and a vertical gravity field. The deformation of the Guider in response to an actuation magnetic field was numerically simulated in the commercial finite element analysis software ABAQUS/Standard using the FEA model proposed by Zhao et al., and the results were in good agreement with the experimental ones (Fig. ). As the summarized data (Fig. ) show, the Guider should only advance up to 40 mm in each motion cycle to minimize the interference of the gravitational field, where position errors within 10 mm are considered negligible. The 2D accessible area and tip inclination angle of the Guider's protruding segment (l ∈ , mm) at 30 mT magnetic flux density (γ ∈ [0, π] rad) can be obtained from the solution of the equation (Fig. ). The 3D accessible area is simply a revolution of the 2D accessible area, which can be obtained by rotating the actuation fields around the axes. Despite the high agreement between theory and experiment (Fig. ), the kinematic model has limitations because it does not consider the force generated by the magnetic gradient. Therefore, the magnetic field handle control has a higher priority in deployment. Unless otherwise specified, the above experiments were performed with the experimental equipment shown in Fig. . The equipment comprises three parts: (1) a magnetic actuation system, eight amplifiers (HEA-200C, Nanjing Foneng Technology Industry Co., Ltd.), and a control computer for generating the magnetic field; (2) an ultrasonic imaging device (E2, SonoScape, Inc.), a 4-DOF platform, and two cameras (acA2040-120um, Basler, Inc.) for imaging; and (3) an advancement unit (Fig. ) and a source meter system (Keithley 2636B, Tektronix, Inc.) for manipulating the robot. We built the magnetic actuation system for preliminary experiments in the laboratory. A sphere of 308 mm diameter can be placed in the device without mechanical collision, so experimental animals, such as rats, rabbits, and pigs, can be accommodated. In practice, the robot was propelled by the advancement unit and deflected by the magnetic field generated by the magnetic actuation system. The ultrasound probe anchored to the 4-DOF platform enabled localization in liquid environments. Since X-ray imaging equipment (IPET007, Shenzhen Bestcare Biotechnology Co., Ltd.)
was not integrated into the above experimental system, the two in vivo experiments based on X-ray localization were not performed in that system. As an alternative, we used a permanent magnet to generate the magnetic field and manually propelled the robot forward. The magnetic actuation system places eight electromagnets at the vertices of a cube (square hexahedron), oriented along its diagonals. The device can generate a uniform magnetic flux density of up to 43 mT in any direction within a 10 cm long cubic workspace with <10% inhomogeneity. The magnetic flux density B is controlled by regulating the current I_i in each electromagnet. The electromagnets all work in their linear regions. They are at a sufficient distance from one another to avoid mutual influence, so the magnetic flux density B can be calculated by the superposition law:

$$\mathbf{B} = \sum_{i} \tilde{\mathbf{B}}_{i}\, I_{i}$$

where $\tilde{\mathbf{B}}_{i}$ is the magnetic flux density generated per unit current excitation of the i-th electromagnet. After calibrating each electromagnet's per-unit-current flux density $\tilde{\mathbf{B}}_{i}$, the current I_i of each electromagnet can be calculated from the required magnetic flux density B. The animal model was a 15 kg male Large White pig fasted for 12 h before the experiment (Experimental Animal Center of Harbin Veterinary Research Institute). General anesthesia was administered by intramuscular injection of 0.4 ml of Zoletil 50, and an indwelling needle was placed into the ear margin vein. With the aid of a laryngoscope, a silicone tube coated with hydrogel was inserted into the esophagus. An air pump (Elveflow OB1 MK3+) with flow control inflated the upper gastrointestinal tract. The robot was then inserted through the silicone tube into the esophagus and through the cardia into the stomach. The moist environment in vivo activated the hydrogel on the surface, reducing mucosal damage caused by friction. The magnetic field generated by a permanent magnet was used for magnetic navigation in vivo. The shape formed by the robot in situ was continuously imaged by X-ray imaging. At the end of the experiment, the robot was withdrawn. After commercial gastroscopy, the pig was placed in a separate enclosure for awakening. To evaluate the cytotoxicity of our continuum robot, live/dead cell staining was used to detect cell viability. Human umbilical vein endothelial (HUVEC) cells were cultured in Dulbecco's modified Eagle's medium (DMEM) (Gibco) supplemented with 10% (v/v) fetal bovine serum (Biological Industries). PTC with hydrogel layers, PTC without hydrogel layers, and LMPA were used as cytotoxicity assay objects and placed in the culture environment of the HUVEC cells, respectively. Cells cultured in medium only were used as positive controls. All cells were grown at 37 °C in a humidified incubator with 5% CO2. Cells around the test object were sampled every 24 h and stained with Calcein AM. The cells were washed once with PBS and observed under the fluorescence microscope at an excitation wavelength of 488 nm.

Supplementary Movie 1. Heating and cooling of PTC in water and air.
Supplementary Movie 2. FTL deployment of the magnetic steering continuum robot.
Supplementary Movie 3. A magnetic steering continuum robot climbs like a real bine.
Supplementary Movie 4. The robot passes through the unstructured environment to reach the destination.
Supplementary Movie 5. The robot forms various simple structures in situ.
Supplementary Movie 6. The formed lasso can be used for object capture.
Supplementary Movie 7. Segmented variable stiffness.
Supplementary Movie 8. The robot ties itself into a knot.
Supplementary Movie 9. Magnetic navigation with US imaging in a phantom aortic arch.
Supplementary Movie 10. A hook formed by the robot can be used to remove foreign bodies from the stomach.
Supplementary Movie 11. Demonstration of precision and accuracy.
Supplementary Movie 12. Magnetic navigation with endoscopic imaging for drug delivery to the gastric lesion.
Supplementary Movie 13. In-ex vivo experiments assisted by X-ray imaging.
Supplementary Movie 14. Comparison of 180XX Turns on Sidewall Force.
Reporting Summary
Source Data
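A minimal sketch of the two control computations described in the Methods above: allocating coil currents from the superposition law B = Σ B̃_i I_i, and solving the implicit tip-angle equation θ = (lBm/(E_G^f I_G)) sin(γ − θ). The calibration matrix, bending stiffness, and magnetic moment below are hypothetical placeholders, not the system's calibrated values:

```python
import numpy as np
from scipy.optimize import brentq

# (1) Current allocation: solve B_tilde @ I = B_target in the least-squares
#     sense. B_tilde (3 x 8) holds the per-unit-current field of each of the
#     8 electromagnets; these values are placeholders, not a real calibration.
rng = np.random.default_rng(0)
B_tilde = rng.normal(scale=2e-3, size=(3, 8))   # T per A, hypothetical
B_target = np.array([0.03, 0.0, 0.0])           # 30 mT along +x
# The 3x8 system is underdetermined; lstsq returns the minimum-norm currents.
I, *_ = np.linalg.lstsq(B_tilde, B_target, rcond=None)
print("coil currents (A):", np.round(I, 2))

# (2) Tip angle: theta - k*sin(gamma - theta) = 0, solved by bracketed
#     root-finding. Parameter values are illustrative assumptions.
l, B, m = 0.04, 0.03, 1e-3   # length (m), field (T), magnetic moment (A*m^2)
EI = 2e-5                    # N*m^2, assumed bending stiffness E_G^f * I_G
gamma = np.pi / 3            # field inclination angle (rad)
k = l * B * m / EI
# f(0) < 0 and f(pi) > 0 for gamma in [0, pi], so [0, pi] brackets the root.
theta = brentq(lambda t: t - k * np.sin(gamma - t), 0.0, np.pi)
print(f"tip inclination theta ~ {np.degrees(theta):.1f} deg")
```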
null
9c1d9235-1e33-4ba7-9356-5d52ad2e97f2
11380503
Microbiology[mh]
The genus Paenibacillus was initially established by Ash et al. in 1993 for the taxonomic classification of 16S rRNA group 3 bacilli. Subsequently, various species originally classified under the genus Bacillus were reassigned to the genus Paenibacillus. The type species of this genus is P. polymyxa. At present, Paenibacillus is categorized under the family Paenibacillaceae, which belongs to the phylum Bacillota. This genus currently includes 399 species, 304 of which are validly published with correct names (accession date: April 04, 2024; https://lpsn.dsmz.de/genus/paenibacillus ). Paenibacillus species have been obtained from various sources, including soil, air, sediment, eutrophic lake, hot spring, freshwater, mountain, rhizosphere, phyllosphere, plant, seed, food, gut, insect, necrotic wound, and fecal samples. This study characterized and determined the taxonomic status of strain dW9 T in the genus Paenibacillus, which was isolated from a soil sample collected from the Republic of Korea.

Chemotaxonomic Characterization
Cellular fatty acid compositions were assessed after growing strain dW9 T and its closest reference taxa on R2A agar at 25°C for 3 days. After the late log phase of growth, the biomass of all strains was harvested and used for extracting fatty acids. The extracted fatty acids were analyzed and identified using the MIDI protocol. Peptidoglycans were analyzed as described previously. Quinones and polar lipids were analyzed using freeze-dried cells in accordance with previously described methods. Polar lipid spots on TLC plates were visualized by spraying with various reagents.

Isolation of Strains
Strain dW9 T was isolated from a soil sample collected from Gyeongsangnam in the Republic of Korea (35°28'48.0''N 128°13'12.0''E). The strain was isolated by the standard dilution plating technique using R2A media (MB Cell, Republic of Korea). After plating, the Petri dishes were placed in an incubator at 25°C for 7 days. Subsequently, white colonies were selected and repeatedly streaked on R2A agar. Pure colonies of strain dW9 T were obtained and temporarily stored at 4°C. After the completion of taxonomic analyses, strain dW9 T was preserved in glycerol stocks at −80°C and was submitted to the Korean Collection for Type Cultures and the NITE Biological Resource Center.

16S rRNA Gene Sequence and Phylogenetic Analysis
Genomic DNA from strain dW9 T was extracted using the HiGene Genomic DNA Prep Kit (BioFact, South Korea). PCR amplification of the 16S rRNA gene was performed using forward (27F) and reverse (1492R) primers. The amplified PCR products were sequenced and analyzed as described previously. The closest phylogenetically related taxa were sorted by analyzing and comparing the 16S rRNA nucleotide sequences using the EzBioCloud server. Phylogenetic trees were constructed with MEGA X software using the maximum likelihood (ML), neighbor-joining (NJ), and maximum parsimony (MP) algorithms. The topologies of the phylogenetic trees were estimated using the bootstrap resampling method with 1,000 replications. The evolutionary distances were determined using Kimura's two-parameter model.

Genomic Analysis
The genome was sequenced by the Illumina MiSeq sequencing technique, and raw sequences were assembled using the Platanus-allee v. 2.2.2 and SPAdes v. 3.13.0 assembly tools. The quality of the genome sequence was assessed using the ContEst16S algorithm and the BLAST-N tool.
The annotation of the assembled genome sequence was performed using the Rapid Annotations using Subsystems Technology (RAST) server and the Prokaryotic Genome Annotation Pipeline (PGAP). The DNA G + C content was directly determined from the genome sequence data. Biosynthetic gene clusters (BGCs) for various secondary metabolites were explored using antiSMASH 5.0. The genomic similarities between strain dW9 T and reference species were calculated using the Genome-to-Genome Distance Calculator and the average nucleotide identity (ANI) tool. The phylogenomic tree was generated on the Type (Strain) Genome Server using FastME 2.1.6.1.

Morphological, Physiological, and Biochemical Analyses
Cellular morphologies of strain dW9 T were analyzed by transmission electron microscopy (Talos L120C; FEI) after culturing the strain on R2A agar at 25°C for 5 days. The Gram stain reaction was determined using the Color Gram 2 Kit (bioMérieux, France). Anaerobic growth, motility, catalase, and oxidase tests were performed as described previously. Endospores were examined by phase-contrast microscopy using a BX53-DIC microscope (Olympus). The temperature, pH, and NaCl ranges for growth were determined as described previously. Moreover, the ability to hydrolyze cellulose, casein, DNA, starch, and Tween 80 was assessed as illustrated previously. Various other biochemical, enzymatic, and carbon assimilation features were assessed using API ZYM, API 20NE, and API ID 32 GN kits (bioMérieux).

The length of the 16S rRNA gene nucleotide sequence of strain dW9 T was 1,447 bp. Moreover, 16S rRNA gene analysis revealed that strain dW9 T belonged to the genus Paenibacillus. Its closest phylogenetic neighbors were P. filicis S4 T (97.4%), P. chinjuensis WN9 T (97.3%), P. validus JCM 9077 T (97.1%), P. mucilaginosus VKPM B-7519 T (97.0%), P. puerhi SJY2 T (96.8%), and P. cremeus JC52 T (96.3%). The 16S rRNA gene sequence identities between strain dW9 T and all other phylogenetically related taxa were below the 98.7% cut-off value for species demarcation. This suggested that strain dW9 T could be considered a novel species in the genus Paenibacillus. Furthermore, the ML and NJ phylogenetic trees depicted that strain dW9 T formed a clade with P. puerhi SJY2 T ( and ), whereas the MP tree revealed the formation of a clade with P. cremeus JC52 T. Quality assessment confirmed that the genome sequence generated from strain dW9 T was valid and contamination-free. The genome size of strain dW9 T was 7,787,916 bp with a DNA G + C content of 51.3%. The genome sequence of strain dW9 T was assembled in 71 contigs with an N50 value of 243,884 bp and genome coverage of 136.0×. The annotated data obtained using RAST revealed 326 subsystem features in the genome of strain dW9 T. The strain also contained numerous BGCs encoding various secondary metabolites, such as linear azol(in)e-containing peptides, type III polyketide synthase, cyclic lactone autoinducer peptide, thiopeptide, phosphonate, proteusin, and terpene. The dDDH and ANI values between strain dW9 T and its closest phylogenetically related taxa ranged from 19.2% to 21.6% and 69.6% to 73.9%, respectively. The genome relatedness values between strain dW9 T and its reference species were below the threshold values [dDDH (70.0%) and ANI (95.0%)], suggesting that strain dW9 T was genomically different from its closest members. Furthermore, the phylogenomic tree revealed that strain dW9 T formed a clade with P. cremeus JC52 T. The cells of strain dW9 T were rod shaped and flagellated.
Moreover, strain dW9 T was motile. The catalase and nitrate reduction tests were positive, whereas the oxidase test was negative. Strain dW9 T could grow at a temperature of 20–37°C and a pH of 5.0–7.0 and could tolerate 2.0% (w/v) NaCl. It could hydrolyze esculin and Tween 80. β-Galactosidase and β-glucosidase activities were positive. The strain could assimilate D-glucose, L-arabinose, D-mannitol, gluconate, and D-melibiose. Other distinguishing features of strain dW9 T are presented in the species protologue and provided along with those of its reference species in . All enzyme activity and assimilation data obtained using API kits are provided in . The sole respiratory quinone in strain dW9 T was menaquinone (MK)-7. Diphosphatidylglycerol, phosphatidylglycerol, phosphatidylmethylethanolamine, and phosphatidylethanolamine were the predominant polar lipids. One unidentified polar lipid (L) was also observed. Both the respiratory quinone and polar lipid profiles of strain dW9 T closely resembled those of its related reference taxa. The diagnostic peptidoglycan diamino acid was identified as meso-diaminopimelic acid (DAP). The key fatty acids in strain dW9 T were anteiso-C 15:0 (75.6%) and iso-C 16:0 (6.8%). The key fatty acid profiles aligned with those of the closest reference taxa. However, the composition of minor fatty acids differed proportionally between strain dW9 T and the reference species.

Taxonomic Conclusion
Description of Paenibacillus gyeongsangnamensis sp. nov.
Paenibacillus gyeongsangnamensis sp. nov. (gyeong.sang.na.men'sis. N.L. masc. adj. gyeongsangnamensis, referring to Gyeongsangnam, a place in the Republic of Korea). Cells are aerobic, Gram-stain-positive, motile, endospore-forming, rod shaped (4.9–5.1 × 1.3–1.5 μm), and flagellated. Colonies on R2A agar are white, circular (4.6–5.4 mm in diameter), and convex. Cells grow at 20–37°C (optimum, 25°C), at pH 5.0–7.0 (optimum, 7.0), and at 0–2.0% NaCl (optimum without NaCl). Positive for catalase and nitrate reduction tests, and negative for oxidase activity. Hydrolyses Tween 80 and esculin, but not starch, DNA, gelatin, casein, or urea. Positive for alkaline phosphatase, esterase (C4), esterase lipase (C8), leucine arylamidase, acid phosphatase, naphthol-AS-BI-phosphohydrolase, β-galactosidase, α-glucosidase, β-glucosidase, α-mannosidase, and α-fucosidase. Assimilates D-glucose, L-arabinose, D-mannitol, gluconate, salicin, D-melibiose, L-histidine, 2-ketogluconate, D-ribose, inositol, D-sucrose, and glycogen. The key fatty acids are anteiso-C 15:0 and iso-C 16:0. The sole menaquinone is MK-7, the diagnostic peptidoglycan diamino acid is meso-DAP, and the major polar lipids are diphosphatidylglycerol, phosphatidylglycerol, phosphatidylmethylethanolamine, and phosphatidylethanolamine. The DNA G+C content of the type strain is 51.3%. The type strain, dW9 T (=KCTC 43431 T =NBRC 116022 T), was isolated from soil in the Republic of Korea (GPS coordinates: 35°28'48.0"N 128°13'12.0"E). The GenBank/EMBL/DDBJ accession numbers for the 16S rRNA gene sequence and genome sequence of strain dW9 T are ON573456 and JAQAGZ000000000, respectively. On the basis of the data presented here, we propose strain dW9 T as a novel species in the genus Paenibacillus with the name Paenibacillus gyeongsangnamensis sp. nov.
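The species-demarcation logic applied above reduces to comparing each metric against its published cut-off. The sketch below encodes that decision rule using the values reported for strain dW9 T in this study (worst-case maxima of the reported ranges); it is an illustration of the reasoning, not a tool used by the authors:

```python
# Published cut-offs for species demarcation (percent).
CUTOFFS = {"16S_identity": 98.7, "dDDH": 70.0, "ANI": 95.0}

# Maxima reported for dW9T versus its closest reference taxa (percent).
dw9t = {"16S_identity": 97.4, "dDDH": 21.6, "ANI": 73.9}

def is_putative_novel_species(values, cutoffs):
    """True if every metric falls below its species-demarcation cut-off."""
    return all(values[k] < cutoffs[k] for k in cutoffs)

print(is_putative_novel_species(dw9t, CUTOFFS))  # -> True
```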
Factors Influencing Implementation of the Commission on Cancer’s Breast Synoptic Operative Report (Alliance A20_Pilot9)
b485fa42-68cd-4fa4-b4b4-1a0e1514707e
11300652
Internal Medicine[mh]
From December 2021 to May 2022, we conducted in-depth semi-structured interviews with health care professionals involved in SOR implementation to investigate the factors influencing implementation of breast SOR.

Site Selection
We used purposive sampling to identify four CoC-accredited institutions that represented varied institution types, electronic medical record (EMR) vendors, and geographic regions of the country. Two sites were Comprehensive Community Cancer Programs (the largest CoC accreditation category), which implies the sites have 500 or more new cancer cases each year, and two sites were NCI-Designated Cancer Programs, to evaluate the hypothesis that these "high-resource" programs also will experience barriers to implementing the new surgical standards. We approached sites by email, selecting sites interested in participating and sites that had a champion willing to engage in the study.

Interview Participant Selection
Within participating sites, we recruited key informants involved in breast SOR implementation, including breast surgeons, cancer liaison physicians (CLP), cancer program administrators (including oncology data specialists), and information technology (IT) personnel from the health system. A CLP is a physician of any specialty appointed within each CoC program to fulfil the role of physician quality leader. We also used snowball sampling, asking interviewees to identify additional informants with relevant perspectives on SOR implementation at each site.

Data Collection
Participants consented to participate in the study. Two researchers (K.P. and S.L.) conducted the virtual interviews (Zoom Video Communications, Inc., San Jose, CA, USA). To develop the interview guide, we used two widely used implementation science determinant frameworks: the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF). The combined use of CFIR-TDF guided the assessment of implementation barriers and facilitators at the individual, organizational, and external levels, including the role of the CoC. The interviews were audio-recorded and transcribed verbatim using NVivo transcription (Lumivero, Denver, CO, USA). We de-identified transcripts for analysis.

Data Analysis
Two qualitative analysts (S.L. and K.N.), with guidance (K.P. and T.J.P.), coded and analyzed the data using NVivo (version 12). Deductive coding and template analysis were used based on a priori themes from the CFIR and TDF domains as our initial codebook (Supplemental file). The two coders (S.L. and K.N.) coded a portion of the transcripts independently, convened to discuss and clarify the codes, and resolved the discrepancies (with K.P.). If codes were found that did not fit into the a priori themes from CFIR and TDF, additional codes were allowed to emerge until the codebook was finalized. To measure the agreement between the coders, Cohen's kappa coefficient was queried after the initial coding. Negative Cohen's kappa values were reviewed, and discrepancies were resolved through discussion until the Cohen's kappa values were greater than zero. Then, S.L. and K.N. identified themes within each code. Disagreements on thematic synthesis were resolved through team discussion and review of the transcript and coded data. The themes were further consolidated into four main categories. Data were reported according to the Standards for Reporting Qualitative Research (SRQR) guidelines.
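The inter-coder agreement check described above is a standard Cohen's kappa computation. The sketch below shows the mechanics using scikit-learn; the excerpt labels are hypothetical illustrations, not the study's coding data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by the two analysts to 12 transcript excerpts
# against a CFIR/TDF-style codebook; labels are illustrative only.
coder_1 = ["knowledge", "resources", "resources", "self-efficacy", "knowledge",
           "resources", "knowledge", "self-efficacy", "resources", "knowledge",
           "self-efficacy", "resources"]
coder_2 = ["knowledge", "resources", "knowledge", "self-efficacy", "knowledge",
           "resources", "knowledge", "resources", "resources", "knowledge",
           "self-efficacy", "resources"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.2f}")
# Per the workflow above, codes yielding negative kappa values would be
# discussed and re-coded until agreement exceeds chance (kappa > 0).
```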
This study was approved by The Ohio State University institutional review board, and the study was conducted in accordance with the Declaration of Helsinki. We conducted 31 interviews with key informants from four CoC programs (Table ). The interviews included 10 surgeons, 4 CLPs, 11 cancer program administrators, and 6 IT personnel. Three of the sites (institutions B, C, and D) used Epic as the singular EMR system, and one site (institution A) had a combination of different EMR systems, including Cerner. Despite the CoC's expectation that programs would have developed an implementation plan by 2022, during the period of the interviews (December 2021 to May 2022), none of the sites had implemented breast SOR, and no surgeons had previously used breast SORs. The CoC program leaders were aware of the breast SOR accreditation requirement but had not made any explicit implementation decisions. Informants reported barriers to implementing breast SOR across all CFIR domains and several TDF constructs. Representative quotes are outlined in the Supplemental file. Two additional themes emerged: one a facilitator (non-breast SOR experience) and one a barrier (uncertainty surrounding the accreditation requirements). First, informants referred to their "non-breast SOR" experience with implementing colorectal accreditation standards and creating a registry as a facilitator of the implementation of breast SOR. This was considered an attribute different from the "knowledge," "self-efficacy," or "belief about capabilities" themes in TDF. Surgeon informants described their reading of synoptic pathology reports as another facilitating "non-breast SOR" experience. Second, in describing a barrier, the CoC program leaders expressed uncertainty surrounding the exact requirements for the breast SOR and the timeline for meeting them (because the CoC was continuing to change them) and thought that the CoC had not provided sufficient information about implementation and monitoring compliance. One informant said: "So initially it was for sure lack of direction, and I think confusion because the Commission on Cancer kept talking about like an interface for an API with our electronic medical records, but never really talked through what that was and what that looked like. So that in my mind, it was like, what does that mean?" We identified the following four key overarching categories of factors influencing breast SOR implementation: behavior, the CoC's reputation, resources, and flow of information (Figs. and ):

Surgeon Behavior and Workflow Integration Influenced Implementation Success
CoC's Reputation Enhanced Buy-in
Prioritization of Resources Was Necessary for Timely Adoption
Reliable, Multidirectional Information Flow Was Needed to Support Implementation

The unidirectional flow of information about SOR from the CoC, mainly through emails to the CLPs and communication with peer networks, influenced implementation. The CoC's major announcements typically were distributed first to the CLPs, but the surgeons' external networks also contributed as a source of information. It was the responsibility of the CLPs not only to distribute the information, but also to help design the implementation plan within their institutions. One cancer program administrator noted: ". . . The biggest is email communication [from the CoC]. We also get communication from our liaison physician; that's Dr.
[name of CLP].” Although webinars also were published by the CoC, one institution noted that some of the information was available only to those with CoC login access, which made it challenging to share the information with necessary institutional personnel involved with the implementation (e.g., IT personnel): “So that little brief video I found helpful to basically level set everybody: our chairman, all the physicians that participate, as well as the other members of Cancer Committee. So, we used that initial YouTube video that was, I believe I found that link on the CoC website, to just educate everybody as to what the expectation is for the next three years, roughly. And then from that, Dr. [name of CLP] and I met with our technology team, our IT team, and he had access that through data links that we shared our screen and showed them the whole video.” Informants felt they experienced great benefit networking with peer institutions who had already successfully implemented the breast SOR. According to IT personnel, opportunities were available through EMR vendors such as Epic and Cerner to be able to learn from peer institutions: “. . . If we can see what other folks have done, you know, we don’t have to recreate the wheel. . . . I think in our meetings with departments, one of the most common questions I hear is, well, what are other hospitals doing? . . . We hear that pretty frequently.” Surgeon behavior, workflow change, and willingness to change were barriers discussed by informants from all four participating institutions. Workflow changes involved switching from prose-type operative notes to a templated checklist note that incorporated the synoptic element. Key informants discussed multiple factors that influence change in behavior, including past experience with templates, perception of how easy it would be to change the existing workflow, and relationships with the person championing the breast SOR initiative at his or her institution. The following representative quotes reflect these factors: “I think that our EHR [electronic health record] systems and our ability to do them from a logistics standpoint is not—is not going to fit in immediately with our workflow. And so changing habits, especially, I think for surgeons is very difficult. For anybody it’s difficult, but I don’t know; I think surgeons are especially stubborn.” “We have talked about templating our notes in the past, and the reception for that was mixed because a lot of people have their notes in such a way that it works with their workflow, and they believe it’s more efficient.” “. . . All my notes are pre-templated and then I alter components of it that are different for each case. . . . I would just update that language to reflect the CoC's requirements. So, it would require me sitting there for maybe 30 minutes and updating my four templates.” “I guess at the institutional level through our cancer committee, just the people that I interact with a lot, a couple of them have been the ones who have reached out to me about [the SOR]. So, I would say . . . that’s had a positive influence because it's people that I'm working with all the time.” The CoC’s positive reputation circumvented a perceived lack of evidence supporting breast SOR and reinforced the need to implement SOR. Informants mentioned the potential for improving data collection and research in general. 
When asked about the evidence supporting the breast SOR, many discussed not knowing the explicit evidence on SOR improving quality of care: “I understand the value of discrete documentation and all of the downstream effects that make that beneficial. But in terms of clinical care, I can only assume. I can’t say that I’ve read, you know, because I don’t know that it’s implemented across the board to—to have evidence.” Despite not knowing the evidence to support breast SOR, informants discussed having positive impressions of the SOR because of the CoC’s reputation as an accreditation body that supports quality improvement: “They all know that, you know, like the recommendations are typically evidence based. So, there is respect that if the CoC is requiring certain things that there’s good cause for it, you know, so I think it actually ends up legitimizing some of the things that we want to change.” Resource-related requirements included not only having IT personnel with relevant experience, but also IT prioritization of the breast SOR initiative. Informants noted that the perspectives of institutional leaders did not always align with CoC initiatives because there were no explicit immediate monetary benefits to the organization obtaining CoC accreditation. Without explicit prioritization at the higher level, key informants noted needing to wait for the template to be built into the EMR because IT personnel are a shared resource throughout the entire institution: “You know, it was kind of a struggle . . . with the C-suite because [implementing the SOR] wasn’t identified as something that was very important, and it wasn’t clear to the C-suite that there was going to be a return on investment. . . . So, it’s understanding that the institution has a very concrete concept of return on investment, and it’s typically monetary. As physicians, I feel like we would do anything that’s the best for the patient. . . . We don't think about how much it costs. But the C-suite administrators think about the cost.” Additional personnel and time resource constraints were noted that were related to the COVID-19 pandemic and the necessity for hospitals to respond to the pandemic by redeploying personnel to tasks other than those to which they would normally be assigned. Given the competing demands for personnel and time resources due to the COVID-19 pandemic, some criticisms focused on the timing of the CoC’s launch of the new standards: “So, in the middle of that surge, how do you juggle, especially if you’re in a place that's canceling surgeries because of some of the things going on with COVID; how do you manage some of these educational things and making sure that they’re being done appropriately when half of your staff could be out based on so many things going on in the community? . . . It's just . . . a very interesting time to really be rolling out anything new on health care right now, especially something that may not have—would have been just as beneficial if you would have waited a little bit longer.” This study provides an in-depth exploration of factors that influence implementation of the new breast SOR accreditation standard. Informants provided rich insights into the multifaceted challenges associated with the implementation of SOR. Our examination of the findings underscored the critical role of anticipating surgeon resistance to workflow changes, capitalizing on CoC's reputation, anticipating resource needs, and facilitating the flow of information to support the successful integration of SOR. 
A substantial body of literature underscores the significance of standardized reporting as an important element in efforts to improve cancer care. In a U.S. study on surgery for rectal cancer, the use of SORs educated surgeons about the important elements of formal cancer resection and ensured that the necessary steps of a sound cancer operation occurred by acting as a checklist of reminders. Our study's findings align with previous research on CoC standard implementation that highlights the pivotal role of behavior change and the need for dedicated resources to implement new practices. A recent qualitative study of general surgeons in Iowa showed that they were unfamiliar with the CoC standards and expressed skepticism about the importance of the new surgical standards. Surgeons expressed concern about the organizational burden of maintaining CoC accreditation. The multifaceted nature of implementation challenges, spanning individual attitudes, institutional priorities, resource allocation, and communication strategies, is consistent with the literature on implementing other complex interventions. Within health care, there is a tendency to focus on "education" when new initiatives are underway. However, the recognition of behavior, the CoC's reputation, resources, and the flow of information as core determinants highlights that multi-dimensional strategies are needed to address implementation challenges. One of the main elements missing in the implementation discussion from all four institutions in this study, perhaps because of a focus on initial adoption and buy-in, was an explicit process for monitoring compliance with SOR use. Despite the great emphasis placed on designing and introducing the SOR into the surgeon's operative report, a concurrent audit-feedback and monitoring plan was not discussed as an explicit part of the organizations' initial implementation plans (even though programs need to demonstrate 80% compliance to meet accreditation standards). It was evident that during the early implementation and implementation-planning phases, the programs were focused primarily on achieving buy-in and initiating a successful roll-out rather than on integrating the workflow with an auditing mechanism and its effective use. A Cochrane review showed the benefit of audit feedback in increasing clinicians' adoption of target behaviors, especially when the baseline performance was low. The institution benefits when the source of feedback is a supervisor or colleague, the feedback is delivered more than once, the feedback is delivered in different formats (verbal and written), and the audit-feedback process includes both explicit targets and an action plan. Future studies evaluating audit feedback as an implementation strategy to augment breast SOR use are currently underway. Our findings should be interpreted with several caveats, including the potential for social desirability bias among the key informants participating in the interviews. The findings are from three institutions that use Epic, and the experiences elsewhere could be different. However, given the rigor of the frameworks used for analysis and the range of themes evaluated in this study, we believe the key elements elucidated have wide implications. This study also focused explicitly on the breast SOR standards and not on other surgical standards such as the colorectal, thoracic, or melanoma standards, which were beyond the scope of this study.
However, this study reported findings that are generalizable to other surgical standards (e.g., the limitation in IT resources and the CoC's reputation enhancing buy-in). Because we anticipate that the workflow changes for breast surgery likely differ from those for colorectal cancer or melanoma, these differences should be taken into account when designing implementation strategies. In conclusion, our study's comprehensive exploration of factors influencing implementation of breast SOR sheds light on the intricacies of adapting standardized reporting practices in cancer care. The use of the CFIR and TDF frameworks allowed for a comprehensive exploration of individual and organizational determinants, offering insights into the implementation process. Key insights gained from this qualitative assessment showed that CoC standards should be paired with explicit guidance for implementation tailored to the unique challenges associated with SOR implementation. This will require implementation research before the issuance of new standards. The findings of this study can be transformed into potential courses of action, such as implementation toolkits focused on guidance for changing clinical workflow, anticipating resource requirements, and capitalizing on the CoC's reputation. Further work evaluating implementation strategies that may help increase the uptake and decrease the burden of implementing the CoC's surgical standards is currently underway.

Data-Sharing Statement
Deidentified participant data and qualitative interview coding data will be available upon request sent to the PI ([email protected]) after publication. The data will be made available to researchers whose proposed use of the data has been approved and after the completion of a signed data access agreement.

Below is the link to the electronic supplementary material. Supplementary file 1 (DOCX 80 kb)
Fungal community characteristics of the last remaining habitat of three
929ab74d-4e4b-4892-bd80-0b156ff5255d
11494054
Microbiology[mh]
Orchidaceae is among the largest families of higher plants and is recognized as one of the most morphologically diverse plant families in nature. There are approximately 25,000 to 35,000 species of Orchidaceae distributed across 800 genera worldwide , . Orchid organs are highly specialized, diverse, and adaptable to various environments, being widely distributed across different terrestrial ecosystems, with the exception of polar regions and extreme arid deserts , . These plants possess specialized mycorrhizal communities and unique pollination mechanisms , rendering them extremely sensitive to habitat changes , . Furthermore, orchids hold significant ornamental value and medicinal properties , establishing them as “flagship” species within their ecological contexts . Currently, wild orchids are under significant pressure to survive due to severe anthropogenic activities and the effects of global climate change. All wild orchids worldwide are protected under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which explicitly prohibits their trade . The genus Paphiopedilum , belonging to the Orchidaceae family, is characterized by its large, slipper-shaped lips on the petals , . The shape of the flower resembles the slippers worn by noble European women during the Middle Ages, which has led to its common name, the slipper orchid. This group represents the most ornamental category within the Orchidaceae family and is highly regarded among international floral enthusiasts. As a result, this topic has attracted the attention of numerous scientific researchers and flower aficionados worldwide . Due to their ornamental value, the wild populations of Paphiopedilum are vulnerable to predatory mining and the smuggling trade . This group of plants encounters considerable challenges in reproducing in the wild, resulting in a rapid decline in both species diversity and population numbers, with some species potentially facing extinction . Currently, all Paphiopedilum species are listed on the CITES List and the IUCN Red List of endangered species. In China, they are also included in the State Key Wild Plant Protection List, underscoring their rarity among globally endangered plant groups . Southwest China serves as both the origin and differentiation center for Paphiopedilum , as well as a distribution hotspot. However, a recent survey has revealed that wild Paphiopedilum plants in Southwest China are experiencing severe damage, particularly species such as Paphiopedilum armeniacum ( P. armeniacum ), Paphiopedilum wenshanense ( P. wenshanense ), and Paphiopedilum emersonii ( P. emersonii ), which are among the most endangered . Currently, fewer than ten natural distribution sites for these three Paphiopedilum species have been identified, and their habitats are at significant risk of degradation and loss. These species are classified as having small populations on China’s wild plant protection list, which indicates an imminent risk of extinction in their natural environments. This classification underscores the urgency and priority of conservation efforts. Understanding the ecological habits of orchids is essential for effective scientific conservation efforts, particularly for the ex situ conservation of endangered species , . In their natural environment, orchid seeds are small, and their germ is underdeveloped and nutrient-poor; thus, seed germination relies on the assistance of fungi that provide critical resources such as carbon, nitrogen, and phosphorus . 
The entire growth cycle of orchids is contingent upon the presence of compatible mycorrhizal fungi in the soil, whether specialized or generalist. Given the significant dependence of Orchidaceae on mycorrhizal fungi, it can be inferred that their spatial distribution is largely influenced by the distribution of microorganisms in the habitat soil. Consequently, the composition of fungi in the habitat soil emerges as a key characteristic of the ecological habits of Orchidaceae plants , . Evidence suggests that alterations in mycorrhizal fungal communities, driven by habitat conditions, can directly or indirectly influence the distribution dynamics of terrestrial orchids , . Some introduction experiments indicate that orchid seeds can germinate outside areas where orchids are densely distributed, suggesting that these seeds may recruit fungi from the surrounding soil to establish symbiosis, germinate, and grow , . This process may gradually lead to the formation of a microbial community centered around the orchid roots. The varying demands for fungal community composition during different growth stages of orchids can alter the original composition of the fungal community. Therefore, the soil fungal community in the habitat may serve as the source of orchid mycorrhizal fungi, while the growth of orchids may also contribute to changes in the structure of the soil fungal community in their habitat , . Furthermore, some studies have demonstrated that areas adjacent to orchid growth are hotspots for fungal spore germination and the establishment of new plants . However, most current studies have concentrated on the endophytic fungi and rhizosphere fungal communities of orchids, while largely neglecting the fungal communities present in the surrounding soil – . This lack of understanding regarding the sources and processes involved in the construction of orchid mycorrhizal fungal communities hinders the development of simulated environments for ex situ conservation. Currently, the dwindling numbers of remaining wild populations and habitat degradation make it challenging to ensure the successful supplementation of seedlings, placing the survival of wild populations in jeopardy. In this context, it is both necessary and urgent to create an appropriate habitat for the growth of Paphiopedilum in a controlled artificial environment, thereby expanding its population size and ultimately facilitating its return to the wild. To achieve success, it is essential to thoroughly investigate the soil fungal communities in the last remaining habitats of the three Paphiopedilum species, including their spatial variations and responses to the biotic environment. Considering the unique growth environments of the three Paphiopedilum species, this study posits that these species share certain similarities in habitat preferences while also exhibiting distinct differences. Such differences contribute to variations in soil nutrients, soil fungal community structure, diversity, and the functional groups of soil fungi associated with each species. Uncovering these differences is crucial for understanding the ecological adaptability of the three Paphiopedilum species. In this study, P. armeniacum , P. wenshanense , and P. emersonii were selected as research subjects to comprehensively investigate the environmental characteristics of their remaining habitats. 
Soil samples were collected from the habitats of these three Paphiopedilum populations, and high-throughput sequencing methods were employed to analyze the soil fungi present. This analysis aimed to clarify the composition and structure of the fungal community, identify fungal functional groups, and assess fungal community diversity. The study revealed dynamic changes in soil fungi within the Paphiopedilum habitats and provided preliminary insights into how the fungal community structure responds to the biotic environment. This research addresses a gap in the understanding of habitat characteristics for the wild populations of these species. Additionally, the findings may serve as a reference for ex situ conservation, artificial breeding environment selection, and the cultivation of these plants.

Sample collection
In this study, we conducted a comprehensive investigation of the natural distribution of the three Paphiopedilum species, selecting three sites with optimal growth and no anthropogenic disturbances for each species (Fig. ). P. armeniacum is found in Nujiang Prefecture, Yunnan Province; P. wenshanense is located in Wenshan Prefecture, Yunnan Province; and P. emersonii is situated at the junction of Guizhou Province and Guangxi Province. Initially, we observed the last known habitats of the Paphiopedilum species, recording data on latitude, longitude, altitude, slope, aspect, and vegetation type. Sampling was conducted 10 cm outside the dense distribution areas of the Paphiopedilum species. We established the direction of potential seed dispersal as the sampling direction, extracting 20 g of soil in three directions at a depth of 5 cm. Soil fungal samples were collected from 27 habitats across the three species. Prior to sampling, the sampling environment was disinfected with 75% medical alcohol, and undecomposed litter was removed. To ensure the collection of representative soil samples, the soil was transferred to a 4 °C refrigerator within 12 h post-sampling to minimize DNA degradation. For meta-barcode analysis, a small quantity of soil (< 250 mg) was placed in BashingBead™ Lysis Tubes (Zymo Research, Cambridge Bioscience, Cambridge, UK) to preserve environmental DNA for subsequent extraction and amplification.

Soil nutrient analysis
Soil nutrient test samples were collected following the aforementioned sampling method, with 200 g of soil (from a depth of 0 to 10 cm) obtained from each habitat, placed in sampling bags, and returned to the laboratory. The soil physicochemical properties were analyzed according to Bao and are described briefly as follows. The pH was determined by the water:soil = 2.5:1 extraction pH meter (PHS-3G) method. The water content was determined by the 105 °C drying–weighing method. The organic carbon content was determined by the K2Cr2O7-H2SO4 external heating method. Total nitrogen was determined by the semimicro Kjeldahl method; total phosphorus was determined by the HClO4-H2SO4 digestion-molybdenum antimony colorimetric method. Total potassium was determined by NaOH melting-flame spectrophotometry. Available phosphorus was determined by a 0.03 mol/L NH4F-0.025 mol/L HCl extraction-molybdenum antimony anti-colorimetric method. Available potassium was extracted by 1 mol/L neutral NH4OAC and determined with a flame photometer. Ammonium nitrogen and nitrate nitrogen were determined by a 2 mol/L KCl extraction method (…, Shanxi, China).
Habitat soil fungal sequencing
DNA was extracted from the samples by the CTAB method, and the extracted DNA was detected by 1% agarose gel electrophoresis. PCR amplification was performed using barcode primers and high-fidelity enzymes. The following primer sequences were used: 5'-AAGCTCGTAGTTGAATTTCG-3' and 5'-CCCAACTATCCCTATTAATCAT-3'. The PCR products were mixed and detected by 2% agarose gel electrophoresis. The quantified PCR products were subjected to high-throughput sequencing, and then Cutadapt (version 1.9.1) was used to identify and remove the primer sequences. Subsequently, USEARCH (version 10) was used to merge the paired-end reads and remove the chimeras (UCHIME, version 8.1). Taxonomic annotation was then performed with a simple Bayes classifier, with UNITE as the reference database.

Data analysis
Soil nutrient difference analysis
The soil nutrients of the three Paphiopedilum species were statistically analyzed with SPSS 22.0. Univariate analysis of variance was used to analyze the means. When the variance analysis results of soil nutrients between different species were significant at the p < 0.05 level, the least significant difference (LSD) test was used to compare the mean values of the soil variables.

Analysis of fungal community structure in the habitat
FLASH (v.1.2.7) was used to merge the forward and reverse paired-end sequences from the MiSeq platform. A paired-end read set was obtained for each sample based on a unique barcode sequence. QIIME software was used to remove low-quality data, and 97% similarity was used as the standard to divide operational taxonomic units. Usearch was used to remove chimeras, and the RDP classifier Bayesian algorithm was used to perform taxonomic analysis on representative OTU sequences. Species classification was performed using the fungal database Unite8.0/Fungi (Unite Release 8.0) for comparison and identification. In this study, the abundance of fungal species was analyzed based on the Euclidean distance algorithm, and the FUNGuild online database platform was used to predict the function of soil fungi in the three Paphiopedilum species. The Chao1 index, ACE index, Shannon index, and Simpson index were used to represent alpha diversity, and a t test was used to determine significant differences. ANOSIM was used to verify the reliability of the species grouping. Beta diversity was analyzed by nonmetric multidimensional scaling (NMDS) calculated from the Bray‒Curtis distance, and the results were subjected to the Adonis nonparametric variance test. The above analysis was implemented using the BioCloud platform ( https://www.biocloud.net/ ). To effectively reveal the symbiotic network relationships of the soil fungi in the three Paphiopedilum habitats, the soil fungal species data of the Paphiopedilum plant habitats were filtered under the conditions of a relative abundance ≥ 0.1% and presence at a minimum of three sampling points. Spearman correlation was performed on the filtered data set, and a correlation coefficient (ρ) ≥ 0.5 and p < 0.01 were selected to construct the network. The main topological attributes of the network were calculated using the "igraph" package of R software, and the nodes were calculated and visualized with Gephi software. The Mantel test was used to explore the effects of habitat soil nutrients on fungal diversity. db-RDA was used to reveal the effects of habitat soil nutrients on soil fungal community changes. Spearman correlation analysis was used to reveal the correlation between habitat soil nutrients and dominant habitat soil fungal groups. The analysis and mapping were achieved using the "vegan" package, "ggcor" package, and "corrplot" package of R software.
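The alpha-diversity indices named above have closed-form definitions that are easy to compute from an OTU count table. The sketch below implements Shannon, Simpson, and (bias-corrected) Chao1 for a single sample; the count vector is hypothetical and the study itself used the BioCloud platform rather than this code:

```python
import numpy as np

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over nonzero OTU proportions."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2)."""
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

def chao1(counts):
    """Chao1 richness: S_obs plus a correction from singletons (F1) and
    doubletons (F2); bias-corrected form S_obs + F1*(F1-1)/(2*(F2+1))."""
    s_obs = int((counts > 0).sum())
    f1, f2 = int((counts == 1).sum()), int((counts == 2).sum())
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))

# Hypothetical OTU counts for one soil sample (not the study's data).
sample = np.array([120, 60, 30, 15, 8, 4, 2, 2, 1, 1, 1, 0, 0])
print(f"Shannon = {shannon(sample):.2f}, Simpson = {simpson(sample):.2f}, "
      f"Chao1 = {chao1(sample):.1f}")
```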
Relationships between habitat soil fungi and soil nutrients of three Paphiopedilum species
The correlation between soil nutrients and their effect on fungal alpha diversity was analyzed using the Mantel test. The results indicated (Fig. A) several significant correlations among the soil nutrient factors. Total nitrogen content in the soil was significantly positively correlated with organic carbon, alkali-hydrolyzable nitrogen, and available phosphorus, while total phosphorus exhibited a significant positive correlation with available phosphorus. Conversely, available potassium showed a significant negative correlation with total phosphorus, alkali-hydrolyzable nitrogen, and available phosphorus, and soil pH was significantly negatively correlated with total potassium. Furthermore, soil pH significantly influenced the Shannon and Simpson indices of soil fungi. Redundancy analysis was conducted with the soil nutrients, utilizing the ten most dominant fungal taxa as response variables. The results demonstrated that the first axis (Fig. B) accounted for 60.41% of the variance, while the second axis explained 20.29% of the variance, together accounting for 80.70% of the changes in the horizontal distribution of the dominant fungal groups, with total nitrogen, organic carbon, and alkali-hydrolyzed nitrogen identified as the most significant factors. Additionally, Spearman correlation analysis was employed to elucidate the associations between soil nutrients and dominant fungal groups (Fig. ). In the P. emersonii habitat, total potassium was significantly negatively correlated with Rozellomycota, Mortierellomycota, and Ascomycota, while it was significantly positively correlated with Basidiomycota. Alkaline nitrogen was significantly negatively correlated with unclassified fungi and Chytridiomycota, whereas pH was significantly positively correlated with both unclassified fungi and Chytridiomycota. In the P. armeniacum habitats, the Rozellomycota, Basidiomycota, and Ascomycota phyla exhibited significant correlations with all nutrient factors, with the exception of total potassium. The abundance of Rozellomycota was found to be negatively correlated with available potassium while simultaneously being positively correlated with total potassium. Conversely, Basidiomycota showed a positive correlation with available potassium but a negative correlation with the other nutrient factors. Similarly, the abundance of Ascomycota was negatively correlated with available potassium and positively correlated with the other nutrient factors. Additionally, a negative correlation was observed between unclassified fungi and total potassium. In the P. wenshanense habitat, unclassified fungi, Basidiomycota, Ascomycota, and Chytridiomycota were significantly correlated with total nitrogen, total phosphorus, organic carbon, alkali-hydrolyzed nitrogen, available phosphorus, and available potassium, while not showing significant correlations with the remaining factors. Notably, the abundance of Mortierellomycota was significantly negatively correlated with both organic carbon and available potassium, whereas Glomeromycota abundance was significantly negatively correlated with pH.

Habitat information and soil nutrient characteristics
A habitat survey of the three wild Paphiopedilum species revealed that P.
armeniacum grows in mountain shrubs at altitudes of approximately 1700–2000 m, typically on slopes ranging from 30° to 55° (Table ). P. emersonii is found in evergreen and deciduous broad-leaved mixed forests at altitudes of approximately 300–800 m in karst low-mountain hills, where it thrives on very steep slopes of 75° to 90°. P. wenshanense is located in shrubs on both normal and karst landforms, at altitudes of approximately 1500–1600 m. The three species share similar habitat preferences, favoring shady slopes, particularly those facing north and northwest, as well as negative terrain features such as tree roots, stone pits, and stone crevices. Soil nutrient analysis for the three Paphiopedilum species was conducted using one-way analysis of variance (Table ). The results indicated no significant differences in total nitrogen, total phosphorus, total potassium, organic matter, available potassium, or pH among the three species. However, the content of alkali-hydrolyzable nitrogen in the soil of P. emersonii was significantly greater than that in the soil of P. wenshanense. Additionally, the available phosphorus levels in the soils of P. emersonii and P. armeniacum were significantly higher than those found in the soil of P. wenshanense.

Species dilution curve and Venn diagram
The dilution curve effectively reflects the sequencing depth of the sample sequence. As illustrated in Fig. A, at a similarity threshold of 97%, the soil fungal dilution curves for the three Paphiopedilum species exhibited a decreasing trend, suggesting that the sample size adequately represents the soil fungal community within the plant habitat as a whole. A total of 2,161,515 pairs of reads were obtained from 27 fungal habitat samples. Following quality control and the splicing of paired-end reads, a total of 2,154,184 clean reads were generated. Each sample produced a minimum of 79,308 clean reads, with an average of 79,785 clean reads. High-throughput sequencing analysis was conducted based on the 97% similarity tag classification, serving as the operational taxonomic unit (OTU) standard, resulting in the identification of 1,068 OTUs. Among these, 452 unique OTUs were identified in the soil of P. emersonii (Fig. B), followed by P. wenshanense (n = 232) and P. armeniacum (n = 211). There were 65 OTUs shared between the habitat soils of P. wenshanense and P. armeniacum, 25 OTUs shared between P. armeniacum and P. emersonii, and 46 OTUs shared between P. emersonii and P. wenshanense. Furthermore, 37 common OTUs were identified across the three Paphiopedilum habitat soils. According to the species annotation presented in Table , a total of 336 fungal species were identified, belonging to 11 phyla, 30 classes, 74 orders, 157 families, and 272 genera. In the soil of P. emersonii, 230 fungal species were identified across 10 phyla, 26 classes, 62 orders, 127 families, and 202 genera. In the soil of P. armeniacum, 138 fungal species were identified, representing 9 phyla, 21 classes, 44 orders, 86 families, and 116 genera. Finally, in the soil of P. wenshanense, 145 fungal species were identified, comprising 7 phyla, 21 classes, 52 orders, 98 families, and 126 genera.

Fungi composition and functional group composition in the habitat soil of three Paphiopedilum
Figure illustrates the relative abundance of fungal groups in the habitat soil of the three Paphiopedilum species at both the phylum and genus levels. In the soil of P. wenshanense, Calcarisporiellomycota emerged as the dominant fungal group, whereas no distinct dominant group was identified in the soil of P. armeniacum. Conversely, the habitat of P.
Fungi composition and functional group composition in the habitat soil of three Paphiopedilum species
Figure illustrates the relative abundance of fungal groups in the habitat soil of three Paphiopedilum species at both the phylum and genus levels. In the soil of P. wenshanense, Calcarisporiellomycota emerged as the dominant fungal group, whereas no distinct dominant group was identified in the soil of P. armeniacum. Conversely, the habitat of P. emersonii exhibited dominant fungal groups, including Kickxellomycota, Entorrhizomycota, Olpidiomycota, and Rozellomycota. At the genus level, the habitat soil of P. wenshanense was characterized by the presence of unclassified Sordariomycetes, Sebacina, unclassified Basidiomycota, Boletus, unclassified Boletaceae, Archaeorhizomyces, and other prevalent fungi. In the soil fungal habitat of P. armeniacum, unclassified Thelephoraceae, Hygrocybe, unclassified Serendipitaceae, unclassified Ascomycota, Tomentella, and unclassified Agaricomycetes were identified. The soil of P. emersonii contained Acremonium, unclassified Fungi, unclassified Chaetothyriales, Apodus, unidentified fungi, unclassified Hypocreales, and various other soil fungi. Based on the ecological role of fungi in the environment, we conducted a functional classification and annotation of soil fungi associated with three Paphiopedilum species using the FUNGuild microecological tool. The functions of soil fungi can be categorized into three types based on their nutritional modes: saprophytic, symbiotic, and pathogenic nutrition. In the soils of the three Paphiopedilum species, saprophytic and symbiotic fungi were predominant (Fig. A). Notably, the relative abundance of these two nutritional types in the soil of P. armeniacum constituted 98%, while they represented over 80% in the habitats of P. wenshanense and P. emersonii. The fungal functional groups were further classified into ten categories based on their environmental resource absorption (Fig. B). These categories included Undefined Saprotroph, Ectomycorrhizal, Undefined Biotroph, Soil Saprotroph, Fungal Parasite, Wood Saprotroph, Plant Saprotroph, Animal Pathogen, Plant Pathogen, and Orchid Mycorrhizal. Among these, undefined saprophytic fungi and ectomycorrhizal fungi comprised a significant proportion of the soil fungal communities associated with the three Paphiopedilum species, playing crucial ecological roles. In the habitat soil of P. armeniacum, undefined saprophytic fungi, ectomycorrhizal fungi, and undefined biotrophic fungi dominated, accounting for over 95% of the relative abundance, along with some orchid mycorrhizal and animal pathogenic fungi. The primary fungal functional groups in the soil habitat of P. wenshanense included saprophytic fungi, animal pathogens, plant parasitic fungi, plant saprophytic fungi, wood saprophytic fungi, and plant pathogens, among other dominant groups. Similarly, the soil fungal functional groups associated with P. emersonii were diverse, with relatively comparable relative abundances.
Diversity analysis
The alpha diversity analysis of the habitat soil fungal communities associated with the three Paphiopedilum species revealed no significant differences in the ACE and Chao1 indices, indicating that the community abundance of habitat soil fungi was similar among the three species (Fig. ). However, the Simpson and Shannon indices for P. emersonii were significantly greater than those for both P. armeniacum and P. wenshanense. Furthermore, no significant differences were observed in the four diversity indices between P. armeniacum and P. wenshanense.
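For reference, the indices compared above have simple closed forms; the sketch below computes Shannon, Simpson (Gini-Simpson form), and bias-corrected Chao1 for one abundance vector. ACE is omitted for brevity, and the exact index variants used by the authors' pipeline are an assumption:

```python
# Alpha-diversity indices from a single sample's OTU abundance vector.
import numpy as np

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())          # H' = -sum(p_i ln p_i)

def simpson(counts):
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())            # Gini-Simpson: 1 - sum(p_i^2)

def chao1(counts):
    s_obs = int((counts > 0).sum())               # observed richness
    f1 = int((counts == 1).sum())                 # singletons
    f2 = int((counts == 2).sum())                 # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1)) # bias-corrected Chao1

counts = np.array([120, 30, 8, 5, 2, 1, 1, 0, 0])
print(shannon(counts), simpson(counts), chao1(counts))
```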
To clarify the overall differences in the soil fungal community structure among the three Paphiopedilum species, beta diversity was analyzed via nonmetric multidimensional scaling (NMDS) based on the Bray‒Curtis distance (based on fungal abundance and species presence or absence). Prior to this, to verify the reliability of species as a grouping unit, we used permutational MANOVA. The results showed that the interspecific differences were significantly greater than the intraspecific differences, indicating that the grouping results were reliable (Fig. ). The R values were 0.214 and 0.388 at the phylum and genus levels, respectively, indicating that the grouping method explained 21.4% and 38.8% of the sample differences, respectively. Figure presents the results of the NMDS analysis, with the ordination axis set to 2. The stress values of the soil fungal community associated with the three Paphiopedilum species, assessed at both the phylum and genus classification levels, were found to be less than 0.2, indicating that the results possess explanatory significance. Specifically, the stress values at the phylum and genus levels were 0.0071 and 0.159 (Fig. ), respectively, suggesting that the differences in the habitat soil fungal community structure among the three Paphiopedilum species are more pronounced at the phylum level.
Soil fungal co-occurrence network of three Paphiopedilum species
To investigate the potential interactions among soil fungi associated with three Paphiopedilum species and the alterations in their co-occurrence networks, we constructed an OTU-level co-occurrence network based on random matrix theory. A consistent threshold (r > 0.6, p < 0.01) was applied for network construction, allowing for a comparative analysis of the changes in the co-occurrence networks. As illustrated in Fig. , the habitat soil fungal co-occurrence network of P. emersonii comprised 74 nodes and 606 edges, with 93.56% of the correlations being positive and only 6.44% negative (Fig. A). In contrast, the co-occurrence network of soil fungi in the habitat of P. armeniacum contained 77 nodes and 479 edges, with positive correlations accounting for 70.56% and negative correlations for 29.44% (Fig. B). The soil fungal co-occurrence network in the habitat of P. wenshanense was the smallest, featuring only 53 nodes and 183 edges; here, positive correlations represented 87.43% while negative correlations constituted 12.57% (Fig. C). This study indicates that the co-occurrence networks of soil fungi associated with P. emersonii and P. armeniacum exhibit a high degree of modularity and a substantial proportion of positive interactions. This suggests that these fungal networks contain modules that are resilient to changes in the external environment. Such a symbiotic model may play a crucial role in maintaining community structure, enabling resistance to adverse environmental conditions, and facilitating the effective degradation of organic matter.
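To make the network-construction step concrete, here is a minimal Python sketch that builds a thresholded correlation network from a synthetic OTU table and tallies positive versus negative edges. The |r| > 0.6, p < 0.01 filter mirrors the stated threshold, but the Spearman correlation measure and the toy data are assumptions (the authors' construction was based on random matrix theory):

```python
# Thresholded OTU co-occurrence network: nodes, edges, % positive edges.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
otu = rng.poisson(5, (9, 80))          # 9 samples x 80 OTUs for one habitat

edges = []
for i, j in combinations(range(otu.shape[1]), 2):
    r, p = spearmanr(otu[:, i], otu[:, j])
    if abs(r) > 0.6 and p < 0.01:      # keep strong, significant correlations
        edges.append((i, j, r))

nodes = {i for i, j, _ in edges} | {j for _, j, _ in edges}
positive = sum(r > 0 for _, _, r in edges)
print(f"{len(nodes)} nodes, {len(edges)} edges, "
      f"{100 * positive / max(len(edges), 1):.1f}% positive")
```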
Discussion
Ex situ conservation, a critical strategy for mitigating the risk of orchid extinction, has consistently played a vital role in biodiversity conservation , . However, the efficacy of ex situ conservation is contingent upon a comprehensive understanding of the ecological habits of the species, as well as the ecological and biological characteristics of their living environments. This is particularly true for species such as orchids, which are highly reliant on fungal symbiosis; thus, elucidating the characteristics of their habitat soil environment is a prerequisite for successful ex situ conservation. While some studies have examined the habitats of wild orchids, most have focused on distribution patterns, habitat preferences, and habitat evaluations, often neglecting the specific habitat characteristics essential to orchids – . In this study, targeted sampling methods were employed to investigate the habitats of wild populations of Paphiopedilum species. The study revealed the habitat characteristics, soil nutrients, and soil fungal microbial community structures of three rare and endangered Paphiopedilum species, while also exploring the interrelationships among these factors. This information will be instrumental in constructing simulated environments and selecting sites for future field reintroductions in ex situ conservation efforts. The results indicated that the three species of Paphiopedilum preferred to thrive in low-lying areas characterized by high vegetation canopy density and ventilation. The relative humidity of these habitats was notably high, which enhanced the resilience of Paphiopedilum species to arid climatic conditions. However, this unfavorable terrain may also impede population dispersal. The physical and chemical properties of the soil across the three species of Paphiopedilum were largely similar, with the exception of significant differences in available nitrogen and phosphorus, potentially linked to the pronounced topographic heterogeneity in the region. A survey revealed that these species are confined to a narrow distribution within the mountainous areas of Southwest China. This region features both karst and non-karst landforms, a variety of soil-forming bedrock, an intricate distribution of soil types, and considerable climate variability, resulting in spatial and temporal heterogeneity . The complexity of this environmental backdrop has led to variations in soil nutrients within the habitats of Paphiopedilum species, which may contribute to significant differences in soil nutrient levels at various distribution points for the same species – . Furthermore, the soil fungal community serves as a crucial factor influencing plant growth and distribution, acting as a key intermediary that indirectly affects these processes. Its composition and structure are often dictated by prevailing environmental conditions , . In this study, the number of soil-specific operational taxonomic units (OTUs) in the habitat of P. emersonii was greater than that observed in the other two species of Paphiopedilum. This finding indicates a higher diversity of soil-specific fungi in the P. emersonii habitat, suggesting a relationship between habitat characteristics and fungal diversity. A survey revealed that the habitat of P. emersonii primarily consists of humus soil, which is rich in fungal populations. In contrast, P. wenshanense and P. armeniacum are found in high-elevation areas of Yunnan (1500–2000 m above sea level), where the subtropical mountain monsoon climate is characterized by prolonged sunshine and low relative humidity, factors that limit the diversity of soil fungi. The composition and structure of the fungal communities associated with the three Paphiopedilum species exhibited differences at both the phylum and genus levels, with distinct dominant fungal groups identified. These compositional differences may arise from two primary factors. First, the mycorrhizal characteristics of different Paphiopedilum species may selectively favor fungal groups that form symbiotic relationships, thereby altering the fungal community structure in the habitat soil through mechanisms such as alternation, antagonism, and competition during the growth process – . Second, there are notable differences in the growth environments of various Paphiopedilum species.
In particular, the environmental heterogeneity resulting from geographical distribution serves as a direct environmental factor influencing the composition of fungal communities in these habitats , . To adapt to their surroundings, fungi employ different nutritional strategies, which represent a survival mechanism that enables them to thrive under varying living conditions , . In this study, saprophytic fungi and ectomycorrhizal fungi were found to be dominant in the soil of the three Paphiopedilum species, thereby providing a substantial material basis and symbiotic fungal resources essential for plant survival. It is widely accepted that the diversity of fungal functional groups in the soil correlates with environmental complexity , , which may be linked to the specific habitat preferences of Paphiopedilum species. This group of plants thrives in environments characterized by high vegetation canopy density, good ventilation, and appropriate shading, often growing in rock joints or humus layers near the bases of other woody plants. Such habitats inherently foster a rich presence of saprophytic fungi and ectomycorrhizal fungi, playing a crucial ecological role in promoting plant growth and facilitating nutrient cycling – . Ectomycorrhizal groups may serve as the initial catalyst for the germination of Paphiopedilum seeds and represent the primary source of heterotrophic fungi during the early stages of Paphiopedilum species development. Saprophytic fungi enhance the cycling of habitat materials, improve soil texture, and increase both water and air permeability , , thereby creating nutritional conditions and habitat characteristics that are more favorable for the growth of Paphiopedilum species. Additionally, we observed a small presence of orchid mycorrhizal fungi in the habitat soil. Previous research has indicated that when the fungal community in the habitat soil provides the foundational material for orchid mycorrhizal fungi, these mycorrhizal fungi can establish themselves within the habitat soil alongside the growth of Paphiopedilum , further facilitating the germination of orchid seeds , . The alpha diversity index is a crucial metric for assessing community characteristics in ecology and biological sciences. This study demonstrated that the habitat soil fungal diversity indices of three Paphiopedilum species align with the observed number of fungal-specific operational taxonomic units (OTUs), indicating a correlation between fungal community diversity and the quantity of fungal-specific OTUs. Beta diversity highlights the variations in community structure across a gradient or the directional changes within a specific range of that gradient , . Furthermore, this study confirmed significant differences in the habitat soil fungal community structure among the three Paphiopedilum species. Despite the close genetic relationships among these species , variations in their historical geographical distributions and habitat conditions likely contribute to the observed differences in soil fungal community structures, reflecting distinct habitat selection preferences among the three Paphiopedilum species. A microbial co-occurrence network serves as a powerful tool for elucidating the coexistence relationships among microorganisms. This study demonstrated that the soil fungal habitats of the three Paphiopedilum species were predominantly characterized by positive symbiotic interactions, with a markedly small proportion of negative symbiotic interactions. 
This finding contrasts sharply with the results of biological co-occurrence network analyses conducted on other research subjects , . We hypothesize that the soil fungal communities in these habitats have reached a mature state, allowing for the harmonious coexistence of numerous fungi that collectively fulfill biological roles – . This is further evidenced by the diversity of soil fungal functional groups associated with the three Paphiopedilum species. Additionally, we propose that this model of positive effect-based microbial coexistence enhances the ability of soil microorganisms to withstand adverse environmental conditions, thereby providing essential nutrients for Paphiopedilum species in extreme environments . However, this hypothesis requires further validation. The fungal community, which plays a crucial role in the soil environment, is also influenced by soil nutrients – . This study elucidates the effects of habitat-related nutrient changes on the soil fungal community structure associated with three Paphiopedilum species. The Mantel test indicated that pH significantly influences the diversity of the soil fungal communities across these species. Soil pH affects diversity by directly impacting the survival, competition, growth, and reproductive efficiency of soil fungi . Furthermore, soil pH serves as an indirect indicator of variations in overall environmental conditions . Liu et al. established that soil pH is a significant predictor of soil fungal groups in Southwest China . Although this study revealed a direct correlation between soil pH values and nutrient availability, it could not confirm the direct or indirect effects on orchids. However, it can be inferred that soil pH influences the distribution and growth of orchids by affecting the efficiency of soil nutrient supply and the dynamic structure of soil fungi. In future ex situ conservation efforts for Paphiopedilum species, monitoring soil pH should be prioritized. Additionally, this study identified relationships between dominant fungal groups and soil nutrients, noting that the compositions of the dominant habitat soil fungal groups associated with the three Paphiopedilum species were complex. The identified fungal groups include Rozellomycota, Olpidiomycota, Mortierellomycota, Kickxellomycota, Glomeromycota, Entorrhizomycota, Chytridiomycota, Calcarisporiellomycota, Basidiomycota, and Ascomycota, among others. Most of these groups exhibited significant associations with soil nutrients, corroborating findings from previous research , . This suggests that fluctuations in soil properties can lead to changes in fungal community structure. Furthermore, fungi are crucial microorganisms in soil, playing a vital role in the material and energy cycles within soil systems, as well as in improving soil structure through the decomposition of organic matter and the release of nutrients. The habitat soil samples in this study were all collected from a specific orchid habitat. While the correlation between habitat soil nutrients and fungal communities was established, quantifying the influence of orchid species and habitat soil on the soil fungi remains a challenge and an area for further investigation in future research. Nonetheless, the findings regarding soil nutrients and fungal communities in these habitats can provide valuable scientific insights for future ex situ conservation measures.
Conclusion
The results of this study indicated that the three species of Paphiopedilum thrived in low-lying, shaded, and well-ventilated environments, characterized by highly heterogeneous habitat conditions. While most soil nutrients did not differ significantly among the species, notable variability in soil nutrients was evident at various distribution points within the same species. Analysis of the soil fungal community structure across the habitats revealed significant differences in community composition among the three Paphiopedilum species; specifically, the habitat of P. emersonii exhibited a greater number of unique operational taxonomic units (OTUs) and fungal species. Furthermore, the soil fungal functional groups associated with the three Paphiopedilum species were largely similar, primarily consisting of saprophytic, symbiotic, and pathotrophic fungi, including Undefined Saprotroph, Ectomycorrhizal, Undefined Biotroph, Soil Saprotroph, Fungal Parasite, Wood Saprotroph, and Plant Saprotroph fungi. Significant differences in soil fungal communities were noted among the three Paphiopedilum species, with the Simpson and Shannon indices for P. emersonii being significantly higher than those for the other two species. Additionally, microbial co-occurrence network analysis revealed that the symbiosis among soil fungi in the three Paphiopedilum species was predominantly positive, with P. emersonii demonstrating a higher degree of modularity in its symbiotic network. Soil pH significantly influenced the Shannon and Simpson indices across the three Paphiopedilum species, and soil nutrients accounted for 80.70% of the horizontal variation observed in the dominant soil fungal groups. The primary groups of soil fungi within each Paphiopedilum habitat were significantly correlated with soil nutrients, suggesting an interactive relationship between soil nutrients and soil fungi.
How does the rise in contraceptive usage predict pregnancy termination among young women in Kenya? An in-depth multilevel analysis
Africa has a higher continental average of younger individuals than the global population, and the United Nations estimates that about 70% of the sub-Saharan African (SSA) population is less than 30 years of age . Within Kenya, women between the ages of 15 and 24 make up approximately 15% of the total population of 47 million, translating to an estimated 7 million . Kenya's total fertility rate (TFR) of 3.4 is higher than the global average of 2.27 , indicating that young women in Kenya are highly fecund and particularly at risk of pregnancy, with 47% of such pregnancies being unwanted, according to the Kenya Demographic and Health Survey (KDHS) . In addition, evidence has shown that young individuals exhibit risky sexual behaviours , with the same study revealing that 59.4% of first-year university students in Kenya engaged in sexual activity, with many having their first sexual experience at a young age (< 20 years). However, only 32.5% reported consistent condom use . Consequently, a cross-country study conducted by Sedgh et al. in 2014 showed that the national teenage abortion rate in Kenya was 38 per 1000 girls aged 15–18, one of the highest in the world . This is coupled with a highly prevalent adolescent first birth rate , even though adolescent motherhood carries many negative ramifications for girls and their newborns . Another recent finding in Kenya showed that the majority of these adolescent girls and young women [AGYW] are aware of the dangers of unwanted pregnancy, including social shame, partner rejection, and disruption of their schooling . Also, they are aware of the different forms of birth control available but hold several misconceptions regarding these commodities , which affects their decision not to use various methods of contraception. Further complicating the situation is the discrimination and stigma faced by these young individuals in accessing contraceptive care, even when they are willing and enlightened to do so . The resultant effect is usually unplanned pregnancy, and many of them turn to unsafe abortion practices, which lead to 2600 deaths a year . Notwithstanding, the Kenyan government has been making considerable efforts to improve accessibility to contraceptive services, as reported by the Kenya Service Provision Assessment that about 90% of service providers offer them . Abortion laws are also becoming more flexible as a result of substantial changes since 2010 in Kenya , in part because health is recognised as a fundamental right in the country's constitution. However, the 1963 Penal Code still makes abortion illegal, which puts women and girls seeking care at risk of intimidation, false accusations, and legal repercussions . Nevertheless, the Kenyan High Court upheld abortion as a fundamental right in 2022 and ruled that it is unlawful to arrest and prosecute anyone who has or provides an abortion . AGYW in Kenya experience one of the highest teenage abortion rates globally, and even abortion rates generally , and various factors have been associated with pregnancy termination in the country . Many of these findings have centred around demographic and socioeconomic factors, parity status, and type of contraceptive method, especially for the full spectrum of women of reproductive ages . Even though the literature in the geographical context of the study is filled with evidence on pregnancy termination, these studies have several shortcomings.
Most importantly, there is a conspicuous lack of studies at the national level exploring pregnancy termination, particularly as it relates to AGYW. Additionally, given the rising prevalence of contraceptive use in the country, there has not been an attempt to explore how this is associated with the likelihood of pregnancy termination among this population group. Thus, this present study attempts to fill this gap in the literature. The connection between contraceptive use and pregnancy termination is often seen as potentially confusing, particularly when both show an increase simultaneously, especially in specific populations, contrary to conventional expectations . In societies like this, individuals perceive that their risk of unwanted pregnancy is low ; this is because the emergence of modern contraception, in particular, is associated with a destabilisation of high fertility preferences, especially in places experiencing fertility transition. Therefore, with the increase in contraceptive prevalence and the decline in fertility, a growing percentage of couples express a desire for fewer children or a significant delay before the next child. Consequently, there is a heightened exposure to the risk of unintended pregnancy, especially if there is no intentional effort to avoid unplanned pregnancy . Given this perspective, this study fills the gap in the literature by exploring the association between the contraceptive use experience of AGYW and their history of pregnancy termination. By addressing these issues, this study contributes to a deeper understanding of the intricate relationship between contraceptive use and pregnancy termination of young Kenyan women in the face of a plethora of initiatives on the reproductive choices and outcomes of young women, while also considering the recent legal developments around abortion as a fundamental right. The findings will inform policies and interventions aimed at enhancing reproductive rights and outcomes for this vulnerable demographic, ultimately contributing to their well-being and the broader public health landscape.
Study design and participants
This cross-sectional study extracted and analysed data from the 2022 KDHS. The KDHS is a national survey (implemented by the Kenya National Bureau of Statistics (KNBS) and Ministry of Health (MoH)) that collected data from all women of reproductive age, using a questionnaire, on their socio-demographic characteristics, maternal and child health, and other sexual and reproductive health-related indicators such as pregnancy termination, contraception, family planning, fertility, and intimate partner violence. However, our study focused on young women aged 15–24 years at the time of the survey who responded to the questions on pregnancy termination history, and this resulted in an analytical sample of 12,166 young women . The KDHS employed a two-stage cluster design sampling technique to select the primary sampling unit from which enumeration was achieved . The second stage was the listing of households to select enumeration areas to derive a representative sample for the country. More information about the study design and survey instruments can be found here . The DHS datasets employed in this study are publicly available on the Demographic and Health Survey (DHS) website and can be downloaded for free upon request via https://dhsprogram.com/data/available-datasets.cfm .
Study variables
Outcome variable
The outcome variable is a self-reported history of pregnancy termination; in the survey, the women were asked whether they had ever had a pregnancy terminated, and the responses were coded "Yes = 1" if the respondent had ever terminated a pregnancy and "No = 0" if the respondent had never terminated a pregnancy.
Explanatory variables
The explanatory variable in this study is the respondents' contraceptive use history, i.e., whether they had ever used anything to avoid getting pregnant. The responses included "not using" and use of various short-acting, long-acting, and permanent methods. For the study, we measured contraceptive use as "Yes = 1" for those who had ever used any method to avoid getting pregnant, while those who reported never using a method were coded as "No = 0".
Covariates
Covariates for this study were selected based on variables that showed association at the bivariate level and on other similar studies in the literature . These include age, age at first marriage, women's highest level of education, partner's highest level of education, type of place of residence, parity, type of marriage, work status, level of exposure to mass media, household wealth index, and the sex of the household head. The ages of the young women were coded as 15–19 and 20–24, while the highest level of education was a categorical variable: no education, primary, secondary, and higher. We categorised marital status into never married, married/cohabiting, and previously married. Parity was also included as a covariate with categories ranging from one child to four or more; the household wealth index was used as conceptualised in the DHS survey (Poorest, Poorer, Middle, Richer, and Richest). At the household level, we included the sex of the household head with two categories: male and female. The type of place of residence had two categories, rural and urban, while for community-level socioeconomic disadvantage, we developed tertiles ranging from least disadvantaged to most disadvantaged based on the individual socioeconomic measure.
Statistical analyses
The data analysis was conducted at three levels: univariate, bivariate, and multivariate. At the univariate level, frequency and percentage distributions of all the study variables were reported. At the bivariate level, a chi-square test was used to assess the association between contraceptive use and the history of pregnancy termination, also providing the distribution of pregnancy termination history across the main explanatory variable and covariates. For the multivariate analysis, multilevel regression modelling with fixed and random effects was employed due to its ability to handle hierarchical data structures and account for multiple levels of influence on the outcome variable . Five models were fitted, including the null model (Model 0), which assessed variations in pregnancy termination history across communities without adjusting for explanatory variables or covariates. Model 1 estimated the association between contraceptive use and pregnancy termination history, while Model 2 introduced covariates. Model 3 included contraceptive use status, household-level, and community-level variables. The final model (Model 4) accounted for the main explanatory variable along with individual, household, and community-level covariates. The fixed effects of the models were reported using adjusted Odds Ratios (aOR) with 95% confidence intervals. Random effects were evaluated using the Intra-Cluster Correlation Coefficient (ICC) , which quantifies the proportion of total variance in pregnancy termination attributable to community-level clustering . The ICC was calculated as the ratio of the variance of interest (community-level) to the total variance (variance of interest plus residual variance), indicating the extent of clustering in the data. Higher ICC values suggest substantial clustering effects, underscoring the importance of multilevel modelling in studies with community-based characteristics . To account for potential biases, survey sample weights were applied to correct for non-response and under-sampling, while missing values were excluded from the analysis. All statistical analyses were performed using Stata version 14.1 . These methodological considerations ensure robustness and reliability in addressing the study objectives.
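As a worked check on the ICC arithmetic: the community-level variances are reported on the logit scale, and the ICCs in the results are consistent with the standard latent-variable convention in which the individual-level residual variance of a logistic model is fixed at π²/3 ≈ 3.29. That convention is our assumption; the text states only "variance of interest over total variance". A minimal sketch:

```python
# ICC for a multilevel logistic model, assuming the latent-variable residual
# variance pi^2 / 3; the input variances are those reported in the results.
import math

def icc_logistic(cluster_var):
    residual = math.pi ** 2 / 3          # ~3.29 on the logit scale
    return cluster_var / (cluster_var + residual)

print(round(icc_logistic(0.32), 2))      # null model: 0.09
print(round(icc_logistic(0.18), 2))      # full model: 0.05
```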
Results
Table shows the distribution of study variables and the percentage of young women reporting a history of pregnancy termination by explanatory variables. It can be reported that 29% of young women reported contraceptive use; about half (50.1%) are in the 15–19 age group; 56% of the young women have secondary education; 71% have never married; and 66% do not have a child. For the household wealth index, 23% and 17% are from the richest and poorest households, respectively. The analysis also showed that 61% of the young women reside in rural areas, 62% are from male-headed households, and 48% are from the least socioeconomically disadvantaged communities. The distribution of the history of pregnancy termination by contraceptive use history and other covariates is also shown in Table : 3% of young women not using contraceptives had ever terminated a pregnancy, compared to 6% among those using contraceptives. The study also found that the highest percentage of young women with a history of pregnancy termination is among those with no education (8%), those currently married/cohabiting (10%), those with four or more children (11%), those residing in urban areas (5%), and those residing in socioeconomically disadvantaged neighbourhoods (4%).
Random effects [measures of variation] results
The empty model [Model 0] revealed significant variation in the likelihood of pregnancy termination across communities [σ2 = 0.32, 95% CI 0.14–0.75] and indicated that 9% of the overall variance in pregnancy termination is attributable to inter-cluster variation in community characteristics [ICC = 0.09]. In Model 1, the variation in the probability of pregnancy termination remained similar (σ2 = 0.33, 95% CI 0.14–0.77). However, in Model 2, there was a drop in the overall variance in pregnancy termination attributable to inter-cluster variation (5%), indicating that the variation in pregnancy termination is largely attributable to differences in factors at the community level. In Model 3, the ICC increased to 7%, while it dropped to 5% in the full model, which also showed a drop in the variation in the probability of pregnancy termination with respect to the clustering of PSUs [σ2 = 0.18, 95% CI 0.04–0.85].
Fixed effects (measures of association) results
In Table , Model 4 is the complete model that shows the association between contraceptive use, individual and contextual level factors, and pregnancy termination among young women in Kenya. At the individual level, contraceptive use, age, highest level of education, marital status, and parity were associated with pregnancy termination.
In contrast, none of the community-level variables showed an association with pregnancy termination in the full model; however, household wealth, type of place of residence, and sex of household head showed associations in Model 3. The unadjusted model shows that young women who reported using contraceptives are more than two times more likely to report a history of pregnancy termination compared to non-users of contraceptives [aOR = 2.26; 95% CI: 1.86–2.75]; similar results emerged in Model 3 when we added household- and community-level variables [aOR = 2.27; 95% CI: 1.86–2.76]. In the complete model, it was found that young women who are using contraceptives are 3% more likely to terminate a pregnancy compared to those who are not using [aOR = 1.03; 95% CI: 0.80–1.26]. It was also found that young women aged 20–24 are more than two times significantly more likely to have terminated a pregnancy compared to those aged 15–19 [aOR = 2.69; 95% CI: 1.97–3.67]. Furthermore, the analysis showed that young women with secondary and higher education are 34% and 46% significantly less likely to have terminated a pregnancy, respectively, compared to young women with no education. Married/cohabiting and previously married young women are more than 13 times [aOR = 13.11; 95% CI: 9.29–18.50] and 10 times [aOR = 10.15; 95% CI: 6.31–16.33] more likely to terminate a pregnancy compared to those who have never been married. It was also found that higher parity is significantly associated with a lower likelihood of pregnancy termination compared to zero parity.
Discussion
The lack of contextual literature on the association between contraceptive use and pregnancy termination among young women in Kenya motivates this study, and we sought to establish a connection among young women in the context of Kenya, where the abortion death rate is significantly high. The results from the study show that there is a higher frequency of pregnancy termination among young women aged 20–24. This high prevalence implies that this group of young women has a higher likelihood of pregnancy termination compared to women aged 15–19. This confirms what previous studies have found about the low prevalence of contraceptive use among young people and the higher pregnancy termination rate among young women 20–24 years old in Kenya. We also found an association between contraceptive use and a history of pregnancy termination; specifically, young women who reported using contraceptives have a higher likelihood of pregnancy termination. This appears counterintuitive but, as earlier explained, such a parallel relationship between contraceptive use and pregnancy termination is a possibility in populations undergoing transition, or even pre-transition societies like Kenya, where individuals perceive that their risk of unwanted pregnancy is low . The association between contraceptive use and a history of pregnancy termination could be better contextualised within Kenya's demographic landscape: while fertility rates in Kenya have been steadily declining, teenage pregnancy rates remain relatively high, with geographically varied and often low median ages at first birth . This dynamic reflects a population in transition, where declining fertility coexists with traditional norms and behaviours . In such contexts, young women may perceive a low risk of unwanted pregnancy, leading to inconsistent or incorrect contraceptive use, which could contribute to higher rates of pregnancy termination .
Thus, as contraceptive prevalence rises and fertility starts to fall, an increasing proportion of couples want no more children (or want an appreciable delay before the next child), and exposure to the risk of unintended pregnancy also increases as a result . Additionally, contraceptive failure or inconsistent use may be key factors contributing to the high rates of pregnancy termination observed in this population . Furthermore, evidence from Kenya has reported high levels of sexual activity and non-use of contraceptives among young people, especially young women 20–24 years of age , which has warranted efforts at increasing the prevalence of contraceptive use among young people in the country and other countries in SSA . This finding implies that such efforts have not been achieving the goal of reducing unintended pregnancy among young women in Kenya. Previous studies have reported similar findings on the relationship between contraceptive use and history of pregnancy termination. For instance, a multi-country study in SSA found that women whose contraceptive needs have been met are more likely to report pregnancy termination. Just as we have mentioned, this relationship between contraceptive use and termination of pregnancy seems counterintuitive, but explanations that have also been offered centre around contraceptive failure, which is even stronger among women under the age of 30 . Cleland , in a study explaining the complex relationship between contraception and abortion, opined that accidental pregnancy while using a method is a possibility in about 30% of all unintended pregnancies. This can be explained by the fact that the likelihood of unintended pregnancy increases when desired family sizes decrease, which evidence has shown to be happening in Kenya, as fewer reproductive years are dedicated to planned pregnancies; meanwhile, the rise in contraceptive usage in recent years has lessened the risk of unintended pregnancies, although this impact has been tempered in various global regions due to a growing tendency to terminate such pregnancies . On this basis, the implications of a high prevalence of sexual intercourse without the use of contraceptive methods are evident, especially among young women who could also face the consequences of their actions, such as unsafe abortion , school drop-out , poor child spacing , intergenerational poverty , and preterm delivery . Beyond contraceptive use, other covariates along these themes of demographic and socioeconomic factors showed an association with the likelihood of pregnancy termination. For instance, young women with low levels of education have a higher likelihood of pregnancy termination experience, and married young women, or even those who reside in urban areas, exhibit a higher likelihood of pregnancy termination. These findings are consistent with what has been reported in previous studies among adolescent girls and young women in SSA . Our study set out to explain the association between contraceptive use and the history of pregnancy termination among adolescent girls and young women in Kenya. An association has been established, which confirms the counterintuitive argument in the literature. Hence, future studies in this context must explore the dynamics that play into this relationship. However, another plausible explanation could be that pregnancy termination may precede contraceptive use .
Women who have experienced a termination may be more likely to use contraception afterwards, either due to counselling or being prescribed contraceptives post-termination to prevent future unintended pregnancies . This aligns with findings from other studies showing that women who have given birth are often more likely to use contraception, as postpartum contraceptive counselling and prescriptions are standard practice in many healthcare settings . Thus, the apparent association between contraceptive use and pregnancy termination may reflect a sequence of events rather than a direct relationship.
Conclusion and recommendations
Our analysis established an association between contraceptive use history and pregnancy termination, which remains prevalent among young women in Kenya, particularly among those aged 20–24 years. Place of residence, age, wealth index, education level, and met need for children have been identified as key risk factors. Interventions to reduce pregnancy termination in Kenya should prioritise women aged 20–24, those with no formal education, urban residents, and women with met need for children. This study adds to the limited literature by examining the association between contraceptive use and pregnancy termination among young women in Kenya, where unsafe abortion rates are alarmingly high. Our findings indicate that young women who report using contraceptives are more likely to have experienced pregnancy termination. While this may appear counterintuitive, it reflects a broader demographic transition in Kenya. As fertility rates decline and contraceptive prevalence increases, more women are exposed to risks of unintended pregnancy due to contraceptive failure or inconsistent use, especially in populations that perceive a low risk of unwanted pregnancy. This dynamic is further complicated by high sexual activity and low contraceptive use among adolescents and young adults, implying that existing efforts to increase contraceptive prevalence may not adequately address unintended pregnancies among this group. Additionally, women with low education levels, urban residence, and marital status exhibited a higher likelihood of pregnancy termination, consistent with previous studies in sub-Saharan Africa. These findings underscore the importance of targeted interventions that consider socio-demographic factors, education campaigns, and improved access to high-quality contraceptive methods to reduce contraceptive failure rates. Finally, we emphasise the need for future research to explore the dynamics between contraceptive use and pregnancy termination in greater depth by conducting a qualitative or longitudinal study. Understanding these dynamics will enable the design of more effective strategies to address the complex relationship between contraception and pregnancy termination and ultimately reduce unsafe abortions and their consequences among young women in Kenya.
Strengths and limitations
This study benefits from the use of recent and reliable data derived from a well-representative, large sample size in Kenya, allowing the findings to be generalised to young women regarding contraceptive use and pregnancy termination within the country. However, due to the cross-sectional design of the KDHS data utilised, causal relationships cannot be established, as no longitudinal analysis was conducted.
Additionally, some important variables and time-related information that could provide a more comprehensive understanding of the causes of pregnancy termination among this population were unavailable in the KDHS dataset. Furthermore, the self-reported nature of pregnancy termination data may introduce recall bias, potentially leading to either under- or overestimation. Stigma and fear associated with disclosing pregnancy termination could also contribute to underreporting. Finally, it is essential to highlight that the DHS definition of pregnancy termination encompasses stillbirths and miscarriages, which may not align with the respondents' intentions, potentially impacting the interpretation of the findings.
Implication for research and policy
Factors associated with the history of contraceptive use and pregnancy termination, as identified in this study, will help health practitioners and policymakers to make informed decisions as regards proper family planning, especially for young women who have met their desired number of children. This is germane to avoiding the risks associated with the process of terminating a pregnancy, as well as averting both the long- and short-term consequences of unintended pregnancy and unsafe termination. This will help develop targeted intervention strategies for women aged 20–24 years, who are more prone to pregnancy termination due to low levels of education, place of residence, and low awareness of contraception.
Shift from Pro- to Anti-Inflammatory Phase in Pelvic Floor Muscles at Postpartum Matches Histological Signs of Regeneration in Multiparous Rabbits
Pelvic floor muscles (PFM) play a core role in defecation and micturition. PFM weakening underlies some of the most prevalent and debilitating pelvic floor disorders, such as pelvic organ prolapse (POP) and stress urinary incontinence (SUI). Aging and childbirth are often reported as the main risk factors that weaken PFM. Vaginal birth is considered the leading risk factor, because the passage of the fetal head is associated with exceptional force on, and overstretching of, the PFM . This may lead to their rupture or avulsion . Therefore, childbirth-induced PFM injuries imply neuromuscular and supportive impairments that are prone to the onset of urogynecological disorders like POP or SUI . In this regard, three-dimensional computer models and magnetic resonance imaging have been able to predict the overstretching of pelvic and perineal muscles, tendons or connective tissue, and nerves . Ultrasonographic data also support childbirth-related injuries of PFM . The rate of women diagnosed with puborectalis trauma increases four-fold from pregnancy to the first five days postpartum, matching the enlargement of the levator hiatus . Some of the latter morphometry defects in PFM, together with functional outcomes from neurophysiological assessments (i.e., evaluation of neuromuscular reflexes and muscle strength), are considered indicators of muscle damage . Following metabolic and physical insults, skeletal muscle degeneration and regeneration phases, which comprise orchestrated processes involving injured myofibers, fibroblasts, satellite cells, and infiltrated immune cells, among other cell types , lead to muscle recovery . Given that macrophage-driven mechanisms seem to condition muscle regeneration , addressing inflammation in PFM supports the exploration of cell-based therapies, along with the tailoring of biomaterials, to ameliorate pelvic floor dysfunctions . Muscle trauma implies an inflammatory response mediated by myeloid cells that is essential for subsequent muscle recovery . In the pro-inflammatory phase, neutrophils and M1 macrophages phagocytose cellular debris, expressing molecules like HLA-DR and releasing proteolytic enzymes, oxidative factors, and cytokines . M2 macrophages, identified by markers such as CD206, express anti-inflammatory cytokines among other molecules . The molecular signaling that characterizes the shift from pro-inflammatory M1 macrophages to anti-inflammatory M2 macrophages modulates muscle repair and could be considered an indicator of muscle regeneration . Whereas inflammation and macrophage influence in the hindlimb muscles have been commonly researched, these mechanisms and influences in PFM have been addressed more recently. In this regard, some PFM of animal species like rodents and rabbits have been used as study models. Simulated birth trauma in rats, comprising the introduction of a catheter balloon into the vagina and its filling to produce vaginal distention, caused morphometry defects, immune cell infiltration, and edema in the external urethral sphincter and levator ani muscles, as well as the pubococcygeus . Moreover, the leak point pressure, an indicator of a urethral deficit and poor urine continence, became lower, which resembles a model of SUI . More recent studies have reported that simulated birth trauma increases levels of IL-6, TNF⍺, and TNFR, suggesting ongoing inflammation in the urethra; exposure to more than one muscle trauma seems to affect them differentially .
Findings from rats support that adverse outcomes of vaginal distention depend on the evaluated pelvic floor muscle, which may be associated with the plastic adaptation of each individual during pregnancy and postpartum . The female rabbit is another well-suited organism to evaluate the anatomical organization and function of the pelvic floor and its PFM in reproductive contexts for biomedical interests . Our workgroup has focused on the bulbospongiosus (Bsm) and pubococcygeus muscles (Pcm), given their contribution to urine storage and voiding . Findings from the Bsm and Pcm of pregnant, primiparous, and multiparous rabbits suggest that their differences in damage and recovery may underlie PFM functional plasticity at postpartum . In addressing representative markers for muscle degeneration and regeneration at postpartum, we have reported elsewhere that muscle regeneration processes seem to recover the Bsm faster than the Pcm in multiparous rabbits . Accordingly, we hypothesize that the M2 macrophages of the Bsm increase on day 20 postpartum, while the M1 macrophages increase for the Pcm in the same time frame. Overall, the present study aimed to evaluate the histological characteristics of, and the presence of M1 and M2 macrophages in, the Bsm and Pcm of young nulliparous and multiparous rabbits on postpartum days three and twenty.
2.1. Animals
Eighteen young chinchilla-breed female rabbits ( Oryctolagus cuniculus ) were housed in individual stainless-steel cages (50 × 60 × 40 cm) and kept at 22 ± 2 °C under artificial lighting conditions (L:D 16:8; lights on at 0600 h) in the vivarium of the Centro Tlaxcala de Biología de la Conducta (Universidad Autónoma de Tlaxcala). All rabbits had daily free access to pellet food and tap water. The procedures below followed the guidelines of, and were approved (Protocol ID 6310, approved on 25 July 2019) by, the Ethics Committee of the Instituto de Investigaciones Biomédicas-Universidad Nacional Autónoma de México. We allocated the rabbits into three groups: nulliparas (n = 6), and multiparas euthanized on postpartum day 3 (n = 6) or day 20 (n = 6). Multiparous rabbits had their first mating at six months of age; henceforth, rabbits mated again during the first postpartum days after the first, the second, and the third delivery . After the fourth delivery, the pups were euthanized to allow multiparous rabbits a hormonal status more similar to nulliparous rabbits . Age-matched nulliparas and multiparas were euthanized with an overdose of sodium pentobarbital (120 mg/kg; i.p.). Next, animals were placed supine to harvest the Bsm and Pcm, which were transferred into Bouin-Duboscq fixative for 24 h, as explained elsewhere .
2.2. Histology
After fixation, muscles were dehydrated in ascending ethanol concentrations (70, 80, 96, and 100%), cleared in xylene, and embedded in Paraplast X-tra (Sigma-Aldrich, St. Louis, MO, USA). We obtained 7 µm thick transverse muscle sections with a microtome (RM2135, Leica, Wetzlar, Germany) and placed them serially on poly-L-lysine coated slides.
Next, some sections were stained with hematoxylin and eosin (HE), while others were used for immunohistochemistry (see below). We measured the cross-sectional area and myonuclei of 50 myofibers from the medial regions of the Bsm and Pcm . For this sake, photographs were taken at 50×, and the entire muscle cross-section was reconstructed. The reconstruction was divided into sixteen quadrants to take two photos at 40× in each quadrant with an OLYMPUS camera (Tokyo, Japan) connected to a visible light microscope (Nikon ECLIPSE E600, Tokyo, Japan). Each field, in turn, was divided into twenty-four quadrants with the support of a grid, and the Axio Vision Rel. 4.6 (Carl Zeiss AG, Oberkochen, Germany) program was used to measure the myofiber cross-sectional area (CSA) of one fiber located in every fifth quadrant . Centralized and internalized myonuclei were manually counted in the sampled myofibers by one observer (ERB) who was blinded to the slide ID. The resulting data were averaged (per muscle per animal) and represented as the percentage of myonuclei per myofiber.
2.3. Immunohistochemistry
For addressing the inflammatory response in both the Pcm and Bsm, muscle sections were immunostained with anti-HLA-DR (ab166777, Abcam, Cambridge, UK) or anti-CD206 antibodies (MCA2155, Bio-Rad, Hercules, CA, USA) to identify M1 or M2 macrophages. We adapted a protocol reported elsewhere . Briefly, muscle sections were deparaffinized before retrieving antigens by incubating them with sodium citrate (pH 6). Sections were incubated in an H2O2 solution at room temperature for 30 min to quench endogenous peroxidases before blocking non-specific sites with 5% normal goat serum (Santa Cruz Biotechnology, Dallas, TX, USA) diluted in PBS for one hour. The sections were washed with PBS-Triton solution before incubation with anti-HLA-DR (1:200) and anti-CD206 (1:200) overnight at 4 °C. Spleen sections from nulliparous rabbits were used as the positive control; negative controls consisted of sections where the primary antibody was omitted. Sections were washed with PBS-Triton X-100 and incubated with biotinylated goat polyclonal anti-mouse IgG antibodies (1:250, sc-2309, Santa Cruz Biotechnology). We used the Vectastain ABC kit (Vector Labs, Newark, CA, USA) to develop the immunostaining. Sections were counter-stained with Mayer's hematoxylin, dehydrated in ascending concentrations of ethanol, cleared with xylene, and mounted with mounting medium (Entellan). Photographs were taken with an OLYMPUS camera connected to an optical microscope (Nikon ECLIPSE E600). For counting M1 and M2 macrophages, the entire muscle was reconstructed, followed by drawing a grid (3 × 4 quadrants) to sample alternate quadrants. Representative photographs were taken using a Ni-NU microscope (Nikon) coupled to a digital camera.
2.4. Statistical Analyses
The normality of data was assessed with Kolmogorov–Smirnov tests. Subsequently, we used Kruskal–Wallis followed by Dunn's multiple comparisons tests, or one-way ANOVA followed by Tukey tests, to determine significant differences ( p < 0.05) among the groups. Data are shown as median (minimum to maximum value) or mean ± S.E.M. unless otherwise stated. All the analyses were performed using the Prism 9 (GraphPad, Boston, MA, USA) program.
3. Results
Histological characteristics of the Bsm and Pcm were observed in H–E sections ( and ). For both muscles, nulliparous rabbits showed typical polygonal myofibers with peripheral nuclei, compacted by a well-delimited endo- and perimysium ( A,D and A,D). In high contrast, muscle sections from multiparas (M3 and M20 groups) exhibited signs of histopathological damage, including centralized and internalized myonuclei, focal necrosis, hyper-contracted fibers, and PMN cells ( B,C,E and B,C,E).
3.1. Muscle Morphometry
We analyzed HE-stained sections to measure the cross-sectional area and count the peripheral and central myonuclei.
The average Bsm myofiber CSA did not vary significantly among the groups (Kruskal–Wallis = 1.636, p = 0.4699; G); the same was true for the Pcm (F = 0.865, p = 0.441; G). Further analyses of the CSA distribution supported the latter findings ( H and H). We further estimated the percentage of sampled myofibers having centralized myonuclei as a histological indicator of muscle regeneration. For the Bsm, there were significant differences among the groups (Kruskal–Wallis = 9.732, p = 0.0028), and post hoc tests indicated a significant augmentation in the M20 vs. N group ( I). Similarly, the percentage of Pcm myofibers with centralized myonuclei varied between nulliparas and multiparas (Kruskal–Wallis = 10.05, p = 0.0024), which was prompted by the significant increase measured in the M20 compared to the N group ( p = 0.0047; I).
3.2. HLA-DR Immunostaining
We carried out anti-HLA-DR immunohistochemistry in the Bsm and Pcm of nulliparous and multiparous rabbits ( A–F). Spleen sections were used as positive controls; non-specific staining was assessed by incubating slides without the primary antibody ( G,H). The HLA-DR immunostaining was observed in the cytoplasm of cells resembling PMN cells and in strongly stained vacuole patterns ( B,E). Therefore, we estimated the number of HLA-DR ir-cells and HLA-DR ir-vacuoles. For the Bsm, we found that HLA-DR ir-cells changed significantly among the groups (Kruskal–Wallis = 12.76, p < 0.0001), prompted by a significant increase detected in comparing the M3 vs. N groups ( p = 0.0014) ( I). On the other hand, there were no significant ( p > 0.05) variations between the N vs. M20 ( p = 0.7625) and the M3 vs. M20 groups ( p = 0.0545). Such findings matched well with those for the Pcm, particularly the significant differences among the nulliparas and multiparas (Kruskal–Wallis = 13.90, p < 0.0001; J) and the significant increase ( p < 0.0006) of the HLA-DR ir-cells of the M3 vs. N group. No significant differences were detected when comparing the N vs. M20 ( p = 0.7625) and M3 vs. M20 groups ( p = 0.0545). Statistical tests indicated significant differences regarding the estimated number of HLA-DR ir-vacuoles for the Bsm (Kruskal–Wallis = 13.72, p < 0.0001; K) and Pcm sections among the groups (Kruskal–Wallis = 15.76, p < 0.0001; L). Post hoc tests indicated a significant increase in the HLA-DR ir-vacuoles for the M3 vs. N group for both the Bsm ( p = 0.0007) and Pcm ( p = 0.0002).
3.3. CD206 Immunostaining
We observed cytoplasmic CD206 immunostaining in the Bsm and Pcm, consistent with CD206 ir-cells observed in spleen sections (positive control); no staining was seen when the primary antibody was omitted . Remarkably, no CD206 ir-cells were detected in muscle sections from the N and M3 groups ( A,B,D,E). The number of CD206 ir-cells for the Bsm and Pcm changed significantly among the groups (Bsm: Kruskal–Wallis = 12.73, p = 0.0021; Pcm: Kruskal–Wallis = 16.13, p = 0.0002). Post hoc tests indicated that the number of CD206 ir-cells for the Bsm of M20 rabbits was significantly higher than in both the N and M3 rabbits ( p = 0.006 for both pairwise comparisons), as was the case for the Pcm ( p = 0.0015 for both comparisons) ( I,J).
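For readers who want to reproduce this style of group comparison outside Prism, a minimal Python sketch is shown below; the data values are invented, and the use of scipy plus the third-party scikit-posthocs package for Dunn's test is our assumption (the paper's analyses were run in Prism 9):

```python
# Kruskal-Wallis across three groups, then Dunn's post hoc test.
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

df = pd.DataFrame({
    "group": ["N"] * 6 + ["M3"] * 6 + ["M20"] * 6,
    "central_myonuclei_pct": [1, 0, 2, 1, 0, 1,      # nulliparous
                              3, 4, 2, 5, 3, 4,      # postpartum day 3
                              8, 9, 7, 10, 8, 9],    # postpartum day 20
})

groups = [g["central_myonuclei_pct"].values for _, g in df.groupby("group")]
h, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Pairwise Dunn tests, Bonferroni-adjusted
print(sp.posthoc_dunn(df, val_col="central_myonuclei_pct",
                      group_col="group", p_adjust="bonferroni"))
```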
Labor trauma impairs connective tissues and muscles, leading to the onset of pelvic floor disorders. Some cases of stress urinary incontinence (SUI) are transient, while others are long-lasting. Pelvic organ prolapse (POP) complications may necessitate surgical procedures. Therapies for both SUI and POP temporarily improve some of the pathological signs, whose course may also be influenced by factors related to menopause and aging, among others. Inflammation may also influence PFM recovery. The present findings demonstrate that centralized myonuclei in both the Bsm and Pcm increase by twenty days postpartum, matching significant increases in HLA-DR immunoreactive cells (M1 macrophages) on day three postpartum and in CD206 immunoreactive cells (M2 macrophages) on day twenty postpartum. We have previously reported that multiparity increases the Pcm myofiber CSA without modifying that of the Bsm. The present results agree with those findings for the Bsm but not for the Pcm, for which a higher CSA has been reported elsewhere. Similar to the findings herein, the average myofiber CSA of neither the Bsm nor the Pcm changed among nulliparous, late-pregnant, and primiparous rabbits in a previous study. The discrepancy among findings in multiparas may be explained by the different muscle regions sampled. In rats, simulated birth trauma affected the histology of the coccygeus and pubococcygeus muscles at the enthesis of each. In addition, the damage caused by eccentric exercise, which is expected during vaginal delivery, has been found to be greater in the proximal region of hindlimb muscles such as the rectus femoris. The present data, and other data reported elsewhere, were obtained from the medial regions of the Bsm and Pcm, where the content of myofibers is predominant. Therefore, signs of muscle injury such as focal necrosis, hyper-contracted myofibers, and PMN cell infiltration on postpartum days 3 and 20 could provide information about ongoing inflammation. In this regard, the findings herein are consistent with previous reports from rat studies. The infiltration of PMN cells in the Bsm and Pcm agrees with the histological observations and biochemical indicators of muscle damage (e.g., β-glucuronidase activity) reported for late-pregnant, primiparous, and multiparous rabbits. Remarkably, the histological modifications reported for rabbits subjected to reproductive challenges seem mild compared with those of rats subjected to multiple simulated birth traumas, which lack the hormonal milieu surrounding delivery. The latter could be explained in terms of PFM adaptations occurring at the end of pregnancy, as reported in rats. Overall, data from histological analyses suggest that childbirth-induced muscle damage is asymmetrical among myofibers, which may influence the further development of therapies based on neurostimulation or biomaterials. We used anti-HLA-DR and anti-CD206 as reliable markers of M1 and M2 macrophages, respectively. The presence of HLA-DR-positive M1 macrophages and/or mononuclear cell infiltration matches pro-inflammatory responses in striated muscles. TNF-alpha up-regulates HLA-DR expression in IFN-gamma-treated myoblasts, which may signal autophagy-mediated antigen presentation. Furthermore, HLA-DR-expressing T-helper cells could also be present in the infiltrates appearing after exercise-related muscle injury. We observed HLA-DR immunostaining in cells, easily seen at 600×, along with giant vacuoles observed at 100× magnification; such infiltrates may be functionally relevant, given the reported association between low muscle strength and inflammatory cells in the biceps brachii of people with polymyositis and dermatomyositis. Our present data provide supporting evidence of ongoing inflammation in both the Bsm and Pcm on postpartum day three. Such an increase was not observed in the muscles of multiparas on postpartum day 20. In contrast to HLA-DR immunostaining, CD206 immunostaining was almost absent in nulliparous and M3 rabbits.
M2 macrophages attenuate the M1 macrophage response and secrete molecules that enhance muscle recovery, including IL-10, TGF-β, and miR-501. Indeed, the latter observation matched a significant increase in centralized myonuclei and in the estimated number of M2 (CD206-positive) macrophages for the Pcm, suggesting active muscle regeneration, in agreement with the expression of muscle regeneration markers such as MyoD, MyoG, and desmin. In contrast, CD206-positive M2 macrophages measured on postpartum day 20 increased to a lesser extent in the Bsm than in the Pcm, likely indicating that regeneration of the Bsm occurred faster than that of the Pcm. Thus, a single delivery or multiple deliveries may shape the kinetics of the inflammatory response in the PFM (reflected in the presence of M1 and M2 macrophages) as a fundamental part of muscle injury and efficient regeneration. Such a notion agrees with the significant increase in myofibers showing centralized myonuclei in the Bsm and Pcm of multiparas on postpartum day 20. Pelvic floor muscle injuries during childbirth often lead to disorders that impair the quality of life of women. Muscle damage involves interactions between macrophages, satellite cells, and other cell types that regulate degeneration and regeneration processes in the PFM. These interactions have highlighted targets of interest for developing therapies to ameliorate deleterious urogynecological symptoms. In female rats, the administration of anti-inflammatory drugs has been found to impair PFM recovery after simulated birth injury, likely reflecting the roles of muscle M1 and M2 macrophages detected 3 and 7 days post-injury. Indeed, M2 macrophages make an important contribution to the muscle regeneration process by interacting with muscle satellite cells through anti-inflammatory cytokines and other molecules such as miR-501. A recent study reported that a pro-regenerative extracellular matrix hydrogel can modulate the immune response, myogenesis, and extracellular matrix remodeling, thus exerting a protective effect on the PFM after simulated birth injury. Indeed, injury differs among individual PFMs. In this context, our present findings extend the understanding of PFM disorders by evaluating the transition from M1 to M2 macrophages in PFMs of multiparous rabbits that contribute differentially to either urine voiding (e.g., the Bsm) or continence (e.g., the Pcm). In addition to being the first report in which pro- and anti-inflammatory states are evaluated postpartum in a physiological model such as multiparity, our findings could motivate further studies on the plasticity of immune responses across the reproductive experience of females. Limitations of the present work include the lack of investigation of molecules involved in muscle injury or regeneration (e.g., TNF-α, TGF-β) and of mature myosin isoforms, which would be informative regarding the extent of functional recovery. In contrast, a remarkable strength of this study is that it addresses pro- and anti-inflammatory processes in an animal model subjected to reproductive challenges that imply adjustments in hormone actions, such as those expected to occur in women postpartum. In conclusion, the shift from the pro- to the anti-inflammatory phase in the bulbospongiosus and pubococcygeus muscles of multiparous rabbits coincides with the appearance of centralized myonuclei, suggesting ongoing regeneration of both muscles.
Urgent dental care use in the North East and Cumbria: predicting repeat attendance
72a3dc0b-ff24-4f4d-b9dd-7e29d249504c
8837533
Dental[mh]
Just under 10% of the dentate population in England, Wales and Northern Ireland report experiencing acute dental pain, which is known to have a significant impact on everyday life. Despite this, almost one-third of the UK population are so-called 'problem-orientated attenders', only seeking care when they have acute dental pain or problems, often waiting over two months before doing so. As well as affecting their quality of life, this also puts them at risk of serious adverse events, such as unintentional paracetamol overdose and life-threatening infections. As problem-orientated attenders only seek care when they have acute dental pain, they frequently use drop-in services in secondary care, often on a repeated basis and for the same problem, as well as presenting to other healthcare professionals, including hospital (medical) emergency departments, general medical practitioners and other allied health professionals. They will also seek urgent or emergency dental treatment with primary care general dental practitioners; however, little is known about the rates or predictors of repeat attendance in primary care. It is important that research is carried out to understand problem-orientated dental attendance so that interventions can be developed to encourage regular dental attendance. Part of this understanding must include where these patients attend, to ensure that any interventions are sited in the appropriate places. The North East and Cumbria cover a population of just under three million people, with a slight predominance of women at 51%. The North East of England has a slightly different demographic to that of Cumbria, with Cumbria having a generally older population and more rural areas. Access to dental services also varies between the North East and Cumbria, with 2-4% of North East residents reporting being unable to access dental care, compared with 8% of Cumbria residents. A further 12% of those responding to the National GP Survey stated that they did not try to access care because they thought that they would not be able to get an appointment. In addition, previous commissioning reports have shown that Cumbria has higher utilisation rates of urgent dental care services than the North East. The aim of this study was to determine the period prevalence of repeat urgent and emergency care attendance in the North East and Cumbria and to identify any sociodemographic predictors of repeat attendance, to inform the development of interventions aimed at problem-orientated dental attendance. A request was made to the NHS Business Services Authority for data available on Band 1 Urgent Course of Treatment FP17 claims during the period of April 2013 to April 2019 for the Cumbria, Northumberland, Tyne and Wear and Durham, Darlington and Teesside legacy area teams. Data requested included: patient sex; ten-year age band; lower layer super output area (LSOA); and Index of Multiple Deprivation (IMD). To avoid disclosure of patient-identifiable information, the data were aggregated into the number of urgent care attendances before being made available to the authors for analysis. According to the UK Health Research Authority's processes, the aggregated and anonymous data used within this paper did not mandate ethical review or approval. IMD is the official measure of deprivation in the UK and treats deprivation as relating to more than just poverty.
IMD combines seven different domains: income; employment; health deprivation and disability; education, skills and training; crime; barriers to housing and services; and living environment. There are 32,844 LSOAs in England, each assigned a ranked IMD score, with 1 being the most deprived area and 32,844 being the least deprived. For the purposes of this study, IMD was considered in deciles and quintiles: quintile or decile 1 is the most deprived and quintile 5 or decile 10 the least deprived. IMD data were provided as part of the data request. To take into account the variation in population sizes within the areas studied, the prevalence of urgent care attendances was calculated using freely available census data during the year of interest for the relevant population. The period prevalence was calculated as a percentage of the population registered on the census, and therefore of all the population of interest who could theoretically access a dentist in that area. Population estimates were not used. LSOA was used for location-relevant outcomes, including mapping the data to Office for National Statistics urban/rural definitions and to middle layer super output area (MSOA) to allow mapping of the prevalence by area using the Public Health England Local Health Mapping Tool. A repeat urgent care user was defined as someone attending urgent care twice or more in one year, in order to capture frequent urgent care users, who are most likely to represent problem-orientated dental attenders. Data were considered year by year to identify any changes in trends over the six-year period. These were analysed using descriptive statistics and univariate and multivariable logistic regression modelling with interaction and likelihood ratio analysis using STATA v15 (StataCorp LLC, College Station, TX, USA). Logistic regression modelling was repeated with adjustments for any potential confounders, which were included in the final model where a change larger than 10% was observed.
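As a minimal illustration of the period prevalence and repeat-attender definitions above, the following Python sketch works through the arithmetic on hypothetical aggregated data; the column names and figures are placeholders rather than the NHS Business Services Authority extract.

```python
import pandas as pd

# One row per patient per year with the number of Band 1 urgent
# courses of treatment claimed (hypothetical values).
claims = pd.DataFrame({
    "patient_id":  ["a", "b", "c", "d"],
    "year":        [2015, 2015, 2015, 2015],
    "attendances": [1, 3, 2, 1],
})
census_population = 2_900_000  # census population for the area of interest

# Period prevalence: attenders as a percentage of the census population.
prevalence = 100 * len(claims) / census_population

# Repeat urgent care user: two or more attendances in one year.
claims["repeat_attender"] = claims["attendances"] >= 2
repeat_prevalence = 100 * claims["repeat_attender"].sum() / census_population

print(f"all attenders: {prevalence:.4f}%, repeat: {repeat_prevalence:.4f}%")
```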
Over the six-year period there were 601,432 patient attendances for urgent and emergency dental care, which equates to an overall period prevalence of 2.76% for the North East and Cumbria population. When considered as a prevalence, attendance was highest among women (population prevalence 3.3% vs 3.1% for men), those aged 30-39 years old and those from the most deprived areas of the North East ( ). Attendances increased in older age groups before decreasing from the seventh decade. The most common area for attendances was Copeland ( ). Attendance prevalence was higher in rural locations (4.6% compared with 3.5% for non-rural locations). Attendances decreased from 2013-2017 and then began to increase again in 2018 ( ). The majority of patients attended for one urgent or emergency care appointment over the six-year period (83.9%), the remainder attending for more than one urgent or emergency care appointment. Repeat attenders accounted for 97,155 (16.15%) patient attendances over the six-year period, equating to an overall period prevalence of 0.45%. Patients who were repeat attenders tended to be women (0.58% compared with 0.45% prevalence), from the most deprived areas of the North East and aged 30-39 years old ( ). The prevalence of repeat attenders by year is shown in , with a decrease seen from 2013-2017 before stabilising in 2018. Repeat attendance prevalence was also higher in rural areas (0.78% compared with 0.56%). The locations of repeat attenders are shown in . Given the difference in access to dental services between the North East and Cumbria, the prevalence between the two geographical areas was compared over time ( ). The prevalence of all and repeat patients attending for urgent dental care was consistently higher in Cumbria than in the North East. In univariate logistic regression modelling, repeat attenders were less likely to be men (OR 0.8, 95% CI: 0.80-0.82, p <0.0001) and less likely to be from urban areas of the North East and Cumbria (OR 0.9, 95% CI: 0.90-0.95, p <0.0001). In addition, repeat attenders were more likely to be from more deprived areas (OR 0.93, 95% CI: 0.93-0.94, p <0.001) ( ). Within multivariable regression modelling, a significant interaction was found between being a repeat attender and IMD quintile and rurality (p <0.00001). The relationship between IMD quintile and rurality for repeat attenders is shown in , whereby repeat attenders are less likely to be from the least deprived and urban areas of the North East (OR 0.89, 95% CI: 0.83-0.95, p <0.0001). Considering IMD quintile over time, people from the most deprived areas of the North East remained the majority of repeat attenders. The overall number of repeat attenders in each quintile decreased from 2013-2017; however, from 2017 the number of repeat attenders in quintiles 1-3 increased, while those in quintiles 4-5 continued to decrease ( ).
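The paper's models were fitted in STATA; purely as an illustration of the approach described above, a logistic regression with an IMD-by-rurality interaction and a likelihood ratio test for that interaction could look as follows in Python. The data are synthetic and the variable names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "repeat":       rng.integers(0, 2, n),   # repeat attender (outcome)
    "male":         rng.integers(0, 2, n),
    "urban":        rng.integers(0, 2, n),
    "imd_quintile": rng.integers(1, 6, n),   # 1 = most deprived
})

# Univariate model, then multivariable models with and without the
# IMD-by-rurality interaction; a likelihood ratio test compares the latter two.
m_uni  = smf.logit("repeat ~ male", data=df).fit(disp=False)
m_main = smf.logit("repeat ~ male + imd_quintile + urban", data=df).fit(disp=False)
m_full = smf.logit("repeat ~ male + imd_quintile * urban", data=df).fit(disp=False)

print(np.exp(m_full.params))        # odds ratios
lr = 2 * (m_full.llf - m_main.llf)  # LR statistic, 1 df for the interaction
print(f"LR test for interaction: p = {chi2.sf(lr, df=1):.4f}")
```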
Over a six-year period in the North East and Cumbria, the period prevalence of all urgent and emergency dental care attendances in primary dental care was 2.76%. In total, 16.15% of these attendances were repeat attendances, which equated to a 0.45% period prevalence. This is a lower repeat attendance rate than observed in secondary care, where around one-third of attenders are repeat attenders. The majority of the patients attending were in their fourth decade and from the most deprived areas of the North East and Cumbria, which is in keeping with the typical sociodemographic profile of patients attending secondary care urgent dental clinics and medical emergency departments in the same region, as well as nationally and internationally. However, in contrast to the demographic attending secondary urgent care services, the majority of patients were women. This may be because female patients are more likely to attend for routine dental care and, as such, to be undergoing an active course of treatment at the practice, making access to urgent care easier in comparison with those who are not undergoing active treatment. Unfortunately, a limitation of this study is that it is unknown what proportion of the patients attending for urgent and emergency dental care were undergoing an active course of treatment; the data may therefore partly reflect those attending with complications associated with treatment rather than those avoiding regular dental care. Predictors of being a repeat attender reflected the typical sociodemographic profile of all attendees, which included being a woman from rural and deprived areas. The odds of being a repeat attender varied in relation to deprivation depending on urban or rural status, with those living in the most deprived and rural areas having the highest odds of repeat attendance. Patients from deprived areas may be more likely to seek repeat urgent and emergency care due to a higher prevalence of dental disease and pain, fewer of them seeking regular preventive dental care, and poorer health literacy. Living in a rural area is also associated with a decreased likelihood of attending for regular preventive care, which may be partly explained by patients reporting oral health as a low priority. In addition, dental access is potentially more challenging in rural areas, which is known to be a problem in Cumbria compared with the North East and may explain the difference in attendances observed between the two geographical areas. Attending primary dental care services in a problem-orientated manner means that patients are more likely to continue to suffer from oral health problems and to fail to receive standard preventive dental care. This continues to put them at risk of adverse health events, as well as exerting a direct and indirect economic impact on the patient and wider society. For this reason, it is imperative that interventions are developed to try to encourage regular preventive dental care over and above problem-orientated dental care. In primary dental care in the North East and Cumbria, these interventions should therefore be targeted at patients residing in the most deprived and rural areas to ensure those who would benefit the most receive them. Although the current literature has been used to provide some explanation as to why these particular patient groups may be repeat attenders, the data analysis cannot provide causal evidence for the reasons behind repeat attendance. This warrants further research exploring the specific barriers within these patient groups. Changes in attendance patterns were noted over the time period studied, with a decrease in attendance from 2013-2017; repeat attendance then remained stable into 2018, while one-off urgent care attendance began to increase. In addition, all and repeat urgent care attendances were consistently higher in Cumbria than in the North East. This could indicate that service improvements or interventions aimed at repeated urgent and emergency dental attendance in primary care may need to be prioritised in Cumbria, whereas in the North East, interventions could be sited in other clinical settings where these patients are more likely to attend, such as secondary care urgent dental care clinics. The reasons why problem-orientated attenders choose to present repeatedly to secondary care rather than primary care are under-researched; however, they could include the cost of service and the availability of immediate walk-in treatment. Changes in attendance patterns by IMD were also noted over the six-year period, with an increase in repeat attenders from the more deprived quintiles of the North East and a decrease from the least deprived quintiles, indicating a potential increase in oral health inequalities across the region. It should be noted that the findings of this study are limited to attendees at urgent and emergency dental care before the COVID-19 pandemic, which has had a significant impact on dental care internationally. At the start of the pandemic in March 2020, all routine dental care ceased in the UK and patients were only able to access urgent and emergency dental care in dedicated hubs. As the pandemic progressed, access to dental care subsequently improved, with individual practices offering urgent and emergency care before transitioning to offer a mix of urgent and more routine dental care. Therefore, the majority of the UK population will have changed their attendance habits.
At this stage, it is uncertain what the long-term impact will be on engagement with routine dental care; as a result, the proportion of problem-orientated attenders could increase, and this will warrant further future research. In addition, this study examines a part of the UK where access to dental care, particularly in Cumbria, is known to be an issue, with correspondingly higher urgent dental care attendance. Findings may therefore be affected by these access issues and may not be representative of the rest of the UK. Further work is required in other areas to establish whether the predictors of repeat urgent dental care attendance are comparable elsewhere. This dataset also covered NHS dental care only and therefore may not represent patients accessing private dental care. In conclusion, across the North East and Cumbria during a six-year period, there were 601,432 patient attendances for urgent and emergency dental care, equating to an overall period prevalence of 2.76%. To put this another way, nearly 3 in every 100 people in the region needed urgent dental care during the period. Repeat attenders were more likely to be women and to be from the most deprived and rural areas; however, the prevalence of repeat attendance declined over the study period. Any interventions developed to promote regular dental care should therefore be targeted at patients residing in the most deprived and rural areas of the region.
Association of Oncologist Participation in Medicare’s Oncology Care Model With Patient Receipt of Novel Cancer Therapies
3dddfa0e-fa10-40eb-b95f-7d058c65fccd
9523492
Internal Medicine[mh]
Cancer is the second leading cause of death in the United States and accounted for more than $200 billion in annual spending in 2020. In an effort to improve care quality and reduce costs, the Centers for Medicare & Medicaid Services implemented a voluntary alternative payment model called the Oncology Care Model (OCM) in 2016. More than 3200 oncology clinicians in 200 practices volunteered to participate in the OCM. The program reimbursed practices with fixed per-patient monthly payments to enhance cancer coordination and delivery services and to improve care quality. Practices were eligible to receive incentive payments for meeting quality metrics and reducing costs below risk-adjusted thresholds for 6-month episodes of care. One concern that arose in the development of the OCM was how to account for the cost of expensive, novel anticancer drugs in the cost thresholds used to ascertain incentive payments. Target cost thresholds in the OCM were based on the costs borne by Medicare (all Part A and B services and, for Part D beneficiaries, a portion of low-income subsidies and catastrophic costs) during the baseline period of 2012 to 2014, which were risk adjusted, projected forward, and then discounted to impose cost-saving restraints. However, novel cancer therapies are increasingly expensive, and clinicians have little control over the costs of drugs, which compose more than 50% of cancer care costs, raising concerns that the cost thresholds may discourage use of novel therapies, especially given evidence that financial incentives can affect oncologist prescribing patterns. Thus, Medicare also included a novel therapy adjustment that offset up to 80% of the incremental spending on the use of novel therapies compared with baseline costs in its cost threshold calculations. Although modeling suggested that this adjustment was sufficient to account for the costs of novel therapies, early experiences raised concerns that novel therapies were generating costs that exceeded OCM spending targets. A study found that the OCM was associated with reduced use of expensive supportive care medications. Recently, a large difference-in-differences analysis of cancer care cost and use under the OCM found that the OCM was not associated with the use of novel chemotherapies for 5 cancers in the first 3 years of the OCM; the study did find an increase in the use of immunotherapies for lung cancer and melanoma (2.5 and 2.9 percentage point differences, respectively) but not for kidney cancer. However, that study calculated novel therapy use rates for the entire study population by each cancer type rather than for the subgroups eligible to receive novel therapies, possibly missing important differences in novel therapy use attributable to the OCM. By linking OCM participation data with Surveillance, Epidemiology, and End Results (SEER) Program data, it is possible to better identify patients who would have been candidates for specific types of novel therapy. To extend previous work and provide a granular analysis of the association between the OCM and patients' receipt of novel therapies, this study used SEER registry data linked to Medicare enrollment and claims data to identify patients likely to be eligible for novel therapies based on cancer histologic subtype, stage, and treatment history. The aim of the study was to assess the association between oncologists' participation in the OCM and patients' receipt of novel therapies using a difference-in-differences approach.
Study Design and Data
This retrospective, registry-based cohort study used SEER registry data linked to Medicare enrollment and fee-for-service claims (SEER-Medicare) from 2014 to 2018 to identify patients potentially eligible to receive 1 of 10 novel cancer therapies and ascertained whether those patients received the novel therapy or an alternative treatment. The study used a nonrandomized difference-in-differences design to evaluate the association between oncologists' participation in the OCM and patients' likelihood of receiving a novel cancer treatment in the period before and after the OCM started. The study excluded the Hawaii registry because complete data were not available at the time of the data request. The Dana-Farber Cancer Institute Institutional Review Board deemed the study exempt from review and waived the requirement for written informed consent because it did not involve human research.
Identification of Novel Therapies
To identify novel therapies for inclusion in this study that would have been available to patients both before and after the start of the OCM, the authors reviewed a list of all cancer therapies (administered orally or intravenously) that received US Food and Drug Administration approval for a new indication in the 18 months before the OCM's start on July 1, 2016 (drugs approved before this period were not designated as novel by the OCM). Therapies were included if (1) a cohort of patients eligible for the novel therapy could be identified using SEER-Medicare claims data and (2) the novel therapy had an existing alternative treatment (as described in contemporaneous National Comprehensive Cancer Network guidelines), such that there was a choice to be made between the novel vs alternative therapy that was observable in claims data. describes the 10 new drug indications for patient cohorts included in the study, and eTable 1 in the has additional details on inclusion and exclusion criteria and how sequentially approved novel therapies were incorporated into the cohorts.
Study Populations and Outcomes
All patients who met inclusion criteria for 1 of the novel drugs were assigned to the respective novel drug cohort (eg, non–small cell lung cancer: anaplastic lymphoma kinase positive; eTable 1 in the ). The study outcome was the first claim for the novel therapy vs an alternative therapy, as long as the claim date was within 2 years of the novel therapy's Food and Drug Administration approval for use in the applicable cohort. For instance, patients in the anaplastic lymphoma kinase–positive non–small cell lung cancer cohort, which was assessed for the receipt of second-line alectinib therapy, had to have a diagnosis of stage IIIB, IIIC, or IV non–small cell lung cancer; have received first-line crizotinib; and have received a second-line therapy within 2 years of alectinib's Food and Drug Administration approval of December 11, 2015. Outcomes were categorized as receipt of novel therapy if the patient received second-line therapy with alectinib or as receipt of alternative therapy if the patient received 1 of the chemotherapies listed in eTable 1 in the . In addition, the outcome date was assigned to the first claim for the outcome (ie, the date of the first claim for either the novel therapy or the alternative therapy). Patients also had to have continuous Medicare Parts A, B, and D coverage from 3 months before their diagnosis until the date of their outcome to be included in the analysis. Patients enrolled in Medicare Advantage plans were excluded because their treatment claims were not reliably available.
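As a sketch of the outcome assignment described above, the snippet below takes the first therapy claim per patient and classifies it as novel vs alternative, keeping only outcomes within 2 years of the novel drug's FDA approval. The claims table and drug lists are hypothetical stand-ins for the SEER-Medicare data.

```python
import pandas as pd

# Hypothetical claims: one row per therapy claim, already restricted to
# patients meeting the cohort's diagnosis and treatment-history criteria.
claims = pd.DataFrame({
    "patient_id": ["p1", "p1", "p2", "p2"],
    "claim_date": pd.to_datetime(
        ["2016-02-01", "2016-05-10", "2018-03-15", "2017-01-20"]),
    "drug": ["alectinib", "docetaxel", "docetaxel", "alectinib"],
})
NOVEL = {"alectinib"}                      # the cohort's novel therapy
FDA_APPROVAL = pd.Timestamp("2015-12-11")  # alectinib approval date

# Outcome = first claim for either the novel or an alternative therapy.
first = (claims.sort_values("claim_date")
               .groupby("patient_id", as_index=False)
               .first())
first["received_novel"] = first["drug"].isin(NOVEL)

# Keep only outcomes dated within 2 years of FDA approval.
outcomes = first[first["claim_date"] <= FDA_APPROVAL + pd.DateOffset(years=2)]
print(outcomes)
```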
Identification of OCM Participation
Oncologists participating in the OCM can bill for a Monthly Enhanced Oncology Services payment for each patient treated under the OCM. A list of OCM participants was generated by finding all National Provider Identifiers (NPIs) associated with these payment claims (Healthcare Common Procedure Coding System code G9678) for patients with a diagnosis of any of the cancers included in this study between the start of the OCM in July 2016 and December 2018. Oncology Care Model participation was categorized at the patient level based on whether the prescriber NPI for the first claim of the outcome treatment was in this encrypted list of oncologists participating in the OCM. In addition, a sensitivity analysis was performed using a list of NPIs (converted to encrypted NPIs by SEER-Medicare data managers [R.B.P. and J.E.B.]) for oncologists participating in the OCM who were identified in a previous publication by searching the websites of OCM-participating practices in late 2017.
Statistical Analysis
Data were analyzed between July 2021 and April 2022. χ2 and Kruskal-Wallis tests were used to compare unmatched patients in the OCM and non-OCM groups on measured characteristics, including age, sex, race, ethnicity, marital status, Charlson Comorbidity Index score, disease cohort, urbanicity, Census tract poverty rate, and treating oncologist. To adjust for observable baseline differences between the OCM and non-OCM groups, greedy matching up to a 3:1 ratio was used to match exactly on novel therapy cohort (ie, patients had to be assigned to the same novel therapy cohort described in ), outcome date in 6-month increments (eg, first half of 2016), and clinician specialty status. Previous research has found differences in the characteristics of clinicians participating vs not participating in the OCM, but such characteristics were not available for matching because of the encrypted NPIs. Instead, specialist status was inferred by examining all of the claims for each NPI in the study data and calculating the percentage of the total number of unique patients for each NPI that had an International Statistical Classification of Diseases and Related Health Problems, Tenth Revision code for each disease category (bladder, breast, gastrointestinal [colon and pancreas], melanoma, lung, and renal). Clinicians for whom 50% or more of patients fell in 1 disease category were categorized as specialists, and those without any category reaching 50% were categorized as generalists; clinicians with fewer than 5 patients were categorized as low volume.
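The specialist-status heuristic described above reduces to a small function; the sketch below is one possible reading of that rule, with hypothetical inputs (a clinician's unique patients per disease category).

```python
# A clinician is a "specialist" if 50% or more of their unique patients have
# codes for one disease category, "low volume" with fewer than 5 patients,
# and a "generalist" otherwise.
def classify_clinician(patients_by_category: dict[str, set[str]]) -> str:
    all_patients = set().union(*patients_by_category.values())
    if len(all_patients) < 5:
        return "low volume"
    shares = (len(p) / len(all_patients) for p in patients_by_category.values())
    return "specialist" if max(shares) >= 0.5 else "generalist"

# Example: 6 unique patients, 5 of them with lung cancer codes.
print(classify_clinician({
    "lung":   {"p1", "p2", "p3", "p4", "p5"},
    "breast": {"p5", "p6"},
}))  # -> "specialist"
```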
In addition to a descriptive summary of findings, a multivariable mixed-effects regression model with random effects for matching group and a logit link was used to calculate patient likelihood of receipt of a novel therapy compared with alternative therapies in the 2 years after Food and Drug Administration approval, with OCM participation as the exposure variable and the interaction of OCM participation with receipt of treatment after the start of the OCM on July 1, 2016, as the estimate of interest. Because matching was not expected to balance characteristics across all measured variables, the models also included the following covariates: categorical age, sex, race, Hispanic ethnicity, marital status, Census tract poverty rate, and urbanicity (metropolitan vs nonmetropolitan area), which were all reported in the SEER registry, plus Charlson Comorbidity Index score (0 indicates no comorbid conditions; a higher score indicates an increasing number or severity of comorbid conditions), novel therapy cohort, time period, and specialist status. Race and ethnicity were included to evaluate secondary hypotheses regarding disparities in access to novel therapies and were defined as coded by the SEER registry. Using the estimated parameters of the mixed-effects model, the between-group difference (ie, difference-in-differences) and corresponding 95% CI were derived. Inclusion of registry and state was not permitted by the data use agreement, to prevent inadvertent identification of OCM practices. The parallel baseline trends assumption was tested using the same model restricted to observations before the start of the OCM, including 3-month time periods and an interaction between these time periods and the OCM to test the trend. Patients with missing values for any of the variables included in the model were excluded from the analysis with the mixed-effects model (n <11 patients [<1% of the study sample]). A prespecified analysis to assess differences between Black and White patients in the between-group difference (ie, difference-in-difference-in-differences) used the same model as the primary analysis and added the following 3 interaction terms: race by OCM group, race by post-OCM start, and race by OCM group and post-OCM start. A 2-sided α < .05 was the threshold for statistical significance; a confidence coefficient of 0.95 was used to calculate 95% CIs. Analysis was performed using SAS Enterprise Guide, version 7.15 (SAS Institute Inc) and Stata/IC, version 15.1 (StataCorp LLC).
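As a simplified illustration of the difference-in-differences contrast described above, the sketch below fits a plain logistic model with an OCM-by-period interaction on synthetic data and derives the between-group difference on the percentage-point scale; the paper's actual model additionally includes matching-group random effects and the full covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "ocm":  rng.integers(0, 2, n),  # treated by an OCM-participating oncologist
    "post": rng.integers(0, 2, n),  # outcome date after July 1, 2016
})
logit_p = -0.7 + 0.3 * df["ocm"] + 0.3 * df["post"] + 0.15 * df["ocm"] * df["post"]
df["novel"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("novel ~ ocm * post", data=df).fit(disp=False)

# Predicted probabilities for the four OCM-by-period cells, then the
# difference-in-differences in percentage points.
cells = pd.DataFrame({"ocm": [0, 0, 1, 1], "post": [0, 1, 0, 1]})
p = model.predict(cells)
did = (p[3] - p[2]) - (p[1] - p[0])
print(f"difference-in-differences: {100 * did:.1f} percentage points")
```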
The unmatched study sample included 3310 patients, which decreased to 2839 patients (median [IQR] age, 72.7 [68.3-77.6] years; 1591 women [56.0%] and 1248 men [44.0%]) after matching, with 760 patients in the OCM group and 2079 in the non-OCM group. Matching increased the similarity between the groups for the matched variables of cohort (unmatched vs matched standardized difference, 0.16 vs 0.05), time period (unmatched vs matched standardized difference, 0.10 vs 0.04), and treating oncologist (unmatched vs matched standardized difference, 0.14 vs 0.04) (eTables 2 and 3 in the ). Among matched patients, 181 (6.4%) were Asian or Pacific Islander individuals, individuals of other race (including Alaska Native or American Indian or indicated as other in the SEER registry), or individuals of unknown race; 232 (8.2%) were Black individuals; and 2426 (85.5%) were White individuals. One hundred eighty-four individuals (6.5%) had Hispanic ethnicity and 2655 (93.5%) had non-Hispanic ethnicity. A total of 1740 patients (61.3%) were treated by oncology generalists ( ). Matched patients eligible to receive second-line immunotherapy for metastatic non–small cell lung cancer (n = 994 [35.0%]), second-line liposomal irinotecan for pancreatic cancer (n = 753 [26.5%]), and first-line palbociclib for metastatic hormone receptor–positive erb-B2 receptor tyrosine kinase 2 (ERBB2; formerly HER2)–negative breast cancer (n = 376 [13.2%]) composed the 3 largest novel therapy cohorts. Unmatched patients treated by clinicians participating in the OCM were more often White individuals (672 of 764 [88.0%] in the OCM group vs 2143 of 2546 [84.2%] in the non-OCM group; P = .04), more often lived in a metropolitan area (687 of 764 [89.9%] vs 2130 of 2546 [83.7%]; P < .001), and more often lived in a low-poverty Census tract (0% to <5% poverty rate, 232 of 764 [30.4%] vs 577 of 2546 [22.7%]; P < .001) (eTable 2 in the ). A total of 1819 clinicians had a median of 1 patient in the matched sample (IQR, 25%-75% of 1:1 matched sample). Patient receipt of novel therapy steadily increased over time ( ). Baseline trends according to OCM status differed significantly in the unmatched sample (odds ratio for pre-OCM time trend, 0.73; 95% CI, 0.57-0.93; P = .01) but not in the matched sample (odds ratio for time trend, 0.81; 95% CI, 0.62-1.06; P = .12). In the multivariable mixed-effects regression analysis, the calculated patient likelihood of receipt of novel therapy for those treated by oncologists not participating in the OCM increased from 33.2% before July 2016 to 40.1% after July 2016, compared with an increase from 39.9% to 50.3% over the same period for patients treated by participating oncologists, a nonsignificant difference-in-differences of 3.5 percentage points (95% CI, −3.7 to 10.7 percentage points; P = .34). When the novel drug cohorts were analyzed independently, OCM participation was associated with an increase in patient receipt of novel therapy only for second-line immunotherapy for lung cancer (17.4 percentage points; 95% CI, 4.8-30.0 percentage points; P = .007). The other cohorts showed no significant differences in receipt of novel therapy or had sample sizes that were too small to run the model. Among all cohorts, patients treated by oncologists participating in the OCM were 47% more likely to receive a novel therapy over the entire time period compared with those treated by nonparticipating oncologists (odds ratio, 1.47; 95% CI, 1.09-1.97; P = .01). Black patients were 39% less likely to receive novel therapies than White patients (odds ratio, 0.61; 95% CI, 0.42-0.89; P = .009). The likelihood of receipt of novel therapies among Black patients of clinicians participating in the OCM increased from 27.8% before OCM initiation to 54.1% after its implementation, vs an increase from 40.8% to 49.9% for comparable White patients. In addition, this group was the only group of Black patients to have higher rates of novel therapy receipt than comparable White patients (eTable 4 in the ). In contrast, the disparity in receipt of novel therapies for Black patients (vs White patients) of practices not participating in the OCM increased after OCM implementation. The difference-in-differences estimate for the OCM group was 23.0 percentage points (95% CI, −2.4 to 48.5 percentage points; P = .08) for Black patients and 1.8 percentage points (95% CI, −6.0 to 9.5 percentage points; P = .66) for White patients. A preplanned difference-in-difference-in-differences analysis showed no significant association of the OCM with the change in this Black/White racial disparity (difference-in-difference-in-differences, 21.2 percentage points; 95% CI, −5.5 to 48.0 percentage points; P = .12). Parallel trend and regression results are available in eAppendixes 1 through 4 in the .
A sensitivity analysis that used the alternative method of identifying oncologists participating in the OCM showed similar results (eAppendix 5 in the ). This study analyzed patient cohorts based on cancer site, stage, and treatment history using SEER-Medicare data to measure the association between receiving treatment from an oncologist participating in the OCM and receipt of novel cancer therapies. Across a range of cancer types, no observable association between physician participation in the OCM and the likelihood of patient receipt of novel therapies was found. However, patients treated in OCM-participating practices were more likely to receive novel therapies both before and after initiation of the OCM. After OCM implementation, the likelihood of receiving novel therapy for second-line treatment of advanced lung cancer was higher for patients treated at OCM-participating practices. Some single-practice reports suggested that patients who received standard-of-care treatment with novel therapies, including patients with lung cancer treated with immunotherapy, had health care spending that exceeded OCM-defined cost thresholds, which could theoretically alter physician prescribing of novel therapies. However, another study that used broader cohort definitions and ascertained eligibility for novel therapies based solely on diagnosis with a cancer for which a novel therapy is approved found either no association (for chemotherapy for 5 cancers and immunotherapy for 1 cancer) or an association limited to certain cancer types (for immunotherapy for 2 cancers). The current study used more rigid cohort definitions in identifying patients potentially eligible to receive novel therapies, but because of the smaller sample size, all chemotherapy and immunotherapy novel treatments were combined in 1 analysis, and no statistically significant differences were found in the receipt of novel therapies before and after implementation of the OCM. Despite the concerns raised specifically regarding patients with lung cancer treated with immunotherapy, this study found an increase in receipt of novel immunotherapy after the start of the OCM for patients treated by participating clinicians, echoing findings from Keating et al. In combination, the results of these 2 studies may reassure patients, clinicians, and policy makers that the OCM does not appear to discourage the use of novel therapies despite incentives to reduce cancer spending. Two factors may explain why a financial disincentive to prescribe novel therapies may not translate into a change in prescribing practices. First, the financial disincentive may be successfully attenuated by the novel therapy adjustment, outweighed by the oncologists' financial benefit from higher drug reimbursement of expensive novel therapies (for IV therapies), or outweighed by the clinical benefit to the patient. Of note, all OCM practices participated in the single-sided risk model (good performance is rewarded but poor performance is not penalized; no practices participated in 2-sided risk); loss aversion as part of 2-sided risk participation may have been a more effective disincentive, and it was avoided by all practices. The earlier study found a decrease in spending on supportive care medications, such as bone-modifying agents and growth factors, suggesting that oncologists may be responsive to this financial incentive if they believe that cheaper alternatives would not compromise patient care. Second, OCM performance payments and penalties are calculated retrospectively.
The current study evaluated novel therapies early in the OCM period, before practices received any feedback on cost performance. The OCM's retrospective performance measurement, which incorporates a practice's novel therapy use compared with that of all other practices, prevents practices from anticipating their performance and adjusting behavior, particularly early in the period after OCM implementation. However, Keating et al evaluated an extended time period through 2019 and found no association between OCM participation and novel therapy prescribing. These considerations and the results of this study need to inform the implementation and evaluation of the successor to the OCM, the Enhancing Oncology Model, which is expected to start in July 2023. Although the difference-in-differences approach showed that the OCM was not associated with a decrease in patient receipt of novel therapies as a whole, in adjusted analyses, patients treated by oncologists participating in the OCM were more likely to receive a novel therapy overall and more likely to receive immunotherapy for second-line treatment of lung cancer after the start of the OCM. To reduce confounding, this study matched patients on 3 patient and clinician characteristics and adjusted for many other observed variables, but encrypted NPIs and data use restrictions prevented matching on other unmeasured physician characteristics that may explain this finding. Previous studies have found that oncologists who voluntarily participated in the OCM had different characteristics than nonparticipating physicians and may be early adopters of new therapies or have different approaches to evaluating the value and utility of new treatments. The study also found that Black patients were much less likely to receive a novel therapy, a finding consistent with other studies describing racial disparities in access to novel therapies. Particularly for cancers for which novel therapies offer substantial improvements in outcomes (eg, immunotherapy for non–small cell lung cancer), further research is necessary to identify the underlying causes of persistent disparities in novel technology access and to test remedies for closing these gaps. Furthermore, the difference-in-difference-in-differences analysis to assess the association between the OCM and disparities in access to novel therapies showed a large increase in receipt of novel therapies for Black patients associated with the OCM, but the finding was not significant ( P = .12) in this underpowered, prespecified subanalysis. This finding raises the possibility that the OCM might have helped narrow racial disparities in patient access to novel therapies, which would be a noteworthy advance if it bears out in future research. The association between payment models and disparities in access to care needs to be incorporated into the evaluation of the OCM's successor model, along with assessments to identify why the payment model has such an association.
Limitations
This study has limitations. The difference-in-differences estimate has a wide CI, suggesting that the study is underpowered. The cohort was small because of exacting specifications, but the sample sizes of individual cohorts were in line with other studies using SEER-Medicare data (eg, receipt of second-line treatment of non–small cell lung cancer), and the results are consistent with a much larger but less selective study that similarly found no association between the OCM and patient receipt of novel therapies.
This analysis was performed at the patient and physician level; the OCM is a practice-level intervention, and physicians may have switched between OCM-participating and nonparticipating practices during the study period, but small sample sizes limited clinician-specific or practice-level analyses. Difference-in-differences analyses for evaluating receipt of novel therapy are also challenging because the definition of novel therapy changes over time, so preintervention and postintervention groups may differ in cohort composition, as we observed in this study. In addition, cohort and treatment definitions were subject to misattribution bias (eg, programmed cell death 1 ligand 1 status is unknown for patients with lung cancer) that may overestimate or underestimate receipt of novel therapy but would not be expected to bias results in the direction of either OCM or non-OCM participation. Despite concerns that the OCM inadequately considers the costs of novel therapies in calculating episode payments, the OCM had no discernible association with the overall likelihood that patients receive novel cancer therapies. In conjunction with previous studies, these results can inform the development and evaluation of cost and use incentives for Medicare's new Enhancing Oncology Model.
AI am a rheumatologist: a practical primer to large language models for rheumatologists
9a68f15d-e743-474e-80fc-e49f0de934eb
10547503
Internal Medicine[mh]
For centuries, as humans, we have kept learning the basics of life and used all of these data to design our future, sometimes in a positive way, other times in a catastrophic way. Nevertheless, in the last century, we 'invented' a game-changer (computers) that seems to have unlimited potential, even to change our thinking. The idea of artificial intelligence (AI) came up hand in hand with the invention of computers. As hardware and software technologies improve day by day and every segment of society appreciates the value of 'data', AI is enlarging its place in our lives, and so it is in medicine. AI has been used in medicine for several decades: image/case identification, classification, scoring and grading, education, analysing large datasets, etc. With recent developments, AI is now 'invading' our minds in another way: natural language processing (NLP). Nowadays, one of the most attractive topics is an application of NLP: the Generative Pre-trained Transformer (GPT) and its chatbot, ChatGPT. In this editorial, we will explore, and sometimes speculate upon, the opportunities and risks that emerge from using ChatGPT and other similar tools in rheumatology in academic and clinical settings. Artificial intelligence can be defined as the ability of computers and related systems to execute tasks that mainly require the human mind and intelligence, such as learning, reasoning, decision-making, understanding, recognizing and natural language processing. Machine learning, expert systems, speech recognition, planning, robotics, vision and NLP can be counted among the subsets of AI. NLP can be considered the bridge of communication between machines and humans. With the improvements in deep learning algorithms and neural networks, the abilities of NLP will progress exponentially. Key components of NLP are text analytics (extracting meaningful information from unstructured text data), speech recognition (recognizing and transcribing spoken language), natural language understanding (understanding and interpreting human language in a particular context), natural language generation (generating human-like language) and machine translation (translating text from one language to another). The Generative Pre-trained Transformer (GPT), its chatbot ChatGPT and Meta's Large Language Model Meta AI (LLaMA) are applications of NLP and are among the hottest topics of the current times. Large language models (LLMs) are deep learning models trained on huge amounts of text data to generate human-like text based on context and input. Technically, LLMs build on established NLP techniques, such as tokenization, parsing and named entity recognition (NER). Tokenization breaks the text into individual words or tokens, while parsing analyses the grammatical structure to identify relationships between the tokens. NER locates and classifies named entities, which may include medical conditions and medications. The extracted data can then be structured, aggregated and analysed for actionable insights.
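To make tokenization and NER tangible, here is a small sketch with spaCy; the example sentence is invented, and a general-purpose English model will reliably tag dates and places but not drugs or diagnoses, which would require a biomedical pipeline (e.g. scispaCy) or fine-tuning.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = ("The patient with rheumatoid arthritis started methotrexate "
        "15 mg weekly in March 2023.")
doc = nlp(text)

# Tokenization: the text broken into individual tokens.
print([token.text for token in doc])

# Named entity recognition: labelled spans found by the model.
for ent in doc.ents:
    print(ent.text, ent.label_)
```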
GPTs are large language models built with deep learning methods (namely, transformers) and were first introduced in 2018 by the OpenAI company. The latest version (GPT-4) was released in the middle of March 2023. The model has been pre-trained on an enormous amount of unlabelled text data via unsupervised deep learning techniques and then fine-tuned with supervised learning and reinforcement learning from human feedback. Training data consisted of online books, published articles, medical case reports and case series, web pages, social media and online forums up to the end of 2021. GPTs are considered a part of generalized AI, meaning they are not trained for specific tasks. The output is generated in a probabilistic manner: the model selects the most likely continuation based on its internal dynamics and the input. Interestingly, user interactions may be used to further improve the models. ChatGPT is the interactive, conversational interface of GPT and was released in November 2022 by OpenAI for public use. For now, it is based on GPT-3.5 in the free version and GPT-4 in the paid version. It mainly works on the same principles as GPT: it processes inputs, generates multiple possible responses, selects the most likely one and presents it. ChatGPT also remembers what was said earlier in the conversation and replies according to the context of the conversation in a continuous manner. Through ChatGPT, we can communicate with machines in a human-like manner and use the power of AI more efficiently. LLMs of medical interest are not limited to GPT-4. Meta recently released the Large Language Model Meta AI (LLaMA), which aims to be more efficient and less resource-intensive than other models, opening its use to a broader base. LLaMA is particularly notable for its availability under a non-commercial licence for researchers and organizations, facilitating its use in various projects. Like GPT, LLaMA is based on a transformer architecture that analyses massive data volumes to generate new content or make predictions based on the data. Another noteworthy difference is their training data. LLaMA is trained on a diverse text corpus, including scientific articles and news articles, while GPT-3.5 is primarily trained on internet text, such as web pages and social media content. As a result, LLaMA may be better suited to generating technical or specialized language, whereas GPT-3.5 may excel at producing informal or conversational language. LLaMA is available in a variety of sizes, ranging from 7 billion to 65 billion parameters. The larger models are more powerful, but they also require more computational resources to train and use. LLaMA can run on a local machine and be fine-tuned to improve its performance on specific tasks; this can be done by providing the model with additional data and feedback. How can AI aid rheumatologists in their activity? Applications of generative AI will certainly affect almost all sectors. We can see just the tip of the iceberg for now, but its applications will become clearer in the near future. Before diving into the possible applications of GPT and ChatGPT in academic life, we would like to underline the precautions that should always be kept in mind. All components of the academic community (e.g. authors, editors, publishers, researchers, trainees) should be aware of the basics of, and the developments in, AI. Humans should always proofread and supervise any content generated with generative AI. Users of these systems should always keep in mind their limitations and challenges and use the systems responsibly. The current version of GPT has many limitations and challenges (especially regarding safety issues). With improvements in technology, the limitations will decrease; however, safety issues will always be in front of us.
Below, we list some of the limitations and challenges regarding academic life:
- The system tends to produce incorrect, fabricated content ('hallucination'). From an academic perspective, the system may create non-existent citations, which is very dangerous. In addition, as the system improves, outputs tend to become more convincing, encouraging over-reliance on them. Also, the competition between AI companies may lead to less control over the content and outputs.
- The system can produce harmful content, including violent and discriminatory language, although several precautions have already been taken.
- Because of the large and very diverse corpus used to train the system, it can create biased content.
- The personal data of individuals can be uncovered; also, private data regarding the publication process of an article may be altered.
- Training data run up to 2021 (for now), so recent advancements in the literature are missing from the model.
- Ethical issues: authorship, plagiarism and so on. Under the current ICMJE definition, we think that no AI system can be considered an author. So, it needs to be defined how we will place and nominate the system: associate author? author assistant? Besides, a few publishers have declared that they will not accept ChatGPT as an author, and they have recommended that authors using ChatGPT declare it in the methods or acknowledgments. Plagiarism is another issue: the system may create similar outputs for similar queries, so the boundaries of plagiarism in the era of AI need to be reshaped.
The opportunities that AI offers are already amazing. From an academic perspective, AI suggests solutions ranging from generating research questions to advertising published articles. The OpenAI Application Programming Interface (API) allows the deployment of advanced models like GPT-3.5, or even GPT-4 (given limited access permission), to create task-specific applications tailored for academic purposes (see the sketch after this list). Mainly, it saves time and allows researchers to focus on the content of their work. Here, we list some of the possibilities and opportunities:
- It can suggest research questions and hypotheses related to specific or general topics. However, the system is not trained on recent data, so suggested questions may already have been studied.
- It can contribute to all sections of an article and can help with methodological problems. It can suggest study designs and sizes, define dependent and independent variables, and suggest designs for follow-up studies.
- For statistical analyses, it can suggest relevant statistical methods for testing hypotheses, write code for statistical packages like R and/or Python, and correct mistakes in the code.
- The system can create and suggest alternative abstracts and titles.
- It can assist in translation, editing, proofreading, correcting grammatical errors, and improving the readability and overall quality of the manuscript.
- As a supervising agent, one AI model can check whether given content includes output from ChatGPT; DetectGPT can effectively assess whether content was written by a human or by ChatGPT.
- In the peer review process, the system can critically appraise the manuscript and assess the methodology and the validity and relevance of the citations. It can also provide insights into possible conflicts of interest, plagiarism and other ethical problems.
- An OpenAI API-based package called 'PaperQA' ( https://github.com/whitead/paper-qa ) is used for questioning and getting answers from texts and can be used to review papers effectively.
- After publication, it can create advertisement texts and figures. It can also create formal e-mails, social media posts and educational materials.
- The system can assist in writing ethics committee and grant applications.
The above-mentioned opportunities and challenges converge on the same point: AI systems should always be used responsibly, and humans should supervise all steps.
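As a concrete, hedged example of the API-based workflow referenced in the list above, the snippet below calls the 2023-era OpenAI chat completion endpoint to proofread a sentence. The model choice, prompt and key are placeholders, and any output would still need human verification, as stressed throughout this editorial.

```python
# Requires: pip install openai (pre-1.0 client) and a valid API key.
import openai

openai.api_key = "sk-..."  # placeholder; never hard-code real keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are an assistant helping a rheumatologist edit "
                    "scientific prose. Do not invent citations."},
        {"role": "user",
         "content": "Correct the grammar of this sentence: "
                    "'Methotrexate were given weekly to the patients cohort.'"},
    ],
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```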
An OpenAI API-based package called ‘PaperQA’ ( https://github.com/whitead/paper-qa ) allows questions to be asked of, and answered from, collections of texts and can be used to review papers effectively. After publication, the system can create advertisement texts and figures. It can also create formal e-mails, social media posts, and educational materials. It can assist in writing ethics committee submissions and grant applications. The above-mentioned opportunities and challenges converge on the same point: AI systems should always be used responsibly, and humans should supervise all steps. Large language models like GPT-4 or LLaMA have significant potential in rheumatology, offering a range of applications and benefits. They can assist in processing and analysing vast amounts of medical literature, allowing clinicians and researchers to stay up to date with the latest advancements in rheumatology. Additionally, these models can help create personalized patient education materials, ensuring that patients receive accurate and easy-to-understand information about their conditions and treatments. They can also be used to automate routine tasks such as documentation: by analysing patient data, treatment details and follow-up recommendations, LLMs can draft well-structured, coherent and accurate discharge letters, allowing healthcare professionals to focus more on patient care. Moreover, with proper fine-tuning and adequate validation, they can potentially contribute to the development of decision-support tools, aiding rheumatologists in diagnosing and managing complex rheumatic diseases more effectively. LLMs like GPT-4 or LLaMA may also reduce time to referral. In fact, they can potentially be used to analyse text data from forums, social media or electronic health records to identify patterns or keywords associated with rheumatic diseases. By leveraging natural language processing capabilities, LLMs can help recognize early signs or symptoms of rheumatic conditions and facilitate early referral to specialists. In this regard, one potential use is the extraction and analysis of data from general practitioner (GP) healthcare records. In this context, LLMs can be employed to analyse unstructured text data and extract valuable insights to improve patient care, detect patterns and even identify early signs of disease. However, handling sensitive personal information in this context requires strict adherence to data protection regulations and ethical guidelines. Compliance with regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the US or the General Data Protection Regulation (GDPR) in the European Union is essential to ensure the ethical use of LLMs for data scraping in healthcare. Furthermore, data anonymization techniques, like removing personally identifiable information (PII) and employing differential privacy methods, can help protect patient privacy while still allowing for meaningful data analysis. A locally hosted, fine-tuned LLM like LLaMA could potentially address some of the ethical concerns related to privacy and data security in healthcare settings. By deploying the LLM within a hospital’s secure infrastructure, data would be processed internally, reducing the risk of unauthorized access or data breaches compared with cloud-based solutions.
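To make this concrete, the sketch below shows what such local, in-house inference could look like in Python with the Hugging Face transformers library; the checkpoint path, the prompt and the generation settings are illustrative assumptions on our part, not a recommendation of a specific model or clinical workflow.

# Minimal sketch of locally hosted LLM inference; assumes model weights
# are already stored on the hospital's own infrastructure under an
# appropriate licence. No patient data leaves the local machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/models/local-llama-7b"  # hypothetical local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

prompt = "Summarise the following anonymized clinical note: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because inference runs entirely on local hardware, the privacy caveats discussed above (anonymization, access control, regulatory compliance) still apply, but the attack surface of a third-party cloud service is removed.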
Fine-tuning the LLM on hospital-specific data would allow it to better understand the unique context and terminology used within that institution. It could also be tailored to comply with the hospital's data handling policies and relevant data protection regulations. However, it is important to note that implementing a local LLM does not completely eliminate all ethical concerns. Ensuring the responsible use of the LLM still requires adherence to data protection regulations, such as HIPAA or GDPR, and the application of data anonymization techniques when necessary. Furthermore, the development and maintenance of the LLM should follow guidelines for responsible AI, including transparency, fairness and accountability. Potential biases in the medical data used to train LLMs can negatively impact patient care if the models perpetuate these biases in their outputs. Ensuring fairness and addressing biases are vital for maintaining care quality and avoiding harm to patients. Additionally, obtaining informed consent from patients whose data are used for LLM training is essential, as they must be aware of the intended use of their data and the potential risks associated with its use in LLMs. In light of the growing integration of LLMs in healthcare, it is vital to establish clear policies and agreements between healthcare providers, researchers, technology developers and patients. Ensuring responsible and ethical patient data usage, as well as maintaining transparency and accountability, is critical for addressing potential negative consequences in a timely and responsible manner. On 30 March 2023, the Italian Data Protection Authority (Garante) issued a provisional measure against OpenAI, ordering it to temporarily suspend ChatGPT’s processing of personal data of individuals in Italy, which ultimately led OpenAI to disable the service for Italian users. This unprecedented action was taken ahead of the Garante’s investigation into ChatGPT’s privacy practices, following a data breach that exposed users’ information. The Garante identified potential GDPR violations, including a lack of transparency in data processing, inaccuracies in data processing, and failure to verify users’ ages. Although the ban was lifted on 28 April 2023, the case gained significant attention due to ChatGPT’s rapid growth and marked the first instance of an EU data protection authority intervening in the data processing activities of a widely used generative AI tool. Although the concerns raised by the Garante seem legitimate, international AI regulation and a shared approach would be preferable to avoid draconian decisions. Ultimately, there is an urgent need for patients and rheumatologists to understand the potential and limitations of AI in healthcare. Both must have a clear understanding of how these technologies work. This knowledge fosters trust and acceptance of AI, improving patient outcomes and overall healthcare efficiency. AI is not intended to replace healthcare professionals but to augment their abilities. Understanding AI allows doctors and patients to collaborate better with the technology, maximizing its potential while maintaining a human-centric approach to care.
Metabolomic and transcriptomic analyses reveal differences in fatty acids in tobacco leaves across cultivars and developmental stages
Tobacco is a key economic crop with a long cultivation history, grown widely around the world. As a model species within the Solanaceae family, it is of significant scientific value for studies on gene function, molecular breeding, and other areas of research. Tobacco products undergo various curing processes, including flue curing, sun curing, and air curing, each of which significantly affects the chemical composition and sensory properties of the tobacco. The levels of starch, pigments, polyphenols, fatty acids, and other compounds in tobacco vary markedly under different curing conditions, with the curing process also driving the transformation of several substances. Fatty acids are an essential class of compounds in plants, playing a crucial role in their growth and development. They are key components of cell membrane lipids, important energy sources, and precursors of signaling molecules. Additionally, fatty acids are involved in cell recognition, species specificity, and tissue immunity. They also contribute to indirect insect resistance in plants by inducing defense responses and utilizing pre-formed physical barriers, such as the cuticle and cell wall, to combat pathogens and other threats. Fatty acids are classified based on chain length into short-chain (C1–C6), medium-chain (C6–C12), and long-chain (>C12) fatty acids. Short- and medium-chain fatty acids and their derivatives exhibit extensive structural diversity and are incorporated into various biomolecules, serving as components of antibiotics, insect pheromones, and plant storage lipids. Long-chain fatty acids play a crucial role in maintaining normal plant development and preventing organ fusion. In tobacco, the composition, content, and distribution of fatty acids play a key regulatory role in adaptation to various stress conditions. In addition to being essential for plant growth, development, and cell structure, fatty acids also significantly influence the quality and flavor of tobacco leaves. As tobacco leaves mature, both the content and composition of fatty acids change. Different tobacco varieties exhibit distinct patterns of organic acid variation, as confirmed by previous studies. One such study found that spice tobacco contains higher levels of 3-methylvaleric acid, valeric acid, and isovaleric acid than flue-cured tobacco, while formic acid and acetic acid are relatively lower. In contrast, all organic acid levels in burley tobacco and Maryland tobacco are lower than those in spice and flue-cured tobaccos. Current research on tobacco fatty acids primarily focuses on leaf fatty acid composition, metabolic changes in primary, secondary, and lipid metabolism before and after topping, and the effects of curing methods (such as air-curing, sun-curing, and flue-curing) on the chemical composition of tobacco leaves. However, there is a lack of comprehensive studies analyzing fatty acid content variations across different growth stages and curing methods. We selected four tobacco varieties: K326, Basma, Samsun, and Cuba1. K326 is a flue-cured tobacco ( Nicotiana tabacum L.) variety known for its superior quality. Basma, an aromatic type, and Samsun, a taste type, are representative varieties of aromatic tobacco, originating from the Mediterranean region in the 1560s. Cuba1 is one of the primary varieties used in cigar production. This study provides a comprehensive analysis of the fatty acid composition during the growth, development, and processing of four tobacco varieties.
It investigates the differential expression of fatty acid-related genes at specific stages, offering molecular insights into the accumulation patterns of fatty acids in tobacco. Moreover, the findings provide new perspectives for selecting curing methods tailored to different tobacco varieties. Plant materials Samples of the four varieties were collected at the seedling and transplant stages in the seedbed, as well as at two stages during field growth, and two curing methods were applied: flue-curing and air-curing. Using K326 as the reference variety, we compared the four tobacco varieties based on transcriptomics, targeted fatty acid metabolomics, and targeted short-chain fatty acid metabolomics. For all omics analyses, three independent biological replicates were used, with each replicate consisting of 3–6 tobacco plants of similar growth vigor. Samples were immediately flash-frozen in liquid nitrogen and stored at -80 ℃ until further use. Detailed information on the time, stage, usage, and omics category of material selection is shown in Table . A schematic diagram of the materials is shown in Fig. . The seeds of K326, Samsun, Basma, and Cuba1 are stored at the Technology Center of Hunan Tobacco Industry Co., Ltd. The seedbed-stage cultivation conditions were as follows: soil cultivation in an artificial growth chamber at 26–28 °C, humidity around 75%, light intensity of 4.3 klux, a 16-h photoperiod with 8 h of darkness, and a carbon dioxide concentration of approximately 672 ppm. The field growth stage was conducted in the experimental field at Liuyang City, Hunan Province. Curing status: flue-cured tobacco was processed using a three-stage curing method, while air-cured tobacco was cured until the main veins were dry. The curing procedures for both flue-cured and air-cured tobacco followed the specific method outlined by Jie Chen et al. For construction of the differential fatty acid metabolic profile across tissue types of the K326 variety, samples were collected from mature plants at the squaring stage (55 days after transplanting). A total of 15 tissue samples were analyzed (Fig. a), including: flower buds (FB), upper stem (US), upper leaf 1 (UL), upper leaf 2 (U2L), upper leaf 1 vein (ULV), upper leaf 2 vein (U2LV), middle stem (MS), middle leaf (ML), middle leaf vein (MLV), lower leaf 1 (LL), lower leaf 2 (L2L), lower leaf 1 vein (LLV), lower leaf 2 vein (L2LV), lower stem (LS), and root (R). Based on the average total number of leaves at the squaring stage, the stems were divided into three sections according to plant height, and the leaves were classified into five parts according to their position on the plant. Tissue samples were collected from three plants of uniform growth, with three biological replicates. Targeted short-chain fatty acid metabolomics Targeted fatty acid metabolomics analysis Total RNA extraction and cDNA synthesis Quantitative RT–PCR validation of candidate genes Transcriptome sequencing and analysis Data analysis The method for short-chain fatty acid determination is based on published references. After sample extraction, short-chain fatty acids (SCFAs) were analyzed by LC–MS/MS in multiple reaction monitoring (MRM) mode. Stock solutions of individual SCFAs were mixed and prepared in an SCFA-free matrix to generate a series of SCFA calibrators. Isotope-labeled standards were prepared and mixed as the internal standard (IS). For sample preparation, 0.02 g of the ground sample was weighed and suspended in 80 μL of 80% methanol–water solution, followed by mixing.
The sample was then centrifuged at 12,000 rpm for 10 min at 4 °C. The supernatant (50 μL) was transferred and mixed with 150 μL of derivatization reagent. The derivatized samples were incubated at 40 °C for 40 min. After derivatization, the samples were diluted with 80% methanol–water solution at 1× and 100× concentrations. A total of 95 μL of the supernatant was mixed with 5 μL of a mixed internal standard solution in 80% methanol–water, and the mixture was thoroughly mixed before LC–MS analysis. Short-chain fatty acids (SCFAs) were quantified using an LC–MS/MS system (Vanquish™ Flex UHPLC-TSQ Altis™, Thermo Scientific Corp., Germany) operated with a mobile phase of 10 mM ammonium acetate in water (solvent A) and a 1:1 mixture of acetonitrile and isopropanol (solvent B), delivered at a flow rate of 0.30 mL/min. The solvent gradient was programmed as follows: initial 25% B for 2.5 min; 25–30% B for 3 min; 30–35% B for 3.5 min; 35–38% B for 4 min; 38–40% B for 4.5 min; 40–45% B for 5 min; 45–50% B for 5.5 min; 50–55% B for 6.5 min; 55–58% B for 7 min; 58–70% B for 7.5 min; 70–100% B for 7.8 min; 100–25% B for 10.1 min; and 25% B for 12 min. The mass spectrometer parameters were: Sheath Gas (35 psi), Ion Source Temperature (550 ℃), Auxiliary Gas (50 psi), and Collision Gas (55 psi). The selection of differential short-chain fatty acids (SCFAs) was based on fold change (FC) and P-value, with the following thresholds: FC > 1.2 or FC < 0.833, and P-value < 0.05. The method for fatty acid determination follows a previously published reference. Stock solutions of individual fatty acids were prepared by mixing them in a fatty acid-free matrix to generate a series of fatty acid calibrators. Isotope-labeled standards were prepared and mixed as the internal standard (IS). Samples (100 mg) were frozen in liquid nitrogen, homogenized with 300 μL of isopropanol/acetonitrile (1:1) solution, and subjected to ultrasound treatment for 10 min. After 60 min at -20 °C, the samples were centrifuged at 12,000 rpm for 10 min. The supernatant (50 μL) was mixed with 150 μL of derivatization reagent and incubated at 40 °C for 40 min. Following derivatization, 47.5 μL of the supernatant was combined with 2.5 μL of a mixed internal standard solution. The final preparation was injected into the LC–MS/MS system for analysis. Fatty acids were quantified using an ultra-high performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS) system (ExionLC™ AD UHPLC-QTRAP 6500+, AB SCIEX Corp., Boston, MA, USA) at Novogene Co., Ltd. (Beijing, China). Separation was carried out on a Waters ACQUITY UPLC BEH C18 column (2.1 × 100 mm, 1.7 μm), maintained at 40 °C. The mobile phase consisted of 0.1% formic acid in acetonitrile/water (1:1) (solvent A) and isopropanol/acetonitrile (1:1) (solvent B). The solvent gradient was programmed as follows: 45% B for 1 min; 45–70% B for 4.5 min; 70–75% B for 9 min; 75–80% B for 12.5 min; 80–100% B for 14 min; 100–45% B for 15.1 min; and 45% B for 17 min. The mass spectrometer was operated in negative multiple reaction monitoring (MRM) mode with the following parameters: IonSpray Voltage (-4500 V), Curtain Gas (35 psi), Ion Source Temperature (550 ℃), Ion Source Gas 1 and 2 (60 psi). Middle leaf blades of the four varieties at the squaring stage were used as materials (Table ; 55 days after transplanting, -SS). Total RNA was extracted using the RNAprep Pure Plant Plus Kit (Tiangen, China), and cDNA synthesis was carried out using the All-in-One First-Strand cDNA Synthesis SuperMix Kit (NovoScript Plus, China). The RNA concentration was measured using a NanoDrop 2000, and RNA integrity was assayed by agarose gel electrophoresis.
Only RNA samples with an OD260/280 ratio between 1.8 and 2.2 and an OD260/230 ratio greater than 2.0 that showed three discrete bands of 28S, 18S and 5S rRNA were used for cDNA synthesis. The reverse transcription reaction for each sample was as follows: RNA template (1 μg), gDNA Purge (1 μL), Supermix (10 μL) and RNase-free water (up to 20 μL), incubated at 50 °C for 30 min and 75 °C for 5 min. Quantitative RT–PCR (qRT–PCR) was performed on a Roche LightCycler 480. Gene-specific primers were designed using Primer Premier 5.0 software and the NCBI Primer-BLAST tool (Table ). Actin7 (LOC107795948) was used as the reference gene. Detailed procedures are outlined in the LightCycler® Multiplex Master qPCR Reaction Mix kit manual. Relative gene expression levels were calculated using the 2^(−ΔΔCt) method. Detailed protocols are provided in the kit manuals and are based on previously published methods. RNA sequencing was conducted by Novogene (Beijing, China, https://en.novogene.com/ ). The reference genome and gene model annotation files were obtained directly from the genome website ( http://lifenglab.hzau.edu.cn/Nicomics/ ). Gene function annotation was performed using the following databases: KEGG ( https://www.kegg.jp/kegg/ko.html ) and GO ( http://geneontology.org/ ). Differential expression analysis was carried out using DESeq2 software (version 1.20.0) with the following criteria for each comparison: padj ≤ 0.05 and |log2FoldChange| ≥ 1.0. Detailed methods, analysis procedures, and software are described in Supplementary Material 5. The original data of differentially expressed genes and pathway enrichment information are shown in Table . Data analysis was performed using GraphPad Prism 8 and Excel 2016; results are presented as the mean ± SD. One-way analysis of variance (ANOVA) was conducted, followed by Tukey’s post-hoc test, with statistical significance set at P < 0.05. All data were derived from three biological replicates. For the metabolomics differential metabolite clustering heatmap, the data were standardized using the following formula: (content value − metabolite mean) / standard deviation. For the transcriptomics differential gene clustering heatmap, FPKM-normalized values were used (log2(FPKM + 1)). Fatty acid profiling in K326 at the squaring stage Analysis of short-chain fatty acid content variations in leaves during the seedling stage Gene expression analysis of tobacco leaves at the seedling bed stage Analysis of the difference in fatty acid content in leaves before and after topping Gene expression analysis of leaves in the squaring stage Analysis of fatty acid content in leaves subjected to different curing methods Short-chain fatty acids and fatty acids were analyzed in the middle leaves of four tobacco varieties following two different curing methods. The PCA plot based on 11 short-chain fatty acids, with principal components PC1 (43.51%) and PC2 (30.95%), clearly differentiated between the curing methods (Fig. a). Except for the air-cured Cuba1 (Cuba1_AC), the second principal component effectively separated the air-cured and flue-cured samples, with air-cured samples clustering at the top and flue-cured samples at the bottom. The Cuba1_AC samples exhibited a more scattered distribution across the principal components. Figure b shows the number of differential short-chain fatty acids both between varieties and within varieties under different curing methods.
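As a minimal illustration of the differential-metabolite screening rule given in the Methods (FC > 1.2 or FC < 0.833, P-value < 0.05) and of the z-score standardization used for the clustering heatmaps, the following Python sketch processes a hypothetical abundance table; the file name, column layout and the use of a t-test are illustrative assumptions on our part, since the original analysis specifies only the thresholds.

# Hedged sketch: screen differential SCFAs by fold change and P-value,
# then z-score each metabolite across samples for a clustering heatmap.
# Column names (AC_1..AC_3, FC_1..FC_3) are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("scfa_abundance.csv", index_col="metabolite")
ac = df[["AC_1", "AC_2", "AC_3"]]  # air-cured replicates
fc = df[["FC_1", "FC_2", "FC_3"]]  # flue-cured replicates

fold_change = fc.mean(axis=1) / ac.mean(axis=1)
p_values = stats.ttest_ind(fc, ac, axis=1).pvalue  # test choice is an assumption

differential = df[((fold_change > 1.2) | (fold_change < 0.833)) & (p_values < 0.05)]

# Standardize each metabolite: (content value - mean) / SD, matching the
# formula quoted in the Data analysis subsection.
z_scores = differential.sub(differential.mean(axis=1), axis=0).div(
    differential.std(axis=1), axis=0)
print(z_scores.head())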
Generally, for a given variety, air-cured (AC) samples showed fewer downregulated or unchanged short-chain fatty acids than flue-cured (FC) samples, indicating a higher short-chain fatty acid content in air-cured leaves. The differential short-chain fatty acid clustering heatmap (Fig. c) revealed that Samsun_AC and Basma_AC clustered together, as did Samsun_FC and K326_FC, while Basma_FC and Cuba1_FC formed another cluster. Overall, the short-chain fatty acids were grouped into two categories: Butyric acid, Hexanoic acid, Isobutyric acid, Valeric acid, and 2-Methylbutyrate clustered together, showing upregulation in Samsun_AC, Basma_AC, and K326_AC, and downregulation in the remaining samples. Figure d presents the trends in short-chain fatty acid content before and after curing. With the exception of Propionic acid, 2-Methylvalerate, and 3-Methylvalerate, all other short-chain fatty acids showed varying degrees of increase after curing. For K326, the content of short-chain fatty acids (except for Propionic acid and 2-Methylvalerate) followed the order: air-cured > flue-cured > pre-curing. Similar trends were observed for Isobutyric acid, Butyric acid, the three C5 acids, and Hexanoic acid in Basma and Samsun. Additionally, the changes in the three C5 acids before and after curing were consistent across all four varieties. A targeted fatty acid metabolomics analysis was performed on the processed tobacco leaves, identifying a total of 47 fatty acids. A PCA plot (Fig. a) based on PC1 (51.45%) and PC2 (17.51%) showed a significant distinction in fatty acid composition between air-cured and flue-cured tobacco leaves along the first principal component, with the flue-cured samples being more tightly clustered. Differential analysis of the fatty acids revealed that, under air-curing, Samsun, Basma, and Cuba1 had 21, 8, and 13 differential fatty acids compared to K326, respectively. Under flue-curing, Samsun, Basma, and Cuba1 showed 7, 13, and 5 differential fatty acids compared to K326. Notably, air-cured Samsun and Basma had more downregulated differential fatty acids compared to K326, while flue-cured samples showed the opposite trend (Fig. b). When comparing different curing methods for the same variety, K326 showed 2 upregulated and 15 downregulated differential fatty acids under flue-curing compared to air-curing. The most downregulated fatty acid was Tridecanoic acid (log2FC: -2.66). Four fatty acids, including Tridecanoic acid, cis-10-Heptadecenoic acid, Docosanoic acid, and Heneicosanoic acid, showed a decrease (log2FC < -1) across all varieties. In contrast, gamma-Linolenic acid and Arachidonic acid showed upregulation (log2FC > 0.5). For the air-cured varieties, Samsun, Basma, and Cuba1 shared three downregulated differential fatty acids compared to K326, namely alpha-Linolenic acid, cis-11,14,17-Eicosatrienoic acid, and homo-gamma-Linolenic acid. Fatty acid clustering analysis (Fig. c) based on differences in fatty acid content showed that Samsun_AC and Basma_AC clustered together, while K326_AC and Cuba1_AC formed another group. The remaining samples each formed separate clusters. Overall, the measured fatty acids were divided into three categories: one category showed downregulation in flue-cured compared to air-cured samples, another showed upregulation, and the third showed upregulation in air-cured K326 and Cuba1 and flue-cured Basma, with downregulation in the other five sample groups.
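Since PCA score plots recur throughout these Results, a brief sketch of how such an ordination can be computed is given below; it assumes a samples × metabolites abundance matrix and uses scikit-learn, which is an assumption on our part — the published figures may have been produced with other software.

# Hedged sketch: PCA of a samples x metabolites matrix, reporting the
# variance explained by PC1/PC2 as quoted in the text (e.g. 51.45%, 17.51%).
# The input file name is hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = pd.read_csv("fatty_acid_matrix.csv", index_col="sample")
X_scaled = StandardScaler().fit_transform(X)  # z-score each metabolite column

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)  # one (PC1, PC2) coordinate per sample

for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"PC{i}: {ratio * 100:.2f}% of variance explained")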
In this study, targeted fatty acid metabolomics analysis was conducted on K326 plants at the squaring stage (55 days after transplanting), with 15 tissue samples analyzed. A total of 11 short-chain fatty acids (SCFAs) were measured. Among the 15 tissue samples, the contents of SCFAs in flower buds (FB) and upper leaves (UL) were higher than in other tissues, except for 3-Methylvalerate and 2-Methylvalerate. Cluster analysis (Fig. b) revealed that the samples were grouped into five clusters, with FB and UL forming one group exhibiting significantly higher relative content than other tissues. The 11 measured SCFAs were grouped into four categories, including 2-Methylvalerate and 3-Methylvalerate, which showed lower levels in flower and upper tissues than in other parts, and 4-Methylvaleric acid, 2-Methylbutyrate, and Isobutyric acid, which were higher in flower tissues than in other parts. The total content of short-chain fatty acids in the leaves followed the pattern UL > U2L > LL > ML > L2L (Fig. c), and in the stems US > MS > LS. The total short-chain fatty acid content in the upper leaves, stems, and flower buds was higher than in the middle and lower parts of the plant. This suggests that the short-chain fatty acid levels in K326 at the squaring stage were generally higher in the upper parts of the plant than in the middle and lower sections, although the root did not show the lowest content. In the targeted fatty acid metabolomics analysis, 50 fatty acids (C8:0–C22:6), comprising all of the targeted fatty acids except gamma-Linolenic acid, were measured across the various tissue samples. The fatty acid clustering heatmap (Fig. d) indicated that the tissue samples could be grouped into three categories: roots as one group; flower buds, all leaves, and the main veins of upper leaves as another group; and stems and the other main leaf veins as the third group. Significant differences in fatty acid content were observed between the upper, middle, and lower leaf, vein, and stem tissues, with distinct separation based on fatty acid content. The root had higher levels of nine fatty acids, including Octadecanoic acid, Hexadecanoic acid, and Heptadecanoic acid, compared to other tissues. Flower buds exhibited elevated levels of Linoelaidic acid, Linoleic acid, and cis-13,16-Docosadienoic acid compared to other tissues. Upper leaves (UL) contained higher levels of Caprylic acid, cis-10-Heptadecenoic acid, and trans-10-Pentadecenoic acid compared to other tissues. The specific content data for the short-chain fatty acid and fatty acid metabolomics measurements are provided in Table . The short-chain fatty acid content in the leaves at two stages of the seedling bed period was measured. Among the 11 short-chain fatty acids detected, Acetic acid had the highest content in all tobacco varieties, accounting for 78–95% of the total short-chain fatty acids, with a higher proportion in the ML (transplant) stage than in the S (seedling) stage (Table ). In terms of total short-chain fatty acid content, K326 had the highest content at the S stage, followed by Basma, Cuba1, and Samsun. At the ML stage, Basma showed the highest content, closely followed by K326, while Cuba1 and Samsun had lower levels. For 3-Methylvaleric acid, Cuba1 had the highest proportion (0.18%) at the S stage compared to the other three varieties. At the ML stage, Basma and Samsun had significantly higher proportions (0.18% and 0.16%, respectively) compared to K326 and Cuba1 (0.04%).
In the case of Isobutyric acid, Cuba1 exhibited a lower proportion at the S stage but a significantly higher proportion at the ML stage than the other varieties. For the C5 isomer/normal ratio, Cuba1 had the highest value at the S stage, while its ratio at the ML stage was lower than that of the other varieties. The C6 isomer/normal ratio in K326 was notably lower at the ML stage than in the other varieties. At the S stage, K326 showed higher levels of the differential short-chain fatty acids compared to the other three varieties. At the ML stage, Basma and Cuba1 exhibited downregulation of 2-Methylbutyrate compared to K326. Specific fold changes (FC > 1.2 or FC < 0.833 and P-value < 0.05) for the differential short-chain fatty acids are provided in Table . As shown in Fig. a, principal component analysis (PCA) based on PC1 (54.03%) and PC2 (21.09%) clearly differentiated the short-chain fatty acid profiles of Cuba1 and Basma. At the ML stage, PCA (Fig. d) with PC1 (36.46%) and PC2 (21.08%) showed that Samsun and Basma had similar profiles, with reduced differences along the first principal component compared to the S stage. Figure b, e shows the specific content of the 11 short-chain fatty acids at the two stages. According to the heatmap of normalized short-chain fatty acid content (Fig. c, f), most short-chain fatty acids in K326 at the S stage were higher than in the other varieties. Comparison between stages revealed that Acetic acid, Isovaleric acid, and 2-Methylbutyrate were higher at the ML stage, while Propionic acid, Isobutyric acid, Butyric acid, and Valeric acid showed the opposite trend (Fig. g). Differential gene expression was analyzed in tobacco leaves from the four varieties at two stages of the seedling bed period: the seedling stage (S) and the transplant stage (ML). A total of 158.25 GB of clean data was obtained from 28 samples, with an average of 6.59 GB per sample. The average total mapping rate to the reference genome was 93.52%. At the S stage, compared to K326, Samsun showed 1973 differentially expressed genes (DEGs), with 906 upregulated and 1067 downregulated (DESeq2 padj ≤ 0.05, |log2FoldChange| ≥ 1.0). Basma exhibited 3433 DEGs, with 1212 upregulated and 2221 downregulated. Cuba1 showed 2705 DEGs, with 961 upregulated and 1744 downregulated (Fig. a). At the ML stage, Samsun compared to K326 had 3383 DEGs, with 1969 upregulated and 1414 downregulated. Basma had 5020 DEGs, with 2161 upregulated and 2859 downregulated, while Cuba1 showed 2574 DEGs, with 1308 upregulated and 1266 downregulated (Fig. b). Among the three varieties compared to K326, Basma showed the highest number of DEGs at both stages. PCA indicated distinct separation of the S and ML stages along PC1, with a more scattered distribution in the ML stage compared to S. Intra-variety comparisons at the ML stage showed greater variability (Fig. c). Venn diagram analysis of the common DEGs between K326 and the other three varieties revealed 710 DEGs at the S stage and 879 at the ML stage, with 457 shared DEGs between the two stages. This DEG set represents the major differences between K326 and the other varieties during the seedling bed period (Fig. d). GO enrichment analysis showed significant enrichment in "signal transduction" and "signaling" pathways (padj < 0.05, Fig. f). KEGG pathway analysis indicated enrichment in pathways such as "Various types of N-glycan biosynthesis," "N-Glycan biosynthesis," and "RNA degradation" (p-value < 0.05, padj = 0.52, Fig. e).
In the fatty acid-related biosynthesis pathways, genes involved in "Fatty acid degradation," "Fatty acid biosynthesis," and "Fatty acid metabolism" (p-value < 0.24, padj = 0.52–0.61) were enriched. Three genes ( Nta01g31980 , Nta08g22780 , and Nta23g11140 ) were identified in these pathways. These genes were not expressed in K326 at either stage but were expressed in the other three varieties. Based on the gene expression data, a fatty acid pathway heatmap for the four varieties during the seedling bed period was generated, covering the pathways "Fatty acid biosynthesis" (nta00061), "Fatty acid elongation" (nta00062), and "Fatty acid degradation" (nta00071) (Fig. ). In the "Fatty acid biosynthesis" pathway (nta00061), the relative expression levels of 13 ACC genes, 2 MT genes, and 4 enoyl-[acyl-carrier-protein] reductase I genes were significantly higher at the S stage than at the ML stage. Of the four KAS III genes and six TE genes, about half showed higher relative expression at the S stage and half at the ML stage. The varieties showed varying expression patterns across different genes; for example, Nta23g00340 (KAS II) was not expressed at the ML stage, and Nta12g11030 (acyl-[acyl-carrier-protein] desaturase) was not expressed at the S stage. In the "Fatty acid elongation" pathway (nta00062), two palmitoyl-protein thioesterase genes and two palmitoyl-CoA hydrolase genes had higher relative expression at the ML stage than at the S stage, while two long-chain 3-oxoacyl-CoA reductase genes showed higher relative expression at the S stage. K326 showed higher relative expression than the other varieties for these genes. Nta06g11470 and Nta05g12800 were not expressed at the ML stage. In the "Fatty acid degradation" pathway (nta00071), the genes Nta12g16270 and Nta11g17320 were not expressed at the ML stage in any variety. Two acetyl-CoA C-acyltransferase genes showed significantly higher expression in K326 at the ML stage compared to the other varieties and stages. Two long-chain fatty acid omega-monooxygenase genes had higher expression in K326 at the S stage compared to the other varieties and stages. The short-chain fatty acid (SCFA) content in the middle leaves of the four varieties was measured at the budding stage (55 days post-transplantation) and 10 days after topping (70 days post-transplantation). Among the 11 measured SCFAs, the total content at the budding stage was ranked Basma > Samsun > Cuba1 > K326, while the ranking reversed after topping. The proportion of 3-methylvaleric acid was significantly higher in Basma and Samsun than in Cuba1 and K326 both before and after topping. The trend in the isovaleric acid proportion was consistent at the budding stage, while after topping all four varieties showed similar trends. The C5 isomer/normal ratio was higher in Basma and Samsun than in Cuba1 and K326 at the budding stage, while after topping K326 and Basma had significantly higher ratios than Samsun and Cuba1 (Table ). Differential metabolite analysis (FC > 1.2 or FC < 0.833, P-value < 0.05) and within-group significance analysis revealed that, at the budding stage, the primary differential SCFAs were valeric acid and hexanoic acid, which showed similar trends across varieties. After topping, Basma and Cuba1 had significantly higher levels of 3-methylvalerate than K326. Detailed information on the differential SCFAs across varieties is provided in Table .
As shown in Fig. a, the PCA plot based on PC1 (83.7%) and PC2 (7.15%) clearly separated the samples into two groups: K326 and Cuba1 were closer to each other, while Samsun and Basma were closer to each other. After topping, K326 and Cuba1 were more distinctly separated, while Basma and Samsun clustered together (Fig. d). Figure c, f shows the specific content of the 11 short-chain fatty acids at the two stages. The contents of acetic acid, isobutyric acid, butyric acid, valeric acid, isovaleric acid, 2-methylbutyrate, hexanoic acid, and 2-methylvalerate were significantly lower after topping (Fig. g). After topping, K326 showed higher levels of all fatty acids compared to the other varieties, except for 2-methylvalerate, propionic acid, and 3-methylvalerate; Samsun had significantly higher levels of 2-methylvalerate, propionic acid, and 3-methylvalerate than the other varieties. Cluster heatmap analysis (Fig. b, e) revealed that after topping the short-chain fatty acid content patterns in Basma and Samsun became more similar than before topping. Additionally, the content of most short-chain fatty acids in K326 after topping was higher than in the other varieties. The PCA plot for the budding stage (Fig. h), based on PC1 (54.43%) and PC2 (15.68%), showed that K326 was clearly separated from the other three varieties along PC1. After topping, Basma and Cuba1 clustered together, while K326 and Samsun grouped similarly; the two groups were distinctly separated along PC2. Among the 49 measured fatty acids, Basma, Samsun, and Cuba1 had 13, 16, and 5 differential fatty acids compared to K326 at the budding stage, respectively, and 6, 13, and 22 differential fatty acids after topping (Fig. i). At the budding stage, Samsun showed downregulation of fatty acids compared to K326, with pentadecanoic acid showing the strongest downregulation (log2FC = -2.09). Basma also showed downregulation of fatty acids compared to K326, except for linoelaidic acid, with trans-10-heptadecenoic acid being the most downregulated (log2FC = -2.07). Cuba1 showed the strongest downregulation for cis-11,14,17-eicosatrienoic acid (log2FC = -2.02) compared to K326. Differential fatty acids shared by Basma and Samsun relative to K326 included pentadecanoic acid, palmitoleic acid, and heptadecanoic acid. After topping, Samsun showed downregulation of 5 fatty acids compared to K326, with caprylic acid being the most downregulated (log2FC = -0.62). Basma downregulated 12 fatty acids compared to K326, with cis-5,8,11,14,17-eicosapentaenoic acid showing the strongest downregulation (log2FC = -1.64). Cuba1 downregulated 21 fatty acids, with pentadecanoic acid showing the strongest downregulation (log2FC = -1.30). The differential fatty acids shared by Basma, Samsun, and Cuba1 relative to K326 included decanoic acid and hendecanoic acid. Based on the clustering heatmap of the 49 measured fatty acids (Fig. j), the fatty acid content profiles fell into three groups: Basma and Cuba1 after topping, Basma and Samsun at the budding stage, and the remaining four sample groups. Differential gene analysis was performed on the leaves of the four tobacco varieties at the squaring stage. A total of 78.04 Gb of clean bases was obtained from 12 samples, with an average of 6.50 Gb per sample. The average total alignment rate to the reference genome was 94.45%. Principal component analysis (PCA) revealed that Basma and Samsun were closely related, while K326 and Cuba1 were clearly distinct (Fig. a).
In the differential gene analysis, 18,600 DEGs were identified between Samsun and K326, with 9,067 upregulated and 9,533 downregulated (DESeq2 padj ≤ 0.05, |log2FoldChange| ≥ 1.0). For Basma vs. K326, 17,770 DEGs were detected, including 8,520 upregulated and 9,250 downregulated. For Cuba1 vs. K326, 15,537 DEGs were found, with 6,678 upregulated and 8,859 downregulated (Fig. b). Using K326 as the control, differential gene analysis was conducted for the other three varieties, and a Venn diagram was used to identify a common set of 9,456 DEGs across the three comparisons, which represents the major differential gene set for K326 vs. the other varieties at the squaring stage (Fig. c). Enrichment analysis of this gene set showed that 37 GO terms were significantly enriched (padj < 0.05). Notably, the terms "sequence-specific DNA binding," "unfolded protein binding," and "calcium ion binding" were the most enriched (Fig. d). KEGG analysis identified 10 pathways with significant enrichment (padj < 0.05), the most significant being "MAPK signaling pathway – plant," "Photosynthesis—antenna proteins," and "Endocytosis" (Fig. e). In the fatty acid-related synthesis and metabolism pathways, 35 genes, including KAT1 , KCS1 , AIM1 , and FAD2 , were enriched in pathways such as "Fatty acid degradation," "Fatty acid biosynthesis," and "Fatty acid metabolism." Clustering analysis of these 35 genes revealed similar expression patterns in Basma and Samsun, distinct from K326 and Cuba1. Notably, genes such as Nta02g06150 ( KAT1 ), Nta12g29320 ( KAT1 ), Nta12g07350 ( KCS11 ), and Nta11g08380 ( KCS11 ) showed significantly higher expression in K326 compared to the other varieties (Fig. f). Six DEGs were selected for qRT-PCR validation of their expression levels: ACOT8 ( Nta02g19100 ), AL7B4 ( Nta05g24430 ), KCS11 ( Nta16g12070 ), KCS20 ( Nta20g17200 ), KCS4 ( Nta08g20960 ) and LACS8 ( Nta17g22980 ). These genes are DEGs from four fatty acid-related metabolic pathways (Fig. f). The expression data obtained by RNA-Seq and qRT–PCR were compared, and the fatty acid pathway genes ACOT8 , AL7B4 , KCS4 and LACS8 showed exactly the same trends as the transcriptome results (Fig. ). Tobacco growth can be broadly divided into two stages: the seedling stage and the field stage. The seedling stage spans from sowing to transplanting the seedlings into the field, while the field stage starts from transplanting and ends at harvest. Each stage can be further subdivided into smaller periods. Among the four key stages selected in this study, the seedling and transplanting stages were conducted under stable conditions in a controlled artificial climate chamber. This provided more consistent light and temperature conditions than for tobacco seedlings grown in greenhouses under actual production settings. Topping is a critical event in the field stage of tobacco, as there are significant changes in the plant's main substances before and after topping. Therefore, both pre- and post-topping periods were chosen for analysis. Notably, the Basma and Samsun varieties flower much earlier than K326 and Cuba1. According to the results of this study, after topping, the content of short-chain fatty acids in the middle leaves of K326 decreased, as did the levels of five long-chain fatty acids. Decanoic acid and hendecanoic acid were identified as the major fatty acids differentiating K326 from the other three varieties.
Among the 49 fatty acids measured, there were no significant differences in fatty acid content between Basma and Samsun before topping. This trend was also reflected in gene expression: principal component analysis of differentially expressed genes showed no major differences between Basma and Samsun, and the number of differentially expressed genes between them was much lower than between other variety pairs. After topping, the fatty acid content of Basma was similar to that of Cuba1. The synthesis of saturated fatty acids in tobacco primarily occurs in the cytoplasm, although fatty acid synthesis also takes place in the mitochondria and chloroplasts. The main precursor for synthesis is acetyl-CoA, which is converted into malonyl-CoA by acetyl-CoA carboxylase (ACCase). ACCase is one of the key rate-limiting enzymes in the de novo fatty acid synthesis pathway in plants, and in tobacco, ACCase exists in its prokaryotic form. Key enzymes in the fatty acid synthesis pathway include citrate lyase (ACL), acetyl-CoA carboxylase (ACCase), β-ketoacyl-ACP synthase I (KAS I), and acyl carrier proteins (ACP). At the seedling stage of this study, three genes, Nta01g31980 , Nta08g22780 , and Nta23g11140 , were enriched in the fatty acid biosynthesis pathway. These genes were annotated as an acyl-activating enzyme, an alcohol dehydrogenase, and an enoyl-[acyl-carrier-protein] reductase, respectively. Expression of these genes was not detected in K326 at either stage, while they were expressed in the other three varieties. Among the gene expression differences at the budding stage, genes such as Nta02g06150 (KAT1), Nta12g29320 (KAT1), Nta12g07350 (KCS11), and Nta11g08380 (KCS11) showed relatively higher expression levels in K326 than in the other three varieties. These differences in gene expression may contribute to the distinct fatty acid profile of K326 during the corresponding periods. In long-term production and experimental research, it has been observed that during the growth and development of tobacco plants, as well as the maturation and senescence of tobacco leaves, the content of various fatty acids initially increases and then decreases. As the leaves mature and age, the content of saturated fatty acids decreases, while the content of unsaturated fatty acids gradually increases. Spatially, the fatty acid content is higher in the middle and lower leaves of the plant and lower in the upper leaves. Liu et al. found that fatty acids and similar compounds decrease during the vegetative growth phase but increase during the reproductive growth phase. In the upper leaves, the fatty acid content peaks around 75 days post-transplantation, just before early flowering, and continues to decrease in senescent leaves. These results are consistent with the observation in this study that the fatty acid content in K326 decreases after topping, which is likely due to the shift back to vegetative growth after topping. During the growth, development, and senescence of tobacco leaves, the total fatty acid content first increases and then decreases. During the air-curing process, the fatty acid content in tobacco leaves also decreases. According to the results of this study, the variation in fatty acid content in tobacco leaves can be attributed to differences in varieties, leaf positions, and processing methods. Therefore, when curing tobacco leaves, it is recommended to adjust the curing conditions and processing measures based on the fatty acid content to optimize leaf quality and flavor.
Our study systematically analyzed and compared the fatty acid composition of four different tobacco varieties. Using metabolomics and transcriptomics, we performed differential analyses on tobacco leaves at four developmental stages and under two curing methods. A fatty acid metabolic profile was established for the K326 variety at the budding stage, and three fatty acid-related genes whose expression differed significantly from that in the other varieties were identified at the seedling stage. Gene expression profiles for three fatty acid-related pathways were constructed for this period. Additionally, we compared the fatty acid differences in tobacco leaves across the four varieties under flue-curing and air-curing. This study broadens our understanding of fatty acids across tobacco varieties and developmental stages, providing a molecular basis for further research on fatty acids and the genes involved in their biosynthesis and metabolism. Supplementary Material 1: Table S1. Raw Data of Targeted Fatty Acid Metabolomics for K326 (Unit: ng/g). Supplementary Material 2: Table S2. Short-Chain Fatty Acids Information Table at the Seedling Stage. Supplementary Material 3: Table S3. Short-Chain Fatty Acids Information Table Before and After Topping. Supplementary Material 4: Table S4. qPCR Primer Information. Supplementary Material 5: RNA-seq Analysis Detailed Methods. Supplementary Material 6: Table S5. DEG and Pathway Enrichment Information.
Prolonged Low-Dose Administration of FDA-Approved Drugs for Non-Cancer Conditions: A Review of Potential Targets in Cancer Cells
In the United States, cancer has been the second leading cause of death behind heart disease for decades. A large body of research has been devoted to preventing cancer incidence. Within this body of work, certain FDA-approved drugs not specifically designed for cancer treatment have been reported to affect the rate of cancer incidence. For instance, the drug metformin is commonly used to treat type II diabetes for its ability to lower blood sugar levels by limiting the amount of glucose absorbed by the body and increasing sensitivity to insulin. Yet due to metformin's demonstrated ability to regulate signaling pathways involved in cell proliferation and apoptosis, the drug is also regarded as an anticancer agent. When used in diabetic patients for prolonged periods, it decreased the incidence of prostate cancer, colorectal cancer, and breast cancer. Another example is aspirin, an analgesic and antipyretic that is also used to reduce the risk of cardiovascular disease. Although FDA-approved for conditions with no previous association with cancer, it was shown to decrease the incidence of colorectal cancer, pancreatic cancer, and ovarian cancer. One more FDA-approved drug that affects cancer incidence is simvastatin. Primarily used to lower cholesterol, simvastatin reduces the incidence of renal cell carcinoma. An additional pro-apoptotic effect of simvastatin was demonstrated in endometrial cancer cells. The list of drugs possessing similar effects is already quite lengthy ( ) and is expanding as more and more FDA-approved drugs are shown to be effective in reducing the incidence of different types of cancer. Administered at low doses that are relatively well tolerated by the human body, such drugs could be widely used in clinical practice while avoiding the adverse side effects associated with chemotherapy. The goal of this article is to review some of the most widely used FDA-approved drugs prescribed for non-cancer conditions, highlighting the discrepancies between their in vivo and in vitro potency as anticancer drugs and their possible mechanisms of action as chemopreventive agents. We will discuss the existing evidence linking the effects of these drugs to cancer stem cells, cellular senescence, clonogenicity, and other potential targets. Plasma concentration, the drug concentration in plasma derived from a patient's blood after drug intake, and in vitro concentration, the concentration typically used in in vitro assays, are the critical factors for evaluating the efficiency of each type of drug. Discrepancies between the plasma concentrations and the in vitro concentrations needed to eliminate cancer cells have been described for each type of drug. In general, the in vitro concentrations required to reduce the number of cancer cells are higher than the plasma concentrations in the body. Metformin, for example, has displayed a plasma concentration of 0.00116 mM (<1.5 μg/mL), while the in vitro concentration that inhibits the proliferation of cancer cells was typically 5–30 mM. As such, the difference between the in vivo and in vitro concentrations is more than 4310-fold. Furthermore, a low concentration of metformin (0.2 mM) has a selective effect on the growth of pancreatic adenocarcinoma AsPC-1 and SW1990 cells, based on the differential expression of surface markers.
After treatment with 0.2 mM metformin in vitro, the proportion of CD133+ cells was reduced through inhibition of proliferation via G1/S arrest, but not through apoptosis. However, low concentrations of metformin did not affect CD24+, CD44+, ESA+, or CD24+CD44+ESA+ cells. It is interesting, nevertheless, that low concentrations of metformin reduced the invasion of pancreatic cancer cells in vitro and inhibited pancreatic cancer xenograft growth in vivo (at a plasma concentration of 0.02 mM). For comparison, the peak plasma concentration of aspirin was found to be up to 0.304 mM (54.25 mg/L), while the IC50 obtained in vitro for 72 h of treatment was 5 mM for MDA-MB-231 and 2.5 mM for MCF-7 breast cancer cell lines. The comparison between plasma and in vitro concentrations thus demonstrates a more than 16-fold difference. A similar difference can be observed for simvastatin, which has displayed peak plasma concentrations ranging from 0.08 to 2.2 μM and from 0.03 to 0.6 μM for simvastatin lactone and carboxylate, respectively, while its effective in vitro concentrations were between 0.001 mM and 0.005 mM, a more than 10-fold difference. Treatment of MDA-MB-231 breast cancer cells with 1–5 µM of the drug was sufficient to increase the expression of the tumor-suppressing genes p21 and p27 as well as the expression of miR-140-5p, which acts as a tumor suppressor in breast cancer, resulting in induced apoptosis and inhibited cell proliferation. Therefore, it can be stated that the effectiveness of these drugs against cancer shows a great disparity between plasma and in vitro concentrations. Other examples can be found in . The discrepancies cannot be explained by pharmacokinetic factors present in vivo but absent in in vitro experiments. For instance, in in vitro experiments there is no metabolic elimination or clearance of drugs by organs such as the liver and kidneys. Thus, if pharmacokinetic factors played a role, these drugs would be expected to be more potent in vitro than in vivo. Moreover, while the cell lines used in in vitro experiments are mainly from commercial sources and might differ from the cancer cells in vivo, this still does not explain the general discrepancy. This assumption is supported by the fact that when drugs are tested in commercial cell lines and patient-derived cell lines, the overall potency is similar. For instance, we found that commercial DBTRG.05MG glioma cells showed similar sensitivity to menadione and vitamin C compared to a panel of eight different glioma patient-derived cell lines. In general, for classical anticancer drugs commonly used for cancer treatment, there are no discrepancies between the plasma concentrations found in patients and the in vitro concentrations needed to eliminate cancer cells ( ). Doxorubicin (DOX), a chemotherapeutic agent frequently used for the treatment of a variety of cancers, provides a good example. Its cytotoxic action involves multiple mechanisms, including DNA intercalation and adduct formation, topoisomerase II (TopII) poisoning, the generation of free radicals and oxidative stress, and membrane damage through altered sphingolipid metabolism. The plasma concentration found in patients ranges between 0.023 and 1.14 μM, and the IC50 of DOX in MDA-MB-231, MCF-7, MDA-MB-468, and 4T1 cells was 0.28 µM, 0.14 µM, 0.13 µM and 0.11 µM, respectively.
Thus, the DOX concentration achieved in plasma is high enough to eliminate several types of cancer cells in vitro. The failure of DOX to eliminate cancer cells is attributed to resistance, with some cell lines exhibiting a resistance index >100. Another example is cyclophosphamide and its main active metabolite, phosphoramide mustard. The cyclophosphamide serum level can reach up to 175 µM, a concentration higher than the in vitro IC50 described for human HL60 cells (IC50 = 8.79 μM) or mouse BALB/c 3T3 cells (IC50 = 37.6 μM). Peak plasma levels of phosphoramide mustard of 50 to 100 μM were found 3 h after cyclophosphamide administration, a concentration range higher than the in vitro IC50 for several cancer cell lines, including V-79 Chinese hamster lung fibroblasts (IC50 = 1.8–69.1 μM) and rat spontaneously immortalized granulosa cells (IC50 = 3–6 μM). At these concentration ranges (3–6 μM), phosphoramide mustard induces DNA adduct formation and ovarian DNA damage and increases DNA damage response (DDR) gene mRNA expression levels and DDR protein levels within 24–38 h. Capecitabine is a chemotherapeutic drug for the treatment of patients with metastatic breast cancer, metastatic colorectal cancer, pancreatic adenocarcinoma, and gastrointestinal cancer. Capecitabine is a prodrug that becomes effective when it is metabolized to 5-fluorouracil (5-FU) through three enzymatic reactions. After oral administration, both capecitabine and 5-FU reach their peak plasma concentrations within 2 h, and their elimination half-life is less than 1 h. However, prolonged oral administration of capecitabine has been shown to increase the elimination half-life to up to 11 h. Peak plasma levels of 5-FU detected in patients range from 0.845 µM at 1 h after capecitabine administration to 2 µM, and up to 31 µM, at 2 h after drug administration. Interestingly, the IC50 for 5-FU measured in different cancer cell lines falls in the interval between 0.2 and 55 µM. Since the accepted protocol for capecitabine treatment requires oral administration twice a day for 14 days of every 3-week cycle, plasma levels of 5-FU comparable to its IC50 are maintained in human patients. Stem cells (SCs) are cells that can develop into many different cell types. They also have a capacity for self-renewal, which generates more undifferentiated stem cells, while differentiation gives rise to mature cell types. A small subpopulation of cells within tumors that demonstrates characteristics of both SCs and cancer cells is termed cancer stem cells (CSCs). A notable feature of CSCs is their ability to initiate tumors when transferred into an animal host, even in amounts as small as 100 cells. CSCs are also characterized by the expression of cell surface markers, which are utilized to isolate and enrich CSCs. Interestingly, the expression of such markers is tumor subtype-specific: CD44+CD24−/low lineage− and ALDH+ cells are abundant in breast CSCs, CD133+ in colon, brain and pancreas, CD44+ in head and neck and cervix, and CD90+ in liver CSCs and head and neck cancers. CD133+ has been utilized to identify a radioresistant subpopulation of glioma cells, demonstrating that radioresistance is associated with increased DNA repair in glioblastoma CSCs and pointing to CD133 expression as a predictive factor of clinical outcome for patients with glioma. CSCs have also been shown to play an important role in the development of chemotherapy resistance in different types of cancer.
In addition, CSCs have a significant impact on cancer relapse and metastasis [ , , ]. Multiple subpopulations of CSCs have been detected in different types of tumors, giving rise to the concept of tumor heterogeneity, which refers to the biological differences between malignant cells of the same tumor arising from genetic and nongenetic mechanisms; these differences are responsible for the degree of resistance of cancer cells to a given anticancer drug. This concept was further developed in the classical “Cancer Stem Cell Theory” (CSCT), which postulates a hierarchical organization in which a subset of CSCs can irreversibly differentiate into all types of non-CSCs. Thus, it should be sufficient to eliminate only the rare subpopulations of CSCs to effectively cure a cancer patient or at least achieve a significant improvement [ , , ]. According to the CSCT, tumor heterogeneity results from the division of cancer stem cells producing cells with differing states of differentiation or stemness . Nevertheless, this rigid hierarchical model was found insufficient to explain the experimental findings [ , , ]. An alternative plasticity model, the “Dynamic CSC Model” (DCSCM), put forward the idea that differentiated tumor cells and cancer stem cells can interconvert into each other . Consequently, each cancer cell has the potential to acquire a cancer stem cell phenotype. Another model, the “Stemness Phenotype Model” (SPM), explains inconsistencies in experimental data not accounted for by the clonal evolution model (CEM) or the classical cancer stem cell model (CCSCM). This model describes a non-cancer cell evolving into a cancer cell that divides symmetrically during carcinogenesis. Through a process known as interconversion, any cancer cell can acquire a different phenotype depending on the microenvironment, and, following substantial changes in the microenvironment conditions, phenotypic changes remain viable in the surviving cells . An additional model, the complex system model (CSM), implies that genetic and epigenetic transformations might occur within a single tumor, developing a multifaceted cell system consisting of coexisting tumor-initiating cell types. Interference with the cell-cell and cell-niche interactions may weaken the entire tumor system, while, according to this model, every potential tumor-forming cell must be targeted for effective therapy . These alternative models of cancer biology strongly suggest that simply targeting CSCs will not be sufficient to either eradicate cancer or prevent carcinogenesis. In non-cancer cells, senescence is an irreversible response to damage that may occur in cells with age or in cells undergoing prolonged stress . Senescence can be naturally caused by the shortening of telomeres, the protective chromosomal termini . Telomeres shorten with every cell division due to the inability of DNA polymerase to completely replicate the lagging strand. When telomere length reaches a critical point, their protective structure is disrupted, leading to telomere dysfunction, including chromosomal fusion. To prevent such outcomes, cells undergo a transition to a non-dividing state, which limits the expansion of undesired cell populations . Other cellular conditions that can lead to the development of senescence include oncogene activation, oxidative stress, mitochondrial dysfunction, irradiation, and exposure to chemotherapeutics . Therapy-induced senescence (TIS) is now a well-established outcome of conventional cancer therapy.
The primary cause of TIS is DNA damage , which initiates the DDR through p53-mediated induction of p21, a cyclin-dependent kinase inhibitor that prevents cell-cycle progression [ , , ]. The next phase, senescence maintenance, has been shown to be based on p16 activity , which prevents the phosphorylation of retinoblastoma protein (Rb) family members and promotes the formation of the Rb/E2F complex, facilitating chromatin alterations, mainly histone 3 lysine 9 trimethylation (H3K9me3), which was considered to permanently arrest the cells in the G1 phase [ , , ]. TIS was considered a favorable outcome of therapy, as the growth arrest and DNA damage associated with senescence have been shown to prevent uncontrolled cell proliferation and to keep carcinogenic mutations from being passed on to the next generations of cells . Nevertheless, further studies demonstrated that at least a subpopulation of tumor cells can escape TIS and give rise to a more aggressive cancer phenotype able to overcome the cell-cycle blockade . An important characteristic of senescent cells is their high resistance to apoptosis [ , , ]. Thus, the senescent state allows cancer cells to avoid therapy-induced apoptosis . Later, malignant cells can escape senescence, consequently re-entering the cell cycle and causing tumor recurrence [ , , ]. In the senescence maintenance phase, senescent cells acquire the senescence-associated secretory phenotype (SASP), which can modulate signaling pathways in neighboring cells and tissues through the secretion of cytokines, chemokines, growth factors, and mRNAs, mostly in extracellular vesicles. The SASP can also promote tumorigenesis by creating an inflammatory microenvironment through the enhanced expression of cytokines and chemokines [ , , ], especially IL-6 and IL-8, which lead to increased blood supply and tissue repair , thus supporting tumor progression, invasion, and metastasis . Other components of the SASP include matrix metalloproteases, which create tumor-favorable microenvironments , VEGF, which promotes angiogenesis , as well as factors promoting the epithelial–mesenchymal transition and inducing a cancer stem cell-like phenotype . On top of that, the SASP of chemotherapy-treated cancer cells can produce highly chemotherapy-resistant cell populations . Hence, senescent cells can exert both beneficial and adverse effects on tumor progression, which makes them a very important target for cancer therapy. Several FDA-approved drugs not designed for cancer-related applications exhibit pharmacological properties that can be beneficial for cancer therapy. For example, aspirin is a potent inhibitor of NF-κB , suggesting that this drug can contribute to the elimination of CSCs. Indeed, daily aspirin use has been shown to reduce the risk of colorectal , pancreatic , esophageal, and gastric cancer , and the recurrence of breast cancer , as well as to reduce death due to several common cancers . In vitro experiments using aspirin concentrations equivalent to plasma levels (between 1 and 5 mM) demonstrated a decrease in the expression of CSC markers (c-Met, CD44, Ki67, CXCR4) and the inhibition of ALDH1 activity and spheroid formation in pancreatic adenocarcinoma AsPC-1 cells. Furthermore, xenograft pancreatic tissue from mice treated with aspirin revealed a reduction in SOX2, CD133, p65, and TNF-α, as well as the ECM components fibronectin and collagen.
A recent study found that aspirin decreases metastasis in a mouse model by increasing T cell activation at the metastatic site, provoking immune-mediated rejection of lung and liver metastases . These findings suggest that aspirin targets highly aggressive cancer cells as well as non-cancer cells. In comparison, low concentrations of metformin did not inhibit the proliferation of pancreatic cancer cells overall but decreased the proportion of CD133+ cells, a type of pancreatic CSC, in a dose-dependent manner by specifically inhibiting their proliferation through G1/S arrest after the cells were treated with 0.1–0.2 mM metformin for 72 h. Moreover, xenograft experiments confirmed the effect of low-dose metformin on pancreatic cancer in vivo, as oral administration of metformin significantly inhibited xenograft growth. The observed effects are attributed to the inhibitory activity of metformin on Erk and mTOR in CD133+ cells . Alternatively, metformin has been shown to inhibit the SASP through inhibition of the NF-κB pathway, ultimately limiting the expression of inflammatory cytokines. Indeed, at doses of 1 mM or higher, metformin reduced cytokine gene expression in senescent cells without affecting cell proliferation, while at doses reduced to 0.5 mM it became moderately stimulatory for proinflammatory cytokines such as IL-6 and IL-8 but remained inhibitory for CXCL5. Therefore, high doses of metformin can impede the negative effects of senescent cells without compromising its anticancer effects . Salinomycin, a drug that inhibits the proliferation of cancer stem cells, has also been shown to significantly reduce the number of senescent glioma cells in vitro. Glioma is a difficult tumor to treat, often requiring combined treatment such as surgery, radiotherapy, and chemotherapy. Upon treatment of glioma cells with a high concentration of hydroxyurea (HU) or aphidicolin, a fraction of the cells survived and subsequently began a cycle of re-growth. The surviving cells displayed senescence-associated β-galactosidase staining as well as arrested cell division and a flat morphology, which are characteristic features of senescent cells. When these cells were then treated with a low dose (0.5 μM) of salinomycin for 72 h, surviving cells were no longer detected and re-growth was prevented. Treatment with an even lower concentration (0.25 μM) did not kill the surviving cells but prevented the re-growth. This two-step treatment not only opens the door to a safer way of treating the tumor without high toxicity for the patient but also provides a principle that can be applied to other senescent cancer cells . Indeed, recent studies have validated such an approach to anticancer therapy [ , , ]. Taken together, these data suggest the existence of multiple mechanisms through which PLDA of some drugs may target cancer cells (stemness, senescence, clonogenicity), but the evidence is still too scarce to conclude that these mechanisms are the main target of PLDA of FDA-approved drugs. The limited data supporting the ability of PLDA of FDA-approved drugs to eliminate CSCs or senescent cells suggest that other cellular processes may be important and are worth considering. We suggest that clonogenicity as well as cellular plasticity may play a role. While this suggestion is merely speculative due to the lack of available experimental data, it could be explored in future experiments. A recent study performed in our lab pointed to clonogenicity as one more potential target of PLDA.
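Since clonogenicity figures prominently in what follows, it helps to recall how the colony-formation assay is quantified: plating efficiency (PE) for untreated cells and surviving fraction (SF) for treated ones, per the standard protocol (eg, Franken et al, Nat Protoc 2006). A minimal sketch with invented colony counts:

def plating_efficiency(colonies: int, cells_seeded: int) -> float:
    # Fraction of untreated cells that grow into colonies
    return colonies / cells_seeded

def surviving_fraction(colonies: int, cells_seeded: int, pe: float) -> float:
    # Colonies formed after treatment, normalized to the untreated PE
    return colonies / (cells_seeded * pe)

pe = plating_efficiency(colonies=80, cells_seeded=200)          # -> 0.40
sf = surviving_fraction(colonies=30, cells_seeded=500, pe=pe)   # -> 0.15
print(f"plating efficiency = {pe:.2f}, surviving fraction = {sf:.2f}")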
We compared the effect of either nigericin (an antibiotic active against gram-positive bacteria) or menadione (vitamin K3) on the viability and clonogenicity of the lung carcinoma A549 and H460 cell lines as well as the breast carcinoma MCF-7 and MDA-MB-231 cell lines. The ability of either drug to eliminate cancer cells was 2–10-fold greater in the colony-forming assay than in the viability assay, suggesting that PLDA of certain drugs targets clonogenic rather than proliferation pathways. Our data also revealed the existence of a short post-reattachment window of time during which cancer cells growing at low density are more sensitive to specific drugs . Thus, PLDA of such drugs can eliminate cancer cells while they are highly sensitive immediately after reattachment, thereby preventing the formation of metastases. Clonogenicity is the ability of a single cell to proliferate and develop into a full tumor. The clonogenic assay is an in vitro cell survival assay based on the ability of one cell to grow into a colony. It has been used as a measure of CSC stemness, that is, the cells' potential for proliferation without bounds, self-renewal, differentiation into multiple tissue types within a lineage, and tumorigenicity . Through such clonogenic assays, it can be demonstrated that a single CSC can generate clonogenic colonies, supporting its potential for cancer metastasis and repopulation after treatment . In previous studies, both holoclones and meroclones of the prostate cancer cell line DU145 were shown to contain cells with stem cell qualities, based on the analysis of colony-forming ability, transplantation capacity, and marker expression. In addition, the presence of CSCs in different types of colonies was confirmed by positive staining for the stem cell markers CD44, α2β1 integrin, Oct4, and BMI1 . Another essential characteristic of tumors is intratumor heterogeneity, the biological difference that exists among malignant cells from the same tumor, which is responsible for the impaired response to particular anticancer drugs . Recent studies illustrate the complex interplay between clonogenicity, stemness, and intratumoral heterogeneity [ , , , , ]. Unfortunately, for technical reasons, most experiments using clonogenic assays have been performed for short periods (3–10 days) with the high drug concentrations typically used for in vitro experiments and not with the concentrations typically found in the plasma of patients. Cancer cell plasticity refers to the ability of a cell to reversibly shift (interconvert) between a differentiated state with limited tumorigenic abilities and an undifferentiated cancer stem cell state that promotes rampant cell division and tumor growth. Stem cell plasticity represents one of the major therapeutic challenges for differentiation therapies. Poorly differentiated tumors (displaying a mesenchymal phenotype) are better suited to tolerate chemotherapy, while well-differentiated tumors are more sensitive to treatment . Plasticity can also dictate the changes between distinct CSC states, such as states with varying specialization for invasion and metastasis . Cancer cell plasticity has been linked to the epithelial–mesenchymal transition program, which describes a constant shift through a spectrum of phenotypic states capable of driving local invasion, generating cancer stem cells, and facilitating metastasis through the dissemination of circulating tumor cells .
By blocking interconversion, it may be possible to prevent tumorigenesis and metastasis, but at present the underlying mechanism of interconversion is poorly understood, and there are no data on drugs affecting interconversion at the concentrations found in patients. FDA-approved drugs such as metformin, aspirin, and simvastatin have been presented as alternative methods for chemoprevention, as they are capable of eliminating cancer cells in in vitro experiments. However, the in vitro concentrations needed to affect these cancer cells are often more than 1000-fold higher than the plasma concentrations the body can sustain. Even in the case of CSCs and senescence, where low doses of these FDA-approved drugs have been found to affect cell proliferation, the in vitro concentration remains too high for the body to sustain. The fact that PLDA of these drugs has indeed decreased the incidence of certain types of cancers indicates that there are as yet unidentified essential cellular processes that may be the actual targets of these drugs. In this regard, genistein may offer some clues. Genistein is a protein tyrosine kinase and topoisomerase II inhibitor present in soy that decreases the incidence of breast, colon, and prostate cancers. The disparity between the plasma concentration and the in vitro concentration needed to eliminate cancer cells is high. The plasma concentrations of genistein in the European and the Asian populations have been compared. The median circulating genistein concentration in the top fifth of the distribution amounted to 14 ng/mL in the European population , 7-fold lower than the 99 ng/mL found in Japanese men , showing the varying plasma concentrations that result from the diet of the population studied. However, the concentration needed to reduce the autophosphorylation of the EGF-R-associated tyrosine kinase by 50% is 2.7 μM, similar to the 2.4 μM plasma concentration found in Asians with a traditional diet rich in soy products . This raises the possibility that there may be other, more specific targets, such as the inhibition of autophosphorylation of the EGF-R-associated tyrosine kinase, that genistein acts on when present for prolonged periods at the concentrations found in vivo. Additionally, resveratrol, a phytochemical that targets cancer stem cells, has been used effectively in traditional medicine for over 2000 years . The compound possesses antioxidant, anti-inflammatory, cardioprotective, and anticancer properties . Resveratrol can reverse multidrug resistance in cancer cells, and, when used in combination with clinically used drugs, it can sensitize cancer cells to standard chemotherapeutic agents or radiation [ , , , ]. The multiple effects of resveratrol on cancer include reducing oxidative stress [ , , ], arresting the cell cycle and promoting apoptosis [ , , ], decreasing inflammation-related tumorigenesis through the inhibition of STAT3 , and modifying the tumor microenvironment to reduce progression and invasion [ , , ]. However, the resveratrol plasma concentrations attained in cancer patients do not reach the drug concentrations required to eliminate cancer cells in vitro, arguing against the effectiveness of this drug in decreasing the incidence of cancer, although some positive effects of resveratrol on colorectal cancer have been revealed in clinical trials [ , , ].
Still, as in the case of genistein, the multiple effects of resveratrol suggest that the lower incidence of some types of cancers could be the result of very specific, cell type-dependent processes. Most FDA-approved drugs that reduce the incidence of cancer require a much higher concentration in vitro than is achieved in vivo to effectively eliminate cancer cells. In special circumstances, low doses of these FDA-approved drugs, including metformin, aspirin, and simvastatin, have been shown to inhibit cancer cell growth by specifically targeting CSCs and senescent cells. Thus far, the specific mechanisms by which chemoprevention therapy targets CSC or senescence pathways have not been identified, so the data related to those processes remain insufficient. However, the potential to discover these mechanisms and apply them to the administration of various FDA-approved drugs opens possibilities for improving current chemoprevention techniques. On the other hand, the available data shown in indicate that there is no universal drug that can lower the incidence of all types of cancer. This notion is consistent with the paradigm that all existing anticancer drugs are cancer type specific, targeting different mechanisms involved in the development and progression of tumors. Hence, each type of cancer may be prevented by only a few specific drugs. For instance, metformin reduces the incidence of prostate, colorectal, and breast cancer, but there is no known effect on, for example, renal cell carcinoma or pancreatic cancer. This seems to be the trend for all FDA-approved drugs known to reduce cancer incidence. It is also important to clarify that metformin does not prevent breast cancer completely but only reduces its incidence. Thus, even for a particular type of cancer, only a fraction of patients will benefit from taking metformin, so the likelihood of eliminating cancer incidence with a single “magic” chemopreventive agent is remote. However, the identification of specific drugs with the ability to reduce the incidence of a particular cancer type in selected groups of patients offers a promising strategy to reduce cancer metastasis for that cancer type. This notion is supported by a recent study that tested the effect of digoxin, a Na+/K+ ATPase inhibitor typically prescribed for cardiovascular conditions. The authors found that digoxin suppressed circulating tumor cell clusters and blocked metastasis in breast cancer patients treated daily for one week with a maintenance digoxin dose (0.7–1.4 ng/mL = 0.896–1.79 nM serum level) . For comparison, the IC50 of digoxin was 60 nM, 230 nM, 80 nM, and 170 nM for the MCF-7, BT-474, MDA-MB-231, and ZR-75-1 breast cancer cell lines, respectively . Pathway analysis of samples collected and processed for next-generation RNA-seq showed a highly significant downregulation of cell-cycle-related genes . Tamoxifen is a particular drug that adds clinical evidence for PLDA aimed at preventing metastasis. Tamoxifen is an anti-estrogenic substance effective in adjuvant therapy for human breast cancer. When prescribed to women with estrogen receptor (ER)-positive early breast cancer for 5–10 years, it reduced the risk of breast cancer recurrence, breast cancer mortality, and overall mortality . The anticancer effect of tamoxifen is believed to be due to its hydroxylated metabolites, 4-hydroxytamoxifen (4OHtam) and 4-hydroxy-N-desmethyltamoxifen (4OHNDtam/endoxifen), because of their high affinity for the ER .
Its main active metabolite, 4-hydroxytamoxifen, shows an anticancer effect when tested in vitro, with IC50 values of 18 and 27 μM for the MCF-7 and MDA-MB-231 human breast cancer cell lines, respectively. These concentrations are far higher than the plasma concentration, which ranges between 0.0213 and 0.0227 μM . However, the lowest effective in vitro concentrations of endoxifen (around 20 nM) are within the plasma concentration range (5–80 nM) found in patients (see ). Thus, tamoxifen provides a good example of a drug that, when used for a prolonged time at relatively low concentrations, prevents metastasis not directly but possibly through one of its active metabolites. This raises the possibility that some of the FDA-approved drugs listed in are prodrugs and that a few of their unknown, or known but not well-characterized, active metabolites target key biological processes in cancer and non-cancer cells (as recently reported for aspirin) that drive tumor relapse and metastasis. Provided that the initiation of both the primary tumor and metastasis in the same type of cancer (in the same patient) share the same mechanisms, it would be possible to use PLDA of a known drug to reduce metastasis formation. This concept is illustrated in using a hypothetical drug X in a specific subtype of triple-negative breast cancer carrying a specific mutation. This strategy can be applied to other subtypes of cancer. It can be anticipated that the prevention of metastasis will require the identification of several specific drugs, each of which targets a specific cancer subtype, leading to a new approach to personalized medicine in oncology.
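A closing note on units: the serum levels quoted in this section mix mass concentrations (ng/mL) with molar ones (nM, μM); converting between them needs only the molecular weight, nM = (ng/mL) / MW(g/mol) × 1000. A minimal sketch using digoxin (its molecular weight of roughly 780.9 g/mol is the only added assumption) reproduces the maintenance range quoted above.

DIGOXIN_MW_G_PER_MOL = 780.9

def ng_per_ml_to_nm(conc_ng_per_ml: float, mw_g_per_mol: float) -> float:
    # ng/mL equals ug/L; dividing by MW gives umol/L; x1000 converts to nmol/L
    return conc_ng_per_ml / mw_g_per_mol * 1000.0

for c in (0.7, 1.4):
    print(f"{c} ng/mL digoxin = {ng_per_ml_to_nm(c, DIGOXIN_MW_G_PER_MOL):.2f} nM")
# -> 0.90 nM and 1.79 nM, some 30- to 250-fold below the 60-230 nM IC50s
# reported above for the four breast cancer cell lines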
Grappling With the COVID-19 Health Crisis: Content Analysis of Communication Strategies and Their Effects on Public Engagement on Social Media
af5db13e-e780-4ac7-897c-5b6dbf17e6dc
7446717
Health Communication[mh]
Background

The first known coronavirus disease (COVID-19) case was reported in China on November 17, 2019 , and on January 23, 2020, the government in China imposed a strict lockdown in Wuhan, the epicenter of the virus. Despite a massive containment effort, by late February, 80,000 cases had emerged . By March, COVID-19 was confirmed in many countries worldwide, and the World Health Organization (WHO) declared COVID-19 a pandemic on March 11, 2020 . Pandemics in the past, such as the 2003 severe acute respiratory syndrome and H1N1, have had significant impacts on people's lives, socioeconomic activities, and population movement . COVID-19 presented similar impacts, but its spread was even faster . A pandemic requires large-scale immediate actions by the government to connect with the public and a change in the public's behavior to combat the rapid spread of the disease . For a new disease such as COVID-19, effective epidemic communication is crucial to inform the public about the latest updates on the disease, motivate them to adopt preventive measures to minimize the transmission of the disease, and reassure them that the government is capable of handling the situation . Many studies on epidemic and pandemic communication exist for traditional media , suggesting that the public learns about the health risks associated with a pandemic from the media , which affects how they respond to the epidemic or pandemic . In recent years, social media has played an increasingly important role in promoting health risk communication during epidemics . Research using social media to investigate public attention to new epidemics has been conducted, for example on H7N9 , the Ebola outbreak , and the H1N1 pandemic in 2009 . However, few studies have adopted social media analysis to examine government media communication with the public and the public's response to the new COVID-19 epidemic . Because timely public action is needed to contain the spread of the new disease, it is of urgent importance to investigate how government media communication engages the public. This information can provide insights on what the media, health organizations, and government can further do to disseminate information to the public so that the latter can take appropriate measures to stem the spread of the virus. In terms of what organizations emphasize in their epidemic or pandemic communication, a prior study found that most corporate and government organizations in the United States relied on the content frames of health crises, health issues, and disasters in communicating messages about the 2009 H1N1 flu pandemic with the public. Government organizations were more likely than corporate organizations to frame the H1N1 pandemic as a general health issue and emphasized uncertainty, disease detection, and preventive measures . The style of communication can also have an impact on public engagement, in that a narrative style has a positive effect on preventive and detection health behaviors, while arguments and facts may also be used . Researchers have pointed out that narratives promote health behavior change , yet there is a lack of research on the use of narratives in pandemics for effective health communication, apart from Sandell et al, who revealed that positive narratives were effective in raising the public's awareness of health risks and the preventive measures to curb the spread of the 2009 H1N1 pandemic .
Additionally, interactions on social media can affect health behavior and attitudes , and thus the creation of a dialogic loop via the use of interactive features is important on social media. This can be done by allowing the public to post questions and receive feedback and by using interactive features such as multimedia and hashtags . A previous study found a positive relationship between chief executive officers' (CEOs) use of hashtags and public engagement with respect to likes, shares, and comments on social media . However, a research gap exists in understanding the interactive features used by the government in its communication with the public on social media with regard to a pandemic. Synthesizing this literature, our study was guided by the observation that there is scant research on the use of social media to disseminate information about COVID-19 and on public engagement with this information . In particular, there is a research gap in understanding the content frames employed by the government's media in the Chinese context, its style of communication, and the use of interactive features in its communication with the public regarding a new epidemic. Therefore, in this study, we investigated how the most-read government-owned newspaper in China, People's Daily, serving as the main vehicle for the government's dissemination of information to the public, employed a social networking site, Sina Weibo, to communicate and possibly engage the public with COVID-19. As of 2014, China had 649 million internet users . To harness the power of the internet, the main Chinese state-owned media such as People's Daily and CCTV News have shifted the paradigm of media coverage by placing more emphasis on communication with the public via social media . They have also switched to a more interactive style to better connect with the public . In China, where Facebook is blocked, Weibo, a social media platform introduced by the Chinese commercial corporation Sina, has taken over and become the largest social media network . In 2018, Weibo had over 462 million active users and was used by approximately 200 million people every day . With years of the government's continued efforts, the reputation of the Chinese state-owned media has improved significantly in the eyes of the Chinese public . State-owned media such as People's Daily now maintain a strong web presence and a user-friendly rather than authoritative image . People's Daily encourages its audience to participate in discussions and demonstrates a strong tendency to adopt positive and persuasive messages . For example, on haze-related issues, instead of providing pictures of haze with a negative valence, People's Daily posted positive images that encouraged the public to appreciate the beauty of nature, accompanied by persuasive messages suggesting substantial improvements to be made in the future . This is vastly different from how China Daily handled the same topic, displaying a cartoon of Santa Claus hitting a tree due to haze . This example demonstrates that the state-owned media in China, exemplified by People's Daily and its online platform, Sina Weibo, have actively adapted their styles of interactive communication to better engage the public. In terms of health emergency communication, previous studies have found that social media platforms such as Twitter and even the photograph-based Instagram played a significant role in guiding the public during the Zika virus outbreak in 2016 .
For China, Sina Weibo performs a similar role during pandemics, since the government, news media, and the public heavily relied on it as an online platform for communicating information during the current COVID-19 outbreak . Sina Weibo serves as a pivotal communication platform for the government to interact with the public and disseminate information about COVID-19, such as its symptoms, preventive measures, and adopted health policies . Therefore, we contend that People's Daily would also communicate information about COVID-19 and interact with the public on its social media platform, Sina Weibo.

Developing an Integrated Framework

In our study, we integrated factors including health crises framing in the media context , message style in health communication , and interactive features to examine epidemic communication and public engagement in China. We then developed an integrated framework to investigate the relationship between these factors and the levels of public engagement. Since our study also investigated public engagement in the form of likes, comments, and shares, it might offer insights on how effectively social media platforms such as Weibo can be used for epidemic communication. The WHO has advised governments to take proactive steps to communicate with the public about epidemics, as the sharing of critical information about an epidemic can minimize the spread of the disease and foster the public's collaboration with the government . Social media serves as a major communication platform for the government and public health authorities to provide timely health information to the public . The contribution of this study is that we incorporated three key dimensions of health emergency communication on social media, namely, the framing of health crises and issues , message style , and the interactive loop , to examine COVID-19 communication by the government-owned media and public engagement in China. Our findings shed light on how responses to the epidemic are framed by the media and what encourages the public to engage with such communication and take appropriate actions to slow the spread of the virus. In the following, we explain the three dimensions adopted in our study: content frame, message style, and interactive features.

Interactive Features Dimension

The interaction (ie, one-to-one or one-to-many) on social media sites can influence health behavior and attitudes , and consequently, the promotion of the dialogic loop with interactive features is crucial on such sites. An interactive dialogic loop allows the public to post questions and receive feedback as well as to post and share comments . A wide range of interactive features is available on such sites, including multimedia (eg, videos, audio, photos, podcasts), stay-up-to-date tools such as hashtags, and comments on content . Hashtags enable users to find relevant shares on an issue and facilitate synchronous conversations on Twitter, thereby fostering engagement , with one study noting a positive relationship between CEOs' use of hashtags and engagement in terms of likes, shares, and comments . To encourage users to return to the site, an attractive site and relevant links are necessary. Regarding the conservation of visitors, the site should include useful external links . In health-related communication, it is known that social media posts with interactive features leave a deeper impression on the public when compared with posts in plain text .
Hence, we assigned “interactive features” as the third dimension, comprising the four subdimensions of links to external sources, use of hashtags, use of questions to solicit feedback, and use of multimedia. Although prior studies have recognized the importance of content frames , message style , and interactive features in health-related communication, the question as to whether these three dimensions can facilitate the communication of COVID-19 on the government's social media platform remains unclear. Therefore, our first research question (RQ) is derived:

RQ1: How frequently did the official social media employ the subdimensions of content frames, message style, and interactive features in its communication of COVID-19?

A clearer indication of the public's awareness of the information communicated by the government can be revealed through their actions of liking, sharing, and commenting on the government's posts. Therefore, it is pertinent to investigate the effects of the content frames, message style, and interactive features on different levels of public engagement . Social media users may use “likes” to indicate their interest in a health issue , and by commenting and sharing, the public can let others know that the issue is important, thereby serving as disseminators of the original message posted . To investigate differences in public engagement with health information posted by the government in response to COVID-19, our second RQ is posed:

RQ2: Did the subdimensions of content frames, message style, and interactive features have different levels of impact on public engagement?

Different dimensions may function synergistically to impact public engagement. One study found an interaction effect between the content and style of communication on public engagement in brand social media communication . It is, therefore, likely that interaction effects might exist between some of the dimensions or subdimensions on public engagement in COVID-19 communication. Thus, our third RQ is as follows:

RQ3: Could the dimensions (ie, content frames, style, and interactive features) or subdimensions interact synergistically to increase or decrease the levels of public engagement with the government's communication of COVID-19?

By examining the impact of content frames, message style, and interactive features on public engagement in COVID-19 communication, our study aims to provide meaningful and critical information for governments, health organizations, communication professionals, and researchers regarding the health emergency communication strategies employed and their effectiveness in raising the public's awareness of and the urgent need for taking preventive measures against COVID-19.

Content Frame Dimension

Communication related to health risks depends on persuasion, with the framing of the message informing the public about important information and motivating them to act . Framing refers to how a text or message defines an issue and provides the necessary context . Entman pointed out that “to frame is to select some aspects of a perceived reality and make them more salient in a communicating text.” Drawing on framing analysis, one can identify how organizations and the government frame their messages pertaining to critical issues for the public , thereby impacting the effectiveness of the information disseminated .
In the management of a health crisis, the media and government tend to employ six frames in message delivery: conflict (aspects of crises that bring tensions between parties), action (past or current crisis response actions), consequence (the effects or severity of the crisis), new evidence (discovery of new evidence that contributes to the understanding of the crisis), uncertainty (aspects such as the spread of the epidemic, treatment, and what is unknown), and reassurance (reassuring the public) . When handling the communication of health issues, five frames are noted in the delivery of health messages, namely, disease detection (symptoms indicating how the disease is spreading), disease prevention (taking preventive measures), health care services (the actions that the health care system is taking), scientific advances (discovery of new evidence showing how the disease is spread), and lifestyle risk factors (personal habits that are likely to lead to the disease) . In the application of these frames, Liu and Kim noted that most corporate and government organizations in the United States used the frames of health crises and health issues much more via traditional media than social media in disseminating messages about the 2009 H1N1 flu pandemic . Yet corporate organizations framed the pandemic as a health crisis rather than as a general health issue, meaning that they did not emphasize the long-term actions that could prevent the health issue from arising in the future. In addition, Liu and Kim noted that government organizations were more likely to use uncertainty, subsumed under the health crisis frame, whereas corporate organizations tended to use the conflict indicator . In another study, Shih et al noted that the frames of governmental action and consequence were predominantly used by journalists to craft stories about epidemics, including mad cow disease, West Nile virus, and avian flu, in the print version of the New York Times . Given that COVID-19 was a health crisis and health issue emerging in China and required immediate action from the public, we contend that framing this epidemic using the health crisis frames of action, new evidence, uncertainty, and reassurance would be of relevance to communication with the public, while the health issue frames of disease prevention and health care services are of salient importance too, since information is lacking on the details and duration of the epidemic. As highlighted by Shih et al , the government may attempt to minimize loss by reassuring the public with actions and new evidence via its influence on the media and its frames. Therefore, the previously mentioned frames could be effectively used in the media coverage of the epidemic. For a new epidemic, vaccines and medicine are not available to the public, so disease detection and scientific advances are tasks that only medical professionals can undertake and, thus, may not be able to engage the public. Disease prevention is vital and includes information about what preventive measures the public should adopt to reduce the risk of infection . A prior study found that government organizations in the United States were more likely to incorporate uncertainty into their crisis responses to the H1N1 pandemic, and following the implications of their results, we incorporated uncertainty into our framework, since the newspaper we examined is the main vehicle used by the government in China to communicate with the public.
Uncertainty is useful because, by indicating what is unknown, more transparency of information is provided, possibly generating trust . Conflict was primarily used by corporate organizations as opposed to government organizations for the H1N1 pandemic in the United States and was, thus, not deemed of specific value in our framework. The frames that we employed are in line with the information that the WHO recommends the media should provide to the public: offering accurate and transparent information to the public; encouraging appropriate attitudes, actions, and behaviors; and helping prevent unnecessary fear . As a result, we combined the eleven frames of health crises and issues into six frames for the investigation of COVID-19 content frames in social media posts, namely, (1) action, (2) new evidence, (3) reassurance, (4) disease prevention, (5) health care services, and (6) uncertainty . Since these frames are all content-related, we termed them subdimensions under the content frame dimension.

Message Style Dimension

Since a key objective of epidemic communication is to persuade the public to change their behavior to limit the spread of the disease while the public has a need for real-time information , effective messages need to be designed, requiring some form of appeal. In this regard, the effectiveness of narratives in health communication on disease detection and prevention has been explored . Narratives refer to stories that people use and tell and consist of anecdotes and personal stories with plots . Narratives engage the public because they make them concentrate on the story events instead of disputing the presented information, while eliciting emotional reactions and being both entertaining and informative . On the other hand, nonnarrative messages depend on the use of arguments and facts presented logically and are considered informative . Studies on the effectiveness of narratives in connecting brand advertisements with customers and in the area of health communication have been conducted . For example, a narrative film was effectively employed to communicate the need for vaccination against the human papillomavirus . Scholars have increasingly recognized the role of narratives in promoting health behavior change , but studies on the use of narratives in pandemics for effective health communication are scarce, with the notable exception of Sandell et al , who found that positive narratives were powerful in raising the public's awareness of health risks and preventive measures during the 2009 H1N1 pandemic . Based on this, we categorized narrative and nonnarrative as subdimensions under the message style dimension.

Data Collection

We selected the government-owned social media platform People's Daily's Sina Weibo account for data collection. People's Daily is the official newspaper of the Central Committee of the Communist Party of China for disseminating government information to the Chinese public . It is the most influential and authoritative newspaper in China, having a circulation of 3 million, and is ranked as one of the world's top 10 newspapers . With 117 million followers, the Sina Weibo account of People's Daily is also one of the most followed and most visited Sina Weibo sites in China.
Due to the prominent use of Sina Weibo for social media communication in China, with 462 million active online users in 2018, we captured all posts and the public's responses communicated between the government and the public on COVID-19 from People's Daily for the investigation of government communication of COVID-19 and its interaction with the public.

Sample Period

A text corpus containing all posts on the Sina Weibo of People's Daily pertaining to COVID-19 from January 20, 2020, to March 11, 2020, was constructed. The sampled period began on January 20, 2020, when the Chinese State Council officially announced the management of COVID-19 as a public health emergency issue and the corresponding preventive measures were launched to tackle COVID-19 . The sampled period ended on March 11, 2020, when the WHO declared the COVID-19 outbreak a pandemic, meaning that the regional epidemic had become a global public health emergency . Subsequently, all online posts related to COVID-19 were manually extracted from the Sina Weibo account of People's Daily, and in total, 3255 posts were collected.

Sample Size and Sample Data Collection

To generalize a sample size to represent the target population (3255 posts), we employed the sample size calculator developed by the Australian Bureau of Statistics to estimate a sample size of 620, given a confidence level of 95%, a confidence interval of 0.035, and a standard error of 0.018. A random sampling method was employed. The 620 posts and their corresponding public responses (ie, number of shares, comments, and likes) on People's Daily's Sina Weibo account from January 20, 2020, to March 11, 2020, were harnessed for quantitative content analysis. To systematically detect statistically valid outliers, we employed z scores to quantify the unusualness of the observations . There were 12 posts (2%) identified as outliers and removed from the data pool. These outliers included posts that were significantly longer or shorter, which would have otherwise caused problems during content analysis, as the length of the posts would affect the number of counts in content themes, style, and interactive features. Consequently, 608 posts and the related public responses were included in the corpus for content analysis.

Content Analysis and Coding Scheme

Content analysis was employed to examine COVID-19 communication in the 608 posts of People's Daily's Sina Weibo. Content analysis is a widely employed method in the study of technical and media communication . It is concerned with the context in which the occurrences of words, phrases, signs, and sentences are recorded and analyzed to provide an in-depth understanding . Researchers can design a variety of categories based on their interactions with the data to develop an integrated framework for quantitative studies . Content analysis is, therefore, well-suited to a coding operation involving a developed framework in the media communication context . Through an in-depth analysis of mainstream media communication, we were able to reveal and establish the relationship between the variables in the proposed conceptual framework. First, to answer RQ1, we drew on previous studies of epidemic communication, health crisis communication, and public relations to code the topics of the content dimension exhibited in the government's COVID-19 communication into the following six subdimensions on a sentence basis: (1) action , (2) new evidence , (3) reassurance , (4) disease prevention , (5) health care services , and (6) uncertainty .
Second, to examine the communication styles of COVID-19 posts from People's Daily, we built on prior studies and coded the two message styles of the style dimension into the subdimensions of (1) narrative and (2) nonnarrative on a sentence basis. To determine whether the narrative style of communication was employed, we examined whether the post had a temporal or spatial sequence and revealed the writer's feelings or thoughts. Last, we built on prior public relations studies and coded the number of interactive features used to facilitate the creation of the interactive dialogic loop. These interactive features included (1) links to external sources , (2) use of hashtags , (3) use of questions to solicit feedback , and (4) use of multimedia (see for exemplifications of the coding items and examples extracted from the collected posts). Regarding RQ2 and RQ3, we recoded the dimensions of content, style, and interactive loop using the dominant category for performing the analysis of variance (ANOVA) tests of content, style, and interactive loop on public engagement. For example, we found 43% of the sentences in post number 128 belonging to action and 29% to disease prevention ; 57% of the sentences employed a narrative style of communication while 43% were nonnarrative; and there were 1 link to an external source, 2 pairs of hashtags, and 1 multimedia feature. We then recorded the content as action based on the dominant content topic, the style as narrative based on the dominant use of narrative sentences, and the interactive loop as use of hashtags based on the dominant use of hashtags. If the counts of sentences or interactive features were the same, the primary coder checked the title, topic sentences, and context of the post to determine the dominant category. To address RQ2 and RQ3, we recorded the number of shares, comments, and likes of the sampled posts to investigate the relationship between People's Daily's communication and its impact on public engagement. Regarding the negative binomial regression (NB2) analysis, the coding results of RQ1 were adopted to investigate the effect of all subdimensions on public engagement. The numbers of shares, comments, and likes in RQ2 and RQ3 were also included for the statistical analyses.

Intercoder Reliability

The coding was conducted by the third author (the primary coder) and a well-trained coder, both of whom possess a postgraduate degree in communication. To ensure intercoder reliability in the coding of dimensions, subdimensions, and public engagement, the coder was repeatedly trained on the coding scheme. Any disagreement between the author and the coder was discussed during the coding process. The measure of intercoder reliability was based on the co-coding of 120 posts from the data pool of 608 posts (19% of the total number of posts sampled) . For all categories, the average agreement was higher than 0.83, and the average Cohen kappa was greater than 0.8, indicating an almost perfect agreement (see for the intercoder checking results of all categories).

Statistical Analyses

To analyze the differences in the frequencies of the use of each subdimension in the communication of COVID-19–related news by the official social media (RQ1), we coded the presence of the subdimensions in each of the 608 posts and then calculated the mean counts for each of the 12 subdimensions.
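Two of the bookkeeping steps just described are easy to make concrete. First, the sample-size figure: a minimal Python sketch (my reconstruction, assuming the most conservative proportion p = 0.5) applies the standard finite-population formula and lands near the reported 620.

import math

N = 3255     # total COVID-19 posts collected
se = 0.018   # standard error reported above
p = 0.5      # assumed proportion (most conservative choice)

n0 = p * (1 - p) / se ** 2        # infinite-population estimate
n = n0 / (1 + (n0 - 1) / N)       # finite-population correction
print(f"n0 = {n0:.1f} -> corrected n = {math.ceil(n)}")   # ~624, close to 620

Second, the intercoder check: percent agreement alongside chance-corrected agreement (Cohen kappa), here via scikit-learn; the labels are invented for illustration.

from sklearn.metrics import cohen_kappa_score

coder_a = ["action", "new_evidence", "reassurance", "action", "prevention", "action"]
coder_b = ["action", "new_evidence", "reassurance", "action", "prevention", "uncertainty"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"agreement = {agreement:.2f}, Cohen kappa = {kappa:.2f}")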
We then employed one-way ANOVA and the post-hoc Tukey test in SPSS (IBM Corp) to reveal the differences in the use of content, style, and interactive features in COVID-19 social media communication (RQ1) and the differences in the number of shares, comments, and likes in relation to the subdimensions of the content, style, and interactive dimensions (RQ2). Two-way ANOVA was performed to examine the interaction effect of content and style on public engagement in the form of shares, comments, and likes (RQ3). To test the assumption of normality in ANOVA, we performed the Kolmogorov-Smirnov and Shapiro-Wilk tests on the normality of the variables. Most variables were not normally distributed, but we decided to continue using ANOVA, as it has proven to be robust and valid in testing differences between independent variables even if the normality assumption is violated . In addition, we conducted the test of homogeneity of variances when performing ANOVA. When the assumption of homogeneity of variances was violated, the ANOVA results were replaced with those of the Welch ANOVA. As for RQ2, which involved examining the relationship between the 12 subdimensions (independent variables) and the public's responses in terms of the count numbers of shares, comments, and likes (dependent variables), we first employed Poisson regression, a count regression model, in SPSS . However, real-world data sets commonly violate the equidispersion assumption of Poisson regression, showing overdispersion of the outcome variables . As expected, such a violation was detected in our data set, and thus, we followed the common practice of replacing Poisson regression with the NB2 to improve the goodness of fit, especially the Akaike information criterion and Bayesian information criterion. NB2 is effective in fitting various types of data arising in technical and communication research , and NB2 is a more general model that relaxes the strong assumption that the underlying rate of the outcome is the same for each included participant . In response to RQ1 regarding the differences in the frequencies of each subdimension's use in the communication of COVID-19–related news by the official social media, we found that new evidence in the content dimension was the most used subdimension (mean 0.749, SEM 0.05) and was used significantly more than any other subdimension ( a). Action was the second most prevalent subdimension (mean 1.210, SEM 0.08), and reassurance was the third most frequently used one (mean 0.506, SEM 0.05). Disease prevention (mean 0.276, SEM 0.04) and health care services (mean 0.315, SEM 0.04) ranked fourth and fifth, respectively, and uncertainty was the least used subdimension (mean 0.077, SEM 0.02; a). In relation to the style dimension, the nonnarrative style (mean 2.259, SEM 0.09) was used approximately twice as much as the narrative one (mean 1.110, SEM 0.07) ( b). Concerning the interactive dimension, the use of multimedia (mean 1.586, SEM 0.08) and the use of hashtags (mean 1.411, SEM 0.02) were the most prevalent subdimensions, with the use of multimedia being slightly but significantly higher than the use of hashtags ( c). By contrast, both links to external sources (mean 0.402, SEM 0.02) and the use of questions to solicit feedback (mean 0.097, SEM 0.02) were used infrequently, with the use of questions to solicit feedback being used significantly less in comparison to all other subdimensions ( c).
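The comparisons reported in this and the following paragraphs rest on the testing pipeline described above. A minimal sketch with toy per-group share counts (all numbers invented): Shapiro-Wilk per group, then a one-way ANOVA; when the homogeneity-of-variances assumption fails, a Welch ANOVA (available, for example, as pingouin.welch_anova) would replace scipy's f_oneway.

from scipy import stats

shares_by_frame = {
    "new_evidence": [1200, 1400, 1100, 1600, 1350],
    "reassurance":  [4100, 3900, 4400, 3700, 4200],
    "prevention":   [4500, 4300, 4800, 4200, 4600],
}

for frame, xs in shares_by_frame.items():
    _, p_norm = stats.shapiro(xs)              # per-group normality check
    print(f"{frame}: Shapiro-Wilk p = {p_norm:.3f}")

f_stat, p_val = stats.f_oneway(*shares_by_frame.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.5f}")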
Regarding the levels of impact on public engagement of the individual subdimensions in COVID-19 social media posts (RQ2), our results showed that posts of new evidence generated the lowest number of shares of all six subdimensions (mean 1327.81, SEM 165.90). Posts of new evidence had significantly fewer shares than posts of reassurance (mean 4065.32, SEM 689.88), disease prevention (mean 4455.71, SEM 604.95), and uncertainty (mean 5033.35, SEM 2242.13; a). However, the six subdimensions of content did not show differences in their impact on comments and likes ( b, c). For the style dimension, narrative posts generated significantly more shares than nonnarrative posts ( narrative : mean 3544.03, SEM 379.80 vs nonnarrative : mean 2237.06, SEM 204.18; d). Similar to content, the message style did not exert any impact on the number of comments and likes ( e, f). As for the interactive dimension, no significant differences were observed among the four subdimensions in terms of shares, comments, and likes ( g-i). Surprisingly, although they were the most frequently used subdimensions ( a, b), new evidence and the nonnarrative style had the least impact on the number of shares in their own dimensions ( a, d). To determine which of the twelve subdimensions were effective positive or negative predictors of public engagement in COVID-19 communication (RQ2), we fitted the share, comment, and like count data to an NB2 model. Our results in and indicated that, although new evidence was the most used content subdimension ( a), its posts received the lowest number of shares ( a), suggesting a negative correlation between new evidence and the number of shares. In our NB2 analysis, we confirmed that new evidence was a strong negative predictor of the number of shares (β=–.253, SE 0.068, P <.001; ). Similarly, nonnarrative was the most frequently used style ( b) but generated a lower number of shares than the narrative one ( d). Again, we confirmed that the nonnarrative style was indeed a strong negative predictor of the number of shares (β=–.223, SE 0.068, P <.001; ). By contrast, the narrative style was found to be a strong positive predictor of the number of shares (β=.283, SE 0.064, P <.001; ). For the interactive dimension, links to external sources was a strong positive predictor of the number of shares (β=.319, SE 0.087, P <.001), whereas the use of multimedia was a weak positive predictor of the number of shares (β=.057, SE 0.023, P =.02; ). Finally, we noted that the use of questions to solicit feedback was a strong negative predictor of the number of comments (β=–.177, SE 0.087, P =.04) and likes (β=–.290, SE 0.111, P =.01; ). Subdimensions are likely to function synergistically in affecting public engagement. To examine whether there was an interaction among the dimensions on public engagement (RQ3), we performed a two-way ANOVA on the mean numbers of shares, comments, and likes across the dimensions. Our results confirmed a significant interaction effect between content and style on the number of likes . However, there was no interaction effect between content and the interactive dimension, nor between style and the interactive dimension . To investigate the interactions between specific subdimensions, we performed simple main effect analyses on the numbers of shares, comments, and likes.
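The NB2 and interaction analyses just summarized can be sketched end to end on synthetic data (all variable names, coefficients, and counts below are invented; the built-in signs merely mimic the reported direction of new evidence and narrative style). The simple main effect results reported next come from the real data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "new_evidence": rng.integers(0, 2, n),   # 1 if the new-evidence frame dominates
    "narrative":    rng.integers(0, 2, n),   # 1 if the narrative style dominates
})

# Overdispersed share counts drawn from an NB2 model with a log-link mean
mu = np.exp(7.5 - 0.25 * df["new_evidence"] + 0.28 * df["narrative"])
df["shares"] = rng.negative_binomial(n=2, p=2 / (2 + mu))

# loglike_method="nb2" is the statsmodels default for NegativeBinomial
nb2 = smf.negativebinomial("shares ~ new_evidence + narrative", data=df).fit(disp=0)
print(nb2.params)   # recovers a negative new_evidence and a positive narrative sign

# Two-way ANOVA with an interaction term, as used for RQ3 (toy likes data)
df["frame"] = np.where(df["new_evidence"] == 1, "new_evidence", "prevention")
df["style"] = np.where(df["narrative"] == 1, "narrative", "nonnarrative")
df["likes"] = rng.normal(40000, 8000, n) + 30000 * (
    (df["frame"] == "prevention") & (df["style"] == "narrative")
)
ols = smf.ols("likes ~ C(frame) * C(style)", data=df).fit()
print(sm.stats.anova_lm(ols, typ=2))   # interaction row: C(frame):C(style)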
Between content and style, the different content subdimensions showed no significant differences in the number of shares between the narrative and nonnarrative styles ( a). However, for the number of comments, the narrative style scored significantly higher than the nonnarrative one for disease prevention posts ( narrative : mean 5978.83, SEM 972.37 vs nonnarrative : mean 2446.33, SEM 753.19; F1,597=8.249, P<.001, ηp²=0.014; b). Likewise, a higher number of likes was observed for the narrative style as opposed to the nonnarrative one for disease prevention posts ( narrative : mean 104,881.00, SEM 20,416.43 vs nonnarrative : mean 35,092.87, SEM 15,814.50; F1,597=7.303, P=.01, ηp²=0.012; c). These results indicate that pairing disease prevention content with a narrative style generated a higher number of comments and likes. Between the content and the interactive dimensions, no significant differences were observed in the number of shares, comments, or likes for the narrative and nonnarrative styles ( d-f). Between the style and the interactive dimensions, the narrative style received significantly more shares than the nonnarrative one for posts using hashtags ( narrative : mean 3685.92, SEM 359.02 vs nonnarrative : mean 2145.31, SEM 245.16; F1,601=12.558, P<.001, ηp²=0.021; g), highlighting that the pairing of the narrative style with the use of hashtags generated a higher number of shares. For the number of comments and likes, no significant differences were found ( h, i).

Conclusions

In summary, this study presents a novel, comprehensive framework of the factors that engage the public in COVID-19 communication by the government on social media, through empirically testing the measures of health content frames, style of messages, and interactive features. By drawing on this knowledge and harnessing the power of social media, governments and health organizations can determine which aspects to emphasize in an attempt to reduce the spread of the new disease.

Principal Results

Our results showed that a range of content frames, message styles, and interactive features was employed by the government to communicate about COVID-19 with the public on social media with a view to handling the health crisis. Yet different levels of engagement were revealed. In particular, new evidence and the nonnarrative style had the least impact on the number of shares ( a, d), although they were the most frequently used subdimensions ( a, b). Additionally, our NB2 results confirmed that new evidence and the nonnarrative style were strong negative predictors of the number of shares . On the other hand, the two-way ANOVA indicated that the pairing of disease prevention posts with a narrative style generated a higher number of comments and likes ( b, c), while the NB2 results confirmed that the narrative style was a strong positive predictor of the number of shares . As found in an earlier study , posts on preventive and safety measures related to COVID-19 were most frequently employed by public health organizations in Singapore, the United States, and England, and our results on disease prevention posts were consistent with this study. In line with previous studies, our results also revealed the strong effect of the narrative style on public engagement . A narrative style of communication fosters the public's identification and emotional involvement through sharing in the character's experience of a story event .
Through this, health narratives can possibly raise the public's awareness of health risks and encourage them to take action to curb the spread of the disease. A previous study has demonstrated an interaction effect between content and style, and therefore, we expected an interaction between these two dimensions. Indeed, our data showed a significant interaction between content and style on the number of likes. With respect to the interaction between the subdimensions, our results showed that more shares were generated for posts related to disease prevention, reassurance, and uncertainty ( a) delivered in a narrative style ( d). Links to external sources and use of multimedia were also positive predictors of the number of shares. A “share” indicates a high engagement level because it involves a cognitive action of disseminating the post to others, which can potentially reach a large audience. Disease prevention is fundamental in a new epidemic, and uncertainty needs to be addressed because, by indicating what is unknown, more transparency of information is provided, thereby helping to build trust. The public tends to rely on social media during crises because the sites offer emotional support, indicating that the communication of uncertainty and reassurance might have served the purposes of offering emotional support and allaying anxiety. Our novel findings regarding the interaction between the subdimensions provide important insights for enhancing public engagement in epidemic communication on social media. This study contributes to the understanding of what drives the public to engage with COVID-19 communication by the government and adds to the body of knowledge on public engagement with epidemic communication on social media. First, our integrated, comprehensive framework of public engagement with government health communication regarding COVID-19 in China was empirically tested. People’s Daily currently has 117 million followers on Sina Weibo, which is widely used for social media communication in China, with 462 million active online users in 2018. Existing followers of People’s Daily can influence other Weibo users through sharing the posts, fostering a sense of community with them, and potentially helping to contain the spread of COVID-19. Higher numbers of both “comments” and “likes” were noted for disease prevention posts delivered in a narrative style ( b, c). A “comment” is indicative of a high engagement level because it requires the user to read a post and respond to it, but the interpretation of a “like” varies with the context. For example, one study suggested that a “like” is indicative of a lower engagement level, although within the context of epidemic communication, a “like” might be perceived as a user’s approval of the post’s importance. In view of this, both “likes” and “comments” are regarded as good indicators of engagement with health risk communication. Second, People’s Daily’s approach of predominantly employing new evidence posts disseminated in a nonnarrative style in COVID-19 communication did not prove to be the ideal strategy to engage the public. We have gained insights into the subdimensions that can effectively enhance public engagement with epidemic communication; for instance, disease prevention posts delivered in a narrative style are viewed favorably.
It is imperative for health organizations, governments, and researchers to use the public's preferred subdimensions to increase the number of shares, comments, and likes with a view to effectively disseminating new epidemic information. One limitation of this study pertains to the sampling period. Because we captured posts from only a certain period of time, the results might differ for other phases of an evolving epidemic. Our framework of COVID-19 communication with the public can be further empirically tested to assess the strength of the three dimensions and applied to other cultural contexts. As social media are accessed mainly by younger people, while other demographic groups still rely on traditional mass media, the impact of COVID-19 communication delivered through other channels should also be examined. An investigation into the use of other popular social media platforms in China, such as WeChat, to disseminate COVID-19 information could also be conducted to gain more insights into this topic.
25-Year Storage of Human Choroid Plexus in Methyl Salicylate Preserves Its Antigen Immunoreactivity
dc2ced04-fca9-47f6-b541-e8aee9c74112
10518196
Anatomy[mh]
In many cases, pathological or experimental histological specimens cannot be processed and examined immediately after they are obtained and are then subjected to long-term storage in a fixative, most often formalin. As a rule, prolonged storage in formalin does not impair the histomorphology of brain specimens, but the quality of some histological stainings deteriorates. Moreover, prolonged fixation in formalin can result in the irreversible loss of immunoreactivity for many antigens. However, tissue specimens can be preserved long-term not only in a fixative, but also in methyl salicylate. Methyl salicylate, the main component of wintergreen oil, is a commercially available, strong-smelling organic ester that is used in histological and histopathological laboratories as an excellent substitute for xylene at the clearing stage of tissue processing. As an excellent clearing agent, it is especially recommended for bone and muscle tissues, which are adversely affected by conventional clearing with xylene. Previously, it has been shown that prolonged storage (up to three years) of rat brain specimens in methyl salicylate had no detectable effect on the immunoreactivity of common markers of normal and cancer brain cells, such as neuronal nuclear protein, neuron-specific enolase, glial fibrillary acidic protein (GFAP), vimentin, nestin, and doublecortin. These data encouraged us to conduct the present study, in which we assess and confirm the suitability of human choroid plexus samples stored in methyl salicylate for twenty-five years for routine histological staining and immunohistochemical detection of antigens. The results showed excellent histological staining and good immunohistochemical visualization of the antigens studied (including cluster of differentiation 68 (CD68), mast cell tryptase, transmembrane protein 119 (TMEM119), and synaptophysin) by using both light and fluorescence microscopy. Fluorescence-based imaging techniques are often hampered by background autofluorescence, which can arise from endogenous sample components such as the age pigment lipofuscin, elastin and collagen proteins, serotonin, catecholamines, and some others. Formaldehyde fixation is also known to induce an intense autofluorescence in samples of paraffin-embedded animal and human tissues, making fluorescence microscopic observation and immunofluorescence analysis more difficult. Considering the above, we also aimed to investigate whether prolonged storage in methyl salicylate causes an increase in autofluorescence in the choroid plexus samples. The study was conducted in accordance with the Declaration of Helsinki of 1975 and had been approved by the local Ethics Committee. Fragments of choroid plexus (some of them containing adjacent pineal gland) from the lateral and fourth ventricles were removed at routine autopsy from 14 persons (12 males and 2 females aged 20-59) in 1995-1996. The specimens were fixed in alcoholic formalin for 1-2 days and dehydrated in an ethanol series. Then, some of the samples were embedded in paraffin routinely, and some were immersed in methyl salicylate (Vekton, St. Petersburg, Russia) in hermetically sealed 20 ml glass containers. The containers were stored in the laboratory at room temperature. In 2021, after washing in three portions of absolute alcohol, the samples were processed in a spin tissue processor (STP 120, Thermo Fisher Scientific, Waltham, MA, USA) and embedded in paraffin by a routine procedure.
Then, samples of choroid plexus in paraffin blocks prepared in 1995-96 (immediately after fixation) and 2021 (after storage in methyl salicylate) were cut into 7 μm-thick sections using a rotary (RM 2125RT, Leica Microsystems, Wetzlar, Germany) or a sliding (Leica SM 2000R, Leica Microsystems, Wetzlar, Germany) microtome and mounted on poly-L-lysine-coated (Polysine™, Menzel-Gläser, Braunschweig, Germany) glass slides. For histological and immunohistochemical staining, the sections were deparaffinized with xylene, rehydrated in a descending ethanol gradient, and rinsed in distilled water. Hematoxylin and eosin staining was performed according to the commonly established procedure. For immunohistochemistry, heat-induced antigen retrieval was performed in modified citrate buffer, pH 6.1 (S1700, Agilent-Dako, Santa Clara, CA, USA) for 25 min at 90°C. Endogenous peroxidase was quenched by incubation in a 3% aqueous solution of hydrogen peroxide for 10 minutes. After washing three times in phosphate-buffered saline, pH 7.4, the sections were pretreated with a blocking solution (Protein Block, Spring Bioscience, Pleasanton, CA, USA) for 10 min at room temperature. Then, the sections were incubated with the primary antibodies against the following antigens: GFAP, vimentin, synaptophysin, CD68, TMEM119, mast cell tryptase, type IV collagen, α-smooth muscle actin, and von Willebrand factor. Information regarding the panel of antibodies tested in this study and their dilutions is presented in . These antigens are specific markers of astrocytes, microglia, macrophages, synapses, mast cells, endothelial cells, the basement membrane of vessels, and smooth muscle cells, applicable to both normal and cancer tissues. To visualize the primary antibodies for light microscopy, appropriate detection systems were applied; for mouse antibodies, MACH 2 Mouse HRP-Polymer (BioCare Medical, Pacheco, CA, USA) was used. The peroxidase label was detected using diaminobenzidine chromogen (DAB+; Agilent-Dako, Santa Clara, CA, USA). After the immunohistochemical reactions, some sections were counterstained with hematoxylin or alcian blue. For all immunohistochemical reactions, the control reactions recommended by the antibody manufacturer were performed. For detection of the primary antibodies by confocal laser microscopy, the following reagents were applied: for rabbit antibodies, Reveal Polyvalent HRP DAB (Spring Bioscience, Pleasanton, CA, USA) and an anti-HRP Fab-fragment of goat immunoglobulin conjugated with Cy3 fluorochrome (1:100, Jackson ImmunoResearch, West Grove, PA, USA) with an emission maximum (λem=569 nm) in red light, or an anti-rabbit Fab-fragment of donkey immunoglobulin conjugated with a Rhodamine Red-X (RRX) fluorochrome (1:50, Jackson ImmunoResearch, West Grove, PA, USA) with an emission maximum (λem=590 nm) in red light; for mouse antibodies, a biotinylated anti-mouse antibody (VectorLabs, USA) and streptavidin conjugated with a Rhodamine Red-X (RRX) fluorochrome (1:150, Jackson ImmunoResearch, West Grove, PA, USA) with an emission maximum (λem=590 nm) in red light. For nuclear counterstaining, SYTOX Green (1:100, Thermo Fisher Scientific, Waltham, MA, USA) was used. The stained sections were coverslipped using a fast-drying, permanent, non-fluorescent mounting medium, Cytoseal™ 60 (Richard-Allan Scientific, Kalamazoo, MI, USA). For examination of the sections in transmitted light, a Leica DM750 microscope and an ICC50 camera (Leica Microsystems, Wetzlar, Germany) were used.
The sections prepared for confocal laser microscopy were examined under an LSM800 confocal laser microscope (Carl Zeiss, Oberkochen, Germany) using Zen-2012 (Carl Zeiss, Oberkochen, Germany) software for image processing. Histology Immunohistochemistry Confocal Laser Microscopy Primary assessment of the choroid plexus tissue was carried out on preparations stained routinely with hematoxylin and eosin. All tissue samples showed good preservation without visual manifestations of autolysis or tissue compression. The cell nuclei were stained blue-violet. The cytoplasm of the epithelial cells of the choroid plexus and the smooth muscle cells in the blood vessel walls showed moderate oxyphilia. Red-brown erythrocytes were visible in the lumen of the vessels. Connective tissue fibers were pink. No differences in tinctorial properties were found between the tissue samples stored in paraffin blocks and those stored in methyl salicylate. All antigens studied preserved their immunoreactivity in the choroid plexus after prolonged storage in methyl salicylate, and the antibodies used labeled their typical targets. The choroid epithelial cells were strongly vimentin-immunopositive and surrounded by β-catenin immunoreactivity. Immunoreactions for CD68, TMEM119, and mast cell tryptase labeled macrophages, microglia, and mast cells, respectively. The antibody against α-smooth muscle actin revealed actin inside vascular smooth muscle cells, whereas type IV collagen and von Willebrand factor were concentrated around smooth muscle cells (basal lamina) and in the vascular endothelium, respectively. GFAP- and synaptophysin-immunoreactivity was not found in the choroid plexus but could easily be revealed in the adjacent pineal gland. It is important to note that all immunoreactive structures stained distinctly in the absence of intense background staining. The immunoreaction for all markers was the same regardless of whether it was carried out on material stored in methyl salicylate or in paraffin blocks. Moreover, the impression was that immunostaining for type IV collagen and vimentin was generally more intense in the choroid plexus after storage in methyl salicylate than in paraffin blocks. Fluorescent immunohistochemistry is also applicable to methyl salicylate-stored material. A representative confocal microscopic image of type IV collagen immunoreactivity is shown in . All markers used were visualized by immunofluorescence as clearly and distinctly as by light microscopic immunohistochemistry. No significant autofluorescence was observed. Similarly, a high-quality immunofluorescent label was seen in the pineal gland adjacent to the choroid plexus. The obtained results show for the first time that samples of human choroid plexus and pineal gland preserve their antigenicity after 25-year storage in methyl salicylate. This material is suitable for a full-fledged immunohistochemical study of various markers using both light and fluorescence microscopy. The antibodies used in the present study clearly label their typical target structures with minimal background. The localization of vimentin, CD68, tryptase, α-smooth muscle actin, and type IV collagen in the human choroid plexus exposed to prolonged storage in methyl salicylate was similar to that described previously in freshly prepared human choroid plexus. To our knowledge, no prior studies have documented the presence of TMEM119 and von Willebrand factor in the choroid plexus of humans or experimental animals, despite an intensive search for TMEM119 in the murine choroid plexus.
Thus, this is the first report on the localization of TMEM119 and von Willebrand factor in the choroid plexus. β-Catenin has been described in the choroid plexus of rats, but not in humans. Our present observation has shown that its distribution in humans is about the same as in rats. We failed to detect GFAP- and synaptophysin-immunopositive structures in the human choroid plexus. The absence of GFAP expression in the choroid plexus is consistent with the results of other researchers. As for synaptophysin, to the best of our knowledge, there are no data on its expression in the choroid plexus. To confirm that the lack of GFAP and synaptophysin is not due to poor antibody quality or an improper immunohistochemical technique, we examined the immunoreactivity of GFAP and synaptophysin in sections of pineal gland tissue present in some choroid plexus specimens. Numerous, well-discernible GFAP- and synaptophysin-immunopositive profiles could be identified in the human pineal gland using the same technique. Therefore, the obtained results indicate the absence of GFAP and synaptophysin expression in the human choroid plexus. Comparison of material stored in methyl salicylate or in paraffin blocks for twenty-five years revealed only an insignificant difference in the intensity and specificity of immunostaining. Moreover, in some cases, the methyl salicylate-stored choroid plexus appeared to show better visualization of the immunohistochemical markers used than the paraffin-stored material. The results obtained demonstrate that not only chromogenic immunohistochemistry but also fluorescent immunohistochemistry is applicable to choroid plexus samples after long-term storage in methyl salicylate. This material exhibits no signs of increased background autofluorescence, which is the major obstacle to the acquisition of clear images in fluorescent immunohistochemistry. Our observations show that storage of choroid plexus samples in methyl salicylate does not impair the quality of images obtained with a fluorescence microscope in any way. Currently, formalin is widely used in morphological and pathological laboratories as a conventional fixative and a universal preservative for long-term storage of histologic material. However, formaldehyde is classified by the International Agency for Research on Cancer as a definitive human carcinogen (Group 1) and poses a significant threat to human health. This is a significant disadvantage of formalin usage in the laboratory. In comparison to formalin, methyl salicylate is significantly less dangerous for humans: it is harmful if swallowed and may irritate the eyes or skin, but such properties are inherent to most other chemicals routinely used in histological and pathological laboratories. Moreover, methyl salicylate is used in the clinic as a topical analgesic and anti-inflammatory agent. Therefore, the use of methyl salicylate is much safer than the use of formalin. In addition, prolonged storage of biological samples in formalin can lead to irretrievable loss of many antigens, making such material unsuitable for immunohistochemical investigation. In contrast, the use of methyl salicylate, according to the data presented here, preserves antigenicity in at least the choroid plexus and the pineal gland after long-term storage. This claim is supported by the first-time detection of TMEM119-, von Willebrand factor-, and β-catenin-immunoreactive structures in human choroid plexus stored in methyl salicylate for 25 years.
Previously, good preservation of the immunoreactivity of a number of antigens has been demonstrated in rat brain samples after prolonged (up to 3 years) storage in methyl salicylate. Methyl salicylate is therefore preferable to formalin when long-term storage of either human or rat brain specimens is required, as it is safer and preserves the immunoreactivity of antigens in brain tissue. The present study demonstrates for the first time that storage of human choroid plexus and pineal gland in methyl salicylate for 25 years has no detectable influence on histomorphology or the quality of standard histological staining. The results obtained show good immunohistochemical visualization of various brain antigens (vimentin, GFAP, type IV collagen, β-catenin, α-smooth muscle actin, von Willebrand factor, CD68, mast cell tryptase, synaptophysin, and TMEM119) by using both light and fluorescence microscopy. Storage in methyl salicylate for 25 years does not intensify the background autofluorescence in human choroid plexus and pineal samples and thus does not impair the quality of immunofluorescence. Therefore, methyl salicylate can be recommended for preserving the antigenicity and the suitability of stored material for histological and immunohistochemical study when long-term storage of brain tissue samples is needed. The research was conducted in the absence of any financial, personal, academically competitive, or intellectual relationships that could be construed as a potential conflict of interest. The study was conducted within the state assignment of the Institute of Experimental Medicine.
How we achieve satisfaction in training – A German-wide survey on preferred training conditions among trainers and trainees for board certification in gastroenterology
edfe20f7-85b6-4c6c-a36e-98c440d0a26a
10914566
Internal Medicine[mh]
High-quality training of future gastroenterologists is important for improving patient care and reducing physician turnover. The contents and regulations of further training in gastroenterology are laid down in Germany in both state and federal law. To achieve uniform training legislation across the medical associations of the federal states, the German Medical Congress adopts a training regulation, which is recommended to the federal state medical associations for adoption. Nevertheless, there is considerable variability in implementation. In Germany, satisfaction with conditions in residency training remains low, with up to 60% of trainees not being satisfied with their training conditions. Around one-third of residents in Germany are considering changing their field of work, and more than half want to reduce working hours to part-time. Specific data on physician training satisfaction in gastroenterology in Germany are lacking. Conversely, around 80% of gastroenterology residents in Canada report being satisfied with their training programs. Furthermore, training conditions are an important reason for physician emigration, in addition to increased salary and improved work–life balance. These data underline the importance of training conditions for physician retention, which is essential given the projected shortage of physicians in Germany. Staff shortage, high workload, and suboptimal work–life balance are commonly cited reasons for low trainee satisfaction. Although these issues are being increasingly addressed, solutions, such as increased training of physicians or shifting the workload from physicians to other healthcare workers (e.g., administrative staff, nurse practitioners, or physician assistants), remain a policy challenge. Nevertheless, individual institutions still have at least some degree of freedom regarding the organization of their physician training program. Unfortunately, data on expectations and wishes regarding training for gastroenterology board certification in Germany are lacking. In the US, training conditions are seen more favorably by program directors than by trainees in gastroenterology programs, pointing toward a disconnect in perception and expectations between trainers and trainees. This study assesses the views of trainers and trainees in gastroenterology, as well as of medical students. We surveyed members of the largest German gastroenterology society, the German Society for Gastroenterology, Digestive, and Metabolic Disorders (Deutsche Gesellschaft für Gastroenterologie, Verdauungs- und Stoffwechselerkrankungen, DGVS), and German medical students. The survey aims to identify possibilities to improve training conditions. Training is not formally organized in programs in Germany; still, physicians in training rotate through different wards and specialties. One focus of this survey is to determine the best way to organize these rotations. We also hypothesized that there would be significant differences by position in the medical hierarchy, age, parental status, gender, and place of work. An anonymous questionnaire (original survey supplemental 1, English translation of questions supplemental 2) using SoSci Survey software (Version 3.1.06) was circulated among trainers and trainees in gastroenterology and medical students in Germany; 6396 members of the DGVS were asked to participate via email. Invitations were additionally circulated through the student council mailing lists of all German medical schools.
The survey was accessible from April 6th through May 7th, 2022, in the German language. The survey consisted of both single- and multiple-choice questions, and Likert-scale and fill-in responses were utilized. Response to all questions was voluntary, and every question was skippable. Query logic branched depending on the responses regarding current position and part-time status, so the survey included 21 to 25 items. The spatial alignment of the Likert scales was randomly alternated to exclude the possibility of left bias. The authors of this paper jointly created the survey, which was subsequently pretested in March 2022 among members of the Young Gastroenterology Task Force (Arbeitsgruppe Junge Gastroenterologie) of the DGVS. Responses and data censoring Ethics approval Statistical analysis and representative study sample size calculation There were 1136 participants, 139 of whom did not answer all questions. Incomplete surveys were excluded from further analysis. Additionally, we excluded the responses from physicians currently looking for jobs (n = 3) because of small case numbers, as well as respondents who decided not to disclose their training status. Another 18 responses without answers to single questions were censored. Ultimately, 958 responses were used for the final analysis. Given that 158 students answered our survey, the response rate for the DGVS was approximately 8%. The response rate for students was not calculable but was most probably considerably lower. Ethics approval was sought from and granted by the ethics committee of the Martin-Luther-Universität Halle-Wittenberg (2022-051) before circulation of the survey. The study protocol adhered to all relevant data security and ethics guidelines. Data cleaning, aggregation, descriptive analysis, and visualizations were realized in Python (version 3.7). The full raw data and annotated code are available for reproduction in a Google Colab document, which can be accessed via a GitHub repository: https://github.com/GeneralGrube/JUGA_survey . A graphical overview of the responses to all questions can be found in supplemental 3. The sample size needed for a representative sample was calculated following Kotrlik et al. (a worked reconstruction of this calculation is sketched in the code below). Given the 6396 members of the DGVS with active email addresses, a margin of error of 0.05, and a confidence interval of 95%, a sample of n = 364 was calculated. The sample size is therefore sufficient for DGVS members. Students currently enrolled at a German medical school were also included in this survey, but the sample size in this group is insufficient. Before analysis, the following subgroups were defined: workplace (university hospital, maximum provider, primary provider, and outpatient centers), age (≥ 42 years, <42 years), sex, professional position (physician in an outpatient center, department head, senior physician, board-certified physician, resident physician, or student), employment status (full-time or part-time), and parental status. Board-certified physicians (German: Facharzt) are physicians who have passed the board exam. Generally, physicians in Germany are able to take the exam after a minimum of 5–6 years of training, depending on the field of board certification and additional prerequisites, which differ between the federal states of Germany.
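As a plausibility check (not taken from the paper's analysis code; variable names are our own), the reported n = 364 can be approximately reconstructed in Python with Cochran's formula and the finite population correction used by Kotrlik and colleagues:

```python
from math import ceil

# Cochran's formula for a proportion with maximum variability (p = 0.5),
# followed by a finite population correction.
z = 1.96        # z-score for a 95% confidence level
p = 0.5         # most conservative assumed proportion
e = 0.05        # margin of error
N = 6396        # DGVS members with active email addresses

n0 = z**2 * p * (1 - p) / e**2   # infinite-population sample size, ~384.2
n = n0 / (1 + (n0 - 1) / N)      # finite population correction, ~362.5
print(ceil(n0), ceil(n))         # 385 363
```

The exact result depends on rounding conventions (and on whether a t value or a z value is used), which plausibly accounts for the small difference between 363 here and the published 364.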
‘Senior physicians’ (German: Oberarzt) are physicians in an inpatient setting with a leadership role; board certification is a criterion that must be fulfilled in most cases for promotion to senior physician, along with heterogeneous informal criteria depending on the setting. Department heads, senior physicians, board-certified physicians, and outpatient-care physicians were defined as trainers, and residents and students as trainees. Study cohort Most respondents favor fixed rotation schedules based on the length of training Most respondents prefer in-house training during work hours Broad consensus on responsibility for patient care and preferred structuring of working days Yearly training evaluation as a place for feedback on performance and training opportunities, less for discussing additional commitment Reduced work hours should not lead to disadvantages in physician training in the view of most respondents Significant disagreement about whether own research should be considered on the roster There is substantial disagreement on whether research should be fully reflected in the roster. While 87% of trainees agree, only a minority of 47% of trainers do. University staff in particular (77%) would like to see research included in the roster. Nearly half of the men (49%) and more than a third of the colleagues in outpatient clinics (37%) think that research should not be considered in the roster. Of the 958 respondents included in the final analysis, 465 (49%) were older than 42. The age structure in the subgroups defined by position is shown in . Among physicians in the hospital, the median age of heads of departments was 54.0 +/– 15.91 years; of senior physicians, 43.0 +/– 13.43 years; of board-certified physicians, 37.0 +/– 14.34 years; and of resident physicians, 30.0 +/– 4.35 years. Among physicians working in outpatient centers, the median age was 54.0 +/– 12.5 years, and among students, 24.0 +/– 5.54 years. Of note, resident physician respondents were equally distributed among the years of training , while medical student respondents were in the later parts of their studies. There was a dominance of male respondents (n = 579; 60%), correlating with the predominance of male members of the DGVS. Less than half of the respondents (42.6%) care for children. Further characteristics, including place of work and position of respondents, are presented in . Approximately two-thirds responded that rotations to other specialties, emergency departments, intensive-care units, outpatient departments, and functional diagnostics should follow a fixed curriculum. Both trainers and trainees agreed to a similar degree. There was a shift in the responses by seniority: students and resident physicians preferred a fixed curriculum to a lesser extent than board-certified physicians. Similarly, 51% of students preferred concurrent continuous sonographic or endoscopic training from the start of work, whereas 86% of residents preferred a fixed rotation in sonography and endoscopy . Physicians working at primary-care hospitals preferred fixed rotations to a lesser extent (64%) than those at university hospitals (88%); likewise, full-time workers (68%) preferred them to a lesser extent than part-time workers (79%). In addition, 63% of respondents felt that the rotation order should be based on the length of training time rather than on performance and commitment.
Overall, trainees agreed to a higher degree (68%) than trainers (61%), but surprisingly, there was considerable heterogeneity in this group, with 42% of students holding the opinion that performance and commitment should decide the rotation order. This was more than double the share among resident physicians (20%). Also, a gradual shift from department heads to resident physicians was seen here, with department heads valuing performance and commitment. A large majority (80%) of respondents value education at several institutions more highly than education at a single institution. Again, we observed strong concordance among trainers (78%) and trainees (83%); only department heads share this view less often (67%) . Interestingly, about two-thirds of the respondents believe that overtime is necessary for good clinical training . Nevertheless, a majority of trainees (58%) disagree with this opinion. Of note, there was considerable dissent between students (67%) and resident physicians (42%) in the responses to this question. Most trainers (76%), on the other hand, hold the opinion that good clinical education is impossible without overtime. Overall, women (60%) agree less than men (70%) with the sentiment that overtime is essential for good clinical training. Most respondents (87%) reported that in-house training opportunities should occur during working hours instead of after work, with trainees (92%) agreeing slightly more than trainers (85%). In particular, respondents < 42 years (92%) and resident physicians (93%) agreed with this view more than average. The lowest level of agreement was found among respondents performing outpatient care (75%). Interestingly, most respondents, especially trainees (76%) and respondents < 42 years (74%), preferred internal over external training (71%) before taking their first steps in sonography or endoscopy. Physicians working in primary-care hospitals (64%) agreed with this statement to a lesser degree than those working at university hospitals (75%). The same was true for part-time workers (63%) compared with full-time workers (70%). In addition, 91% of respondents believe that external training should be paid for through a training budget from their institution. Almost every respondent younger than 42 (96%) and almost every resident physician (97%) supported this idea. Although physicians in outpatient care shared this view somewhat less, acceptance remained high (84%). There is excellent agreement (89%) that at an early stage of training in endoscopy or ultrasound, the trainee/resident should be under direct supervision (in the same room) during examinations and should not work independently with on-call supervision. There was strong consensus about this statement in all subgroups investigated. There is substantial disagreement about whether advanced endoscopy techniques such as endoscopic retrograde cholangiopancreatography (ERCP), percutaneous transhepatic biliary drainage (PTBD), and endoscopic ultrasound should remain integral parts of board-certified training for gastroenterology . Trainees (69%) and especially students (72%) would prefer advanced endoscopy techniques to be included as part of the regular training to become gastroenterology specialists. Trainers (51%) marginally agreed, but board-certified physicians without a management position (45%), physicians in outpatient care (44%), and part-time employees (43%) favored the introduction of a new additional sub-specialty for “interventional endoscopy”.
The majority of respondents (71%) believe that the responsibility for patient care lies with the senior physician, an opinion shared by trainers (70%) and trainees (74%) to a similar degree. Resident doctors (76%) and those employed at university hospitals (74%) share this opinion in particular. Two-thirds of the respondents would like the workday of physicians in training to be structured by the trainees themselves. Trainees (72%) and part-time employees (75%) prefer a self-structured workday even more. In contrast, self-structuring of the working day is less favored in outpatient clinics (60%). There was a consensus that during the yearly training evaluation, the trainer should give feedback on performance (94%), discuss rotations (91%), inform about further training opportunities (86%), and show prospects at the training site (83%). Consensus about the first two points was most striking among resident physicians (feedback on performance: 98%, rotations: 97%). Research opportunities were a topic of interest mainly for university employees (73%). Interestingly, 63% of all residents were interested in bringing up research in the yearly training evaluation, with residents in university hospitals wishing to discuss research to an even greater extent (72%). Besides regular patient care, discussing additional commitment in the clinic was most often desired at university hospitals (61%). Of note, most respondents (66%) did not see the training interview as an opportunity to discuss development opportunities for soft skills. Around three-fourths of the respondents believe that part-time work must not lead to disadvantages in continuing education . There is substantially stronger agreement among trainees (88%) than trainers (69%), with especially strong agreement among women (87%), respondents < 42 years (85%), students (92%), and part-time employees (87%). Physicians in outpatient care (60%) and full-time employees (68%) showed lower agreement rates. A substantial proportion (59%) of respondents believe that part-time work in continuing education should be made possible in all areas of activity and supported by colleagues. Rates of agreement are similar between trainers (58%) and trainees (61%). Substantial differences are particularly evident between full-time and part-time employees (54% versus 75%). This study is the first survey of trainers and trainees in gastroenterology in Germany concerning desired training conditions. Overall, we found strong concordance in most responses between trainers and trainees and across the predefined subgroups: medical hierarchy level, place of work, gender, age, full-time/part-time work, and parental status . Consistently, earlier quantitative research showed no generational differences regarding self-reported work–life balance, work hours, and attitudes toward patient care among internal medicine physicians of different generations, despite significant perceived differences . Above all, trainers and trainees agree that internal training should take place during working hours and that external training should be supported through a training budget. There is substantial agreement that a supervisor's presence should initially support resident physicians' training in a new technique, such as endoscopy or sonography. Although these points may seem unsurprising at first, they might act as a foundation for jointly improving training conditions.
Remarkably, most respondents believe that training for board certification in gastroenterology is better if performed at multiple institutions . Department heads share this view less, perhaps because they have high confidence in the training provided at the institutions they lead. An impressive 86% of respondents in private practice see training at multiple institutions as superior. Training exclusively in the outpatient setting is not possible in Germany; therefore, respondents in private practice did change their employment at least once and, hence, are the group with the most direct experience. To our knowledge, no reliable data on resident mobility between institutions in Germany are available. However, the possibility of changing institutions while in training seems relatively underutilized. Perhaps increasing mobility in training is a way to improve training quality in the future. At the same time, the acceptance of mobility for career purposes is decreasing in different disciplines, a trend that might also hold true for health care. Considerable differences in responses to the question of whether good clinical training is possible without overtime were documented in our survey . Trainers considered overtime essential for good continuing education, a view that trainees did not share. It is unclear whether the difference here stems from a generational divide, as respondents under the age of 42 agreed strongly. Alternatively, it might be carried by the opinion firmly held by students that clinical training is hindered rather than supported by working overtime. This view might be due to a lack of understanding of clinical training realities and might change when current students enter residency. However, whether this belief will change is unclear. Additionally, how this perspective would affect day-to-day work on the wards is even less clear. Of note, an older evaluation from 2009 revealed that physicians in Germany work roughly 4 million hours of overtime per year, with around 25% of overtime uncompensated . One can infer that senior physicians who believe that overtime is essential for good clinical education presumably misinterpret a lack of willingness to work overtime as a lack of enthusiasm for high-quality training. This misunderstanding can further fuel a potential conflict. Maybe a wish for higher compatibility of private life and work is reflected in the responses of the next generation, as seen in a Swiss study of generation Z in 2022 . Joint efforts by trainees and trainers have to be made to find a consensus on how clinical training can be structured to ensure high quality with a limited number of extra work hours. The most striking heterogeneity in responses concerned whether research time should be fully considered work time . Respondents under 42 years, trainees, and especially resident physicians want research to be reflected in the duty roster, while this idea does not find a majority among doctors older than 42 years or among trainers. From our point of view, the response from trainees speaks against a lack of enthusiasm for research: research time increases stress and strain in the clinical setting and, therefore, should be considered work time in their understanding. The reality of research on weekends and after the end of a shift is probably one of the main reasons for the lamentable lack of young researchers . There was also a considerable difference between female and male respondents.
While 71% of women stated that research should be reflected in the roster, only 49% of men did. Of note, this effect might be bolstered by the women in our survey being younger than the men, but as it holds true across all age groups, a genuine gender-specific preference is to be assumed. Clinician-scientists believe that sacrifices must be made regarding family to be successful in a research career . In dual-physician couples, it was shown that mothers, unlike fathers, reduce work hours . Consistently, female clinician-scientists reduce their clinical work hours more often than their male counterparts . Reasons for this gender difference are numerous: health issues during pregnancy and after delivery, maternity protection, breastfeeding, socialization, lack of support from partners and social networks, the near absence of daycare places for children below 6 months of age, limited opening hours of daycare facilities and schools, a lack of kindergarten teachers, a lack of support and mentoring at the workplace, and many more. These facts might explain why women are more dependent than men on research being reflected in scheduled work time rather than remaining a private matter. As the economic pressure on departments remains high, we think there is a strong need for a joint effort by doctors, professional organizations, hospitals, and political actors to guarantee that research is adequately reflected in the duty roster. Expanding clinician-scientist programs could be one of several solutions . However, as yet, implementing research time as part of board certification is either not allowed at all or only partially accepted by regulatory authorities in Germany. If supporting clinician-scientists' career paths is a societal priority, joint efforts by trainers and regulatory authorities are needed to remove obstacles from this path. Regarding the integration of part-time work into clinical practice, our survey revealed an ongoing conflict . Most respondents stated that part-time work must not lead to disadvantages in training, with trainees agreeing significantly more often than trainers do. Only a slim majority of respondents overall say that part-time work should be made possible even if it burdens co-workers. Strikingly, the dissent between trainers and trainees nearly completely vanishes here. As colleagues working part-time probably burden trainees in day-to-day work more often, this response is understandable but bears some structural inconsistency. Naturally, more respondents working part-time stated that it should be possible to work part-time even if it puts a strain on the working conditions of full-time colleagues. Notably, more women than men in our survey were working part-time, and significantly more women than men think that part-time employment must not lead to training disadvantages. Instead of framing this finding as a conflict between full- and part-time working physicians, we interpret it as a call to reimagine and reinvent our organizational structures to minimize or even eliminate the negative impact of integrating part-time work into everyday practice. Interestingly, there is considerable heterogeneity in the trainee group in our survey, as the responses to some questions strongly differ between resident physicians and students. For example, regarding ultrasound and endoscopy, most residents prefer rotations, while most students prefer continuous parallel training on their own patients .
During their studies, students strongly demand the teaching of hands-on skills and might hope to learn sonography and endoscopy as early as possible through parallel training. Residents may have experienced that learning sonography and endoscopy in parallel, in addition to shifts on the ward, succeeds only to a limited extent. Accordingly, discussing rotations and future training perspectives should be an integral part of the yearly evaluation. The most remarkable differences in workplace expectations were found between university hospitals and outpatient clinics or primary providers. As expected, research has a much higher priority in university clinics. Physicians at university hospitals desire features of a reasonable work–life balance, such as educational training courses or part-time work, more than physicians in outpatient clinics do. It is unclear whether outpatient clinics provide these desired features more than university hospitals do. Possibly, physicians in outpatient clinics simply do not favor training courses during work time as much as physicians at university hospitals because their salaries depend much more on the number of patients they treat. However, the fact is that many physicians are switching from university hospitals to outpatient clinics to work part-time after becoming parents. Offering ongoing medical education during work hours could be a chance for university hospitals to respond to the loss of physicians and to support a better work–life balance in the inpatient setting. While many items received high levels of consensus across subgroups, we observed issues on which gastroenterologists, regardless of subgroup affiliation, are highly divided: for example, whether advanced endoscopy techniques such as ERCP and PTBD should remain a part of training for board certification in Germany . Strikingly, in all predefined subgroups, we equally observed respondents intensely in favor of or strongly opposed to the idea of creating a new additional designation, “interventional endoscopy”. As medicine and scientific progress lead to more and more subspecialization, answering this question is closely linked to how endoscopy and gastroenterology patient care should be organized in the future . Our survey reveals that a consensus is still missing and that the divide runs through all ages and medical hierarchy levels. Several limitations should be considered when interpreting the findings of our survey. As we approached current and future German gastroenterologists through the DGVS, our study only represents members of the largest German professional society for gastroenterology. As resident physicians are less represented in the DGVS, this might result in a selection bias, especially regarding this subgroup of physicians. Also, our cohort of medical students is not representative of medical students in Germany, as there was no means of approaching the whole medical student body other than emailing all student councils. Hence, all results in this group should be interpreted cautiously and seen as exploratory. The response rate to our survey was relatively low, albeit still in the expected range for an email survey in a large cohort . Respondents are likely more interested in training conditions than nonrespondents. In our survey, we classified department heads, senior physicians, board-certified physicians, and outpatient-care physicians as trainers, and residents and students as trainees.
This classification does not fully capture the fluidity of the trainer and trainee roles in the German medical system. For example, residents provide training to students, and board-certified physicians are sometimes in training for additional (sub)specializations. The distinction by seniority is a pragmatic solution, but the considerable heterogeneity, especially between resident physician and student responses, underscores that neither trainees nor trainers should be understood as monolithic blocs. Censoring data points always carries a risk of bias. As 39 responses were censored in our study, biases introduced by our approach cannot be ruled out but seem unlikely, as less than 4% of all completed responses were excluded. Due to privacy concerns, we did not collect any data enabling the correlation of trainees and trainers at the same institution. We therefore cannot conclude whether consensus on training conditions is weaker or stronger at individual institutions compared with the national picture. We believe that interinstitutional heterogeneity exists and that a one-size-fits-all approach is not the answer to improving the quality of medical training. Solutions should always be found through direct communication and assessment of the specific situation. In conclusion, there is considerable consensus about many aspects of training implementation for board certification in Germany. The authors strongly advise implementing changes to physician training that reflect the preferences held by the clear majority (defined as approval by more than 75% of respondents; , in bold) of both trainers and trainees. Additionally, there are aspects strongly preferred by trainees but not shared by trainers. From an employer's perspective, these implementations might decrease employee turnover through increased training satisfaction and should be thoroughly considered. Despite its limitations, our survey gives a first glimpse into the expectations and beliefs of trainers and trainees for board certification in gastroenterology in Germany. We hope that these data will create a basis on which training conditions can be discussed and improved with the help of all stakeholders.
Impact of a structured training program to enhance skills in phacoemulsification surgery
bdb03892-fb02-4de8-959a-89f80c9b25a9
8837341
Ophthalmology[mh]
After obtaining Institutional Ethics Committee clearance, a retrospective observational study was conducted from March 2018 to October 2019 on the trainees who had undergone a short-term phacoemulsification training course at a tertiary eye care institute in India. In January 2019, there was a change in the short-term phacoemulsification training curriculum with the introduction of a more structured program. The trainees in this period were divided into two groups: Group 1 – before the introduction, and Group 2 – after the introduction of the structured training program. The PCR rate, mean ICO-OSCAR score, and learning curve in phacoemulsification surgeries performed by the trainees in both groups were assessed. The phacoemulsification training period was 5 weeks, during which each trainee performed 20 hands-on cases in total. The details of and differences between both programs are shown in . The new structured training program had modules that introduced online cognitive enhancement with assessment, formative feedback–assisted wet-lab training and hands-on surgeries, surgical video review, and compulsory observation of surgical cases being performed. The Group 1 trainees did not have these modules. The premodule was off-site, and Modules 1 to 3 were on-site. The premodule started 1 week prior to the on-site training and included enhancing cognitive abilities by completing online modules on the basics of cataract, phacoemulsification techniques, and phacoemulsification instrumentation, as well as an online pretraining assessment. Module 1 was completed in Week 1 of on-site training, which comprised wet-lab training and group discussions. Wet-lab training included a 5-day intensive wet-lab course with grading of the cases using the OSSCAR (Ophthalmology Simulated Surgical Competency Assessment Rubric) score and formative feedback–assisted training, wherein repeat wet-lab cases were done if the trainee did not achieve the designated score. The trainee also underwent online theoretical sessions along with group discussions on incisions and wound construction, capsulorrhexis, phacodynamics, and complications of surgery. Observing phacoemulsification cases being performed by an expert and hands-on training in A-scan biometry were compulsory. Module 2 was completed in the second and third weeks and consisted of supervised phacoemulsification step surgery with ICO-OSCAR scoring of each case, operation theater observation, and review of surgical video recordings; the online teaching included sessions on different types of foldable intraocular lenses and complication management in different scenarios. Module 3 was completed in Week 4 and consisted of independent phacoemulsification surgeries conducted under supervision with ICO-OSCAR scoring of each case, surgical video review, online cognitive learning objectives such as postoperative management in different scenarios, feedback, and an exit exam. The sample size was calculated to compare the PCR rates of phacoemulsification surgeries performed by the trainees in the two types of training programs during the study period. We assumed the PCR rates to be 6.40% in Group 1 and 2.64% in Group 2, respectively. The sample size was calculated for a 95% confidence interval, 80% power, and a 1:1 ratio of the exposed to the unexposed group. Thus, the sample size obtained was 509 cases in each group (a hedged reconstruction of this calculation is sketched below).
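For orientation, here is a minimal sketch (not the authors' actual calculation; variable names are our own) of the standard two-proportion sample size formula under the stated assumptions. Different software packages and continuity corrections yield somewhat different numbers, so the published figure of 509 per group need not be reproduced exactly by any single textbook formula.

```python
from math import sqrt, ceil
from scipy.stats import norm

p1, p2 = 0.064, 0.0264            # assumed PCR rates in Groups 1 and 2
alpha, power = 0.05, 0.80
z_a = norm.ppf(1 - alpha / 2)     # ~1.96 for a 95% confidence level
z_b = norm.ppf(power)             # ~0.84 for 80% power

p_bar = (p1 + p2) / 2             # pooled proportion under the null hypothesis
n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
      + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
print(ceil(n))                    # ~478 per group, before continuity correction

# Fleiss's continuity correction enlarges this to roughly 530 per group;
# the published 509 presumably reflects the specific formula/software used.
n_cc = n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2
print(ceil(n_cc))
```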
The training program enrolled ophthalmologists who had completed postgraduation in ophthalmology and had experience of performing a minimum of 50 small-incision cataract surgeries. To assess the influence of age on the outcome variables, we divided the trainees into three groups, namely, ≤30 years (very young), 31–40 years (young), and >40 years (middle aged). They were also divided into three groups based on past experience with phacoemulsification surgery, namely, no phacoemulsification surgeries performed (no phaco), assisted in phacoemulsification cases (assisted phaco), and performed up to 10 independent phacoemulsification cases (limited phaco). The ICO-OSCAR score was used to assess the trainees' performance during the training program in both groups, and the trainees were categorized as per the guidelines given in the scoring rubric. This cataract surgery evaluation breaks down the surgical procedure into 20 individual steps, and each step is graded on a scale of Novice, Beginner, Advanced Beginner, and Competent. A description of the performance necessary to achieve each grade in each step is given ( http://www.icoph.org/downloads/ICO-OSCAR-Phaco.pdf ). It collectively divides the surgeon's skill into Novice (score 2), Beginner (score 3), Advanced Beginner (score 4), and Competent (score 5), which allows surgical skill improvement to be measured quantitatively. After the completion of the surgery, the obtained scores were reviewed with the trainee for immediate feedback, and a plan for improvement was developed. The trainees maintained a logbook to record their surgical performance. OSSCAR scoring was used to grade wet-lab cases in Group 2, wherein the trainee was graded as Novice, Beginner, or Advanced Beginner. Trainees performed wet-lab cases until they reached the Advanced Beginner grade of the OSSCAR score. There were 10 trainers in total. All of them were inducted with the standard surgical training protocol, outcome measures, assessment rubrics, and thresholds for “takeover” of cases. All the trainers had a minimum of 5 years of experience in phacoemulsification. As different faculty members were grading the same trainees, all faculty members graded a few recorded surgical videos together to minimize interobserver variability and to ensure that their grading was similar. A senior surgeon evaluated each case preoperatively, and only cases meeting the following criteria were allotted to the trainees: age >50 years, an uncomplicated cataract with nuclear sclerosis Grade 2 to 3, no posterior pole pathology, and not a one-eyed patient. The patients with the above characteristics were randomly allotted to the trainees, and each step of the surgery was supervised by a trainer; if any complication occurred during the live surgery, the faculty member would complete the case to ensure that the surgical outcome and vision were not compromised. An unpaired t test was used for the comparison of the mean OSCAR scores and complication rates of all the cases and their association with age and past surgical experience between the two groups. A Chi-square test was used to compare the number of trainees in the three age groups and in the past surgical experience groups and to compare the visual acuity of the cases at discharge. The software used was SPSS (Statistical Package for the Social Sciences) Version 25.0. A total of 660 surgeries were performed by 33 trainees in Group 1, and 580 surgeries were performed by 29 trainees in Group 2.
The baseline characteristics of the trainees with respect to age and past surgical experience were comparable: the difference between the groups for each of these variables was not statistically significant.

The PCR rate was 20.30% in Group 1 and 9.14% in Group 2 (P < 0.001). The mean OSCAR score was 3.43 in Group 1 and 4.03 in Group 2 (P < 0.001). An unpaired t test was used to compare these results. Visual acuity at discharge was also recorded: 429 cases (65%) in Group 1 and 487 cases (84%) in Group 2 had vision ≥6/18, whereas 30 cases (4.5%) in Group 1 and 14 cases (2.41%) in Group 2 had vision <6/60 (Chi-square test, P = 0.013).

Regarding the association of age and past surgical experience with the mean OSCAR score and complication rates, in every age group and past surgical experience group, Group 2 had better OSCAR scores and lower complication rates than Group 1 (P < 0.001). Within both groups, younger trainees and those with more previous surgical experience had better OSCAR scores and lower complication rates (P < 0.001).

The learning curve was plotted against the OSCAR score case-wise from Case 1 to Case 20. While Group 1 started the training as Novices and exited as Advanced Beginners, Group 2 trainees started as Beginners and exited as Near-Competent trainees. The trainees in Group 2 attained the Advanced Beginner stage 10 cases earlier than those in Group 1.

Overall, the trainees in the structured training group (Group 2) had better OSCAR scores, lower complication rates, and better visual acuity at discharge than the trainees in the prestructured group (Group 1), and the learning curve showed that Group 2 reached comparable competency approximately 10 cases earlier than Group 1. In our study, the mean OSCAR score before implementation of the structured training program was 3.43, and after implementation it was 4.03. The poststructured score of 4.03 is comparable with the 4.3 obtained in the third month of training in the study by Yu et al; the minor difference can be attributed to the period of training, which was 5 weeks in our program versus 3 months in Yu et al's study. Those authors concluded that trainees succeeded in performing phacoemulsification safely and skillfully within a limited, short period of training through wet-lab exposure, deliberate practice in patients, and frequent formative feedback provided by the OSCAR tool. Our study supports and expands the findings of Yu et al; we conclude that the modules had a significant impact on improving the OSCAR score and hence the overall outcome of phacoemulsification surgery. In the multivariate analysis, we concluded that age has a limited effect on the outcome of surgeries, whereas past surgical experience had a positive impact on the final surgical outcome.
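As an illustration of the between-group comparison reported above, the following sketch reruns the PCR comparison as a 2×2 Chi-square test in Python (the authors used SPSS). The event counts are back-calculated from the reported denominators (660 and 580 cases) and PCR rates (20.30% and 9.14%), so they are approximate reconstructions rather than the study's raw data; scipy's default Yates continuity correction for 2×2 tables is applied.

```python
# A sketch of the PCR comparison between groups as a chi-square test on a
# 2x2 contingency table. Counts are back-calculated from the reported rates
# (20.30% of 660 and 9.14% of 580) and are approximate.
from scipy.stats import chi2_contingency

pcr_g1, total_g1 = 134, 660       # ~20.30% PCR in Group 1
pcr_g2, total_g2 = 53, 580        # ~9.14% PCR in Group 2

table = [
    [pcr_g1, total_g1 - pcr_g1],  # Group 1: PCR vs no PCR
    [pcr_g2, total_g2 - pcr_g2],  # Group 2: PCR vs no PCR
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.2e}")  # P < 0.001, as reported
```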
In assessing the learning curve, we observed that whereas the trainees in Group 1 exited as Advanced Beginners (mean OSCAR score 4.1), the trainees in Group 2 exited as Near-Competent surgeons (mean OSCAR score 4.77); hence, the trainees in Group 2 were competent enough to perform independent phacoemulsification cataract surgeries. We also observed that the trainees in Group 2 attained the Advanced Beginner stage 10 cases earlier than those in Group 1, reinforcing the importance of Modules 1 and 2.

A recent literature review of the complication rates of resident-performed cataract surgeries reported that the rate of vitreous loss ranged from 1.8% to 19% and the rate of posterior capsule tear from 0.6% to 18%. Borboli-Gerogiannis et al concluded that the implementation of a comprehensive cataract surgery curriculum focusing on patient outcomes resulted in a decrease in the rate of intraoperative complications. Our study supports these findings: we found a significantly lower PCR rate in Group 2 than in Group 1.

In both groups, the posterior capsular tear rate was higher in older surgeons undergoing training. Although older surgeons might be expected to have more surgical experience, it can perhaps be inferred that this experience did not include exposure to phacoemulsification surgery. In the analysis with respect to past surgical experience in both groups, the greater the past surgical experience, the better the ICO-OSCAR score and the lower the PCR rate. In the "no phaco" group, where the trainees had only performed small-incision cataract surgeries, the complication rate in Group 2 was 12.78% (23 of the 180 cases), which is above the threshold recommended by the World Health Organization, which states that the complication rate should be less than 10%. Hence, we conclude that short-term training programs are not suitable for trainees entirely inexperienced in phacoemulsification surgery. These trainees need to be enrolled in a separate program with a different methodology, more wet-lab exposure, and training over a longer period.

Cognitive enhancement is important for a trainee's situational understanding while performing surgery, for decision-making, and for awareness of the consequences of each decision taken. It is important to teach and assess decision-making along with the technical aspects of surgery, as improper decision-making causes many surgical complications. Our study supports this observation: thorough prior knowledge of the phacoemulsification machine, phacodynamics, anatomical layout, various surgical techniques, and complication management eases the learning curve and supports decision-making.

Wet-lab practice helps in understanding the complex anatomy as well as attaining hand–eye coordination and practicing basic microsurgical techniques. Multiple studies have reported that simulator and wet-lab training improves surgical performance in ophthalmology, shortens the learning curve of residents, shortens surgical time, and decreases surgical morbidity and the risk of iatrogenic trauma. Beyond these objectives, we observed that the wet-lab sessions helped trainees gain foot pedal control and basic control over the phacoemulsification machine and understand the fluidics.
We also observed that using assessment rubrics and repeating wet-lab sessions until the desired competency level was achieved improved efficiency and safety and decreased the complication rate. Video recording review with feedback from the trainer has proven beneficial for understanding shortcomings and for surgical training; it also allows trainees to assess their own surgical performance and correct it in subsequent surgeries. Trainees are expected to perform rigorous self-evaluation of their surgical skills as an index of professional competence. Supervisory discussions help a trainee transition smoothly from phacoemulsification step surgery to complete independent surgeries.

Observational learning is an effective method for learning surgical skills. Research has shown that observational learning activates the same cortical motor regions as physical practice and allows complex skills to be learned more quickly. We also promote observational learning, and we attribute the significant improvement in the overall OSCAR score and the decrease in the complication rate in Group 2, compared with Group 1, to all the modules combined. Strategically introducing phacoemulsification stepwise also helps trainees gain confidence and improves surgical outcomes.

The interpretation of the present study is as follows: a structured training curriculum with enhancement of cognitive abilities, wet-lab training with formative feedback, phase-wise introduction of surgery, observation of surgeries being performed, and video recording review with feedback-assisted cases can make a significant difference in a trainee's training experience, achieving better surgical outcomes and a decrease in complication rates. More past surgical experience is associated with better surgical outcomes. This article proposes a layout for a structured short-term phacoemulsification training program to obtain the maximum benefit in a shorter span of time, along with guidelines for trainee selection for such short-term programs.

One limitation of this study is that the emphasis on monitoring outcomes and the low threshold for "take over" of cases might have contributed to the lower complication rates observed in the structured training group. Another limitation is the lack of follow-up: follow-up of the trainees in the poststructured training group is required to assess their performance and outcomes in their individual setups or organizations.

In a short-term phacoemulsification program, a structured training curriculum can have a significant impact on surgical outcomes, with a considerable decrease in the complication rate and flattening of the learning curve. This positive impact is attributed to formative feedback–assisted wet-lab training, cognitive enhancement, OT observation, stepwise introduction of the surgical steps, and video recording review.
Pulmonary Embolism Following Living Donor Hepatectomy: A Report of 4 Cases and Literature Review
2598e025-bd17-4d6a-a9de-15296023e28d
11846252
Surgical Procedures, Operative[mh]
Liver transplantation is a well-established treatment for end-stage liver disease. However, the shortage of deceased donor organs has been a major limitation, especially as the number of patients awaiting transplantation continues to rise. Living donor liver transplantation (LDLT) has become a way to expand the donor pool and an effective alternative to cadaveric liver transplantation in areas with an insufficient supply of cadaveric livers. In Asian countries, LDLT is becoming more widely adopted. Given that living donors are typically healthy individuals, ensuring donor safety is crucial to the success of LDLT. Despite thorough preoperative evaluations of such donors, the occurrence of nonsurgical complications like pulmonary embolism (PE) remains unpredictable. Although rare, with an incidence of approximately 0.174%, PE is a significant cause of morbidity and mortality in living donors following living donor hepatectomy (LDH). This report presents 4 cases of acute PE in living donors undergoing LDH and discusses their clinical features, diagnosis, and treatment.

Ethical Approval
This study was approved by the Ethics Committee of the Beijing Friendship Hospital, Capital Medical University (No. 2024-P2-362-01).

Case 1
A 46-year-old man donated his right hemiliver to his brother, who had end-stage liver disease due to hepatitis B. The preoperative evaluation met the criteria for living liver donation, and the patient underwent open right hepatectomy under general anesthesia. He had a history of smoking 20 cigarettes per day for 20 years. His preoperative blood test results were as follows: hemoglobin 142 g/L, platelets 375×10⁹/L, prothrombin time (PT) 11.6 s, activated partial thromboplastin time (APTT) 34 s, and D-dimer 0.08 mg/L. His chest computed tomography (CT) scan was normal, and no lower-extremity vascular ultrasound was performed. The patient received standard anesthetic management; a low central venous pressure (LCVP) technique was used to reduce intraoperative bleeding, and Pringle's maneuver was not used. Intraoperative blood loss was 100 mL, urine output 500 mL, and total fluid intake 1700 mL; no blood products were transfused.

On postoperative day (POD) 2, approximately 180 mL of bile-like fluid was drained from the abdominal drainage tube, and abdominal ultrasound suggested bile leakage. Later that evening, he experienced chest tightness and discomfort, and his pulse oxygen saturation (SpO2) dropped to 89%. The D-dimer level was elevated at 5.112 mg/L, and arterial blood gas analysis showed hypoxemia (PaO2 52.4 mmHg). The patient was administered 3 L/min of oxygen via nasal cannula, and SpO2 increased to 95%. To rule out PE, an enhanced CT scan was performed immediately and revealed filling defects in the branches of both pulmonary arteries, indicating pulmonary embolism. Venous ultrasound revealed deep vein thrombosis (DVT) in both legs (thrombosis was detected in both posterior tibial and peroneal veins, as well as in the left anterior tibial and intermuscular veins of the calf). The acute pulmonary embolism (APE) was considered likely to have been caused by dislodgement of a deep vein thrombus from the lower limbs. He was treated with warfarin (4.5 mg/day) and nadroparin calcium (0.4 mL/day) and was advised to remain on bed rest and avoid physical activity.
By POD 15, the patient's clinical symptoms had improved, and a follow-up D-dimer test revealed a level of 1.642 mg/L, indicating improvement of the hypercoagulable state. The patient was discharged in good condition. A follow-up ultrasound of the lower-limb veins 1 month after surgery still showed thrombosis in the left posterior tibial vein, but by the 2-month follow-up, lower-limb deep venous blood flow had returned to normal.

Case 2
A 42-year-old woman donated her left hemiliver to her son, who suffered from a urea cycle disorder. She was previously healthy, with no history of blood clots, smoking, or alcohol consumption. Preoperative blood test results were as follows: hemoglobin 132 g/L, platelets 259×10⁹/L, PT 11.5 s, APTT 29.1 s, and D-dimer 0.078 mg/L. No lower-extremity vascular ultrasound was performed. Anesthesia management and operative procedures were performed according to our center's expertise and clinical routine. The patient underwent open left hepatectomy including the middle hepatic vein (MHV) under general anesthesia. Intraoperative blood loss was 200 mL, urine output 200 mL, and total fluid intake 1300 mL; no blood products were transfused. She was extubated 1 hour after being transferred to the intensive care unit (ICU).

On POD 6, she reported chest tightness accompanied by fever and was found on chest CT to have bilateral pleural effusions and partial atelectasis. Arterial blood gas analysis showed a PaO2 of 86.2 mmHg and a PaCO2 of 25.3 mmHg. Blood tests revealed a C-reactive protein level of 17.9 mg/L. Pneumonia was suspected, and the patient was treated with intravenous cefoperazone-sulbactam (1.5 g), with relief of symptoms. On POD 8, after activity, she developed dyspnea and was diagnosed with PE on contrast-enhanced pulmonary CT, which showed multiple filling defects in the pulmonary arteries. Ultrasound confirmed DVT in the right leg. She was treated with nadroparin calcium (0.4 mL/day) and switched to rivaroxaban (10 mg/day) after 5 days. After anticoagulant treatment, the donor's symptoms improved, and she was discharged on POD 15. A follow-up chest CT 3 months after surgery showed no significant embolism.

Case 3
A 65-year-old man donated his right hemiliver to his son, who had liver failure caused by hepatitis B. The donor had a 2-year history of hypertension (grade 1, moderate risk) managed with regular oral sustained-release nifedipine, with blood pressure generally well controlled. He had no history of smoking or alcohol consumption and had previously undergone internal fixation surgery for a fracture of the left forearm. Preoperative blood test results were as follows: hemoglobin 123 g/L, platelets 226×10⁹/L, PT 11.1 s, APTT 31.3 s, and D-dimer 0.125 mg/L. CT was performed preoperatively, but the lower limbs were not examined for venous thrombus formation. Under general anesthesia, the patient underwent open right hepatectomy. Intraoperatively, central venous pressure (CVP) was lowered pharmacologically with nitroglycerine (0.3–0.5 μg/kg/min), combined with the reverse Trendelenburg position; dopamine and norepinephrine were infused to maintain blood pressure. Intraoperative blood loss was 800 mL, urine output 600 mL, and total fluid intake 2200 mL. Postoperatively, the patient was transferred to the ICU and extubated 5 hours later. On POD 2, he developed nausea, cyanosis, and severe hypoxemia (SpO2 82%).
His heart rate was 94 bpm and blood pressure 111/62 mmHg. Auscultation revealed decreased breath sounds in the left lung. Arterial blood gas analysis showed a PaCO2 of 34.2 mmHg and a PaO2 of 48.9 mmHg, and the serum D-dimer level was 6.402 mg/L. An urgent chest CT scan indicated left lower lobe atelectasis, partial right lower lobe atelectasis, and bilateral pleural effusions. The patient was given nasal oxygen at 4 L/min, with SpO2 fluctuating between 80% and 85%; because of this poor improvement, the oxygen flow rate was increased to 5 L/min, resulting in symptom relief and a rise in SpO2 to 95–97%. Nadroparin calcium (0.4 mL/day) was also given to prevent thrombosis. Repeat D-dimer testing on POD 3 showed a further rise to 8.012 mg/L. Enhanced CT of the pulmonary vasculature revealed a thrombus in the right lower lobe. Concurrently, lower-extremity venous ultrasound showed thrombosis in the left peroneal vein and partial thrombosis in the intermuscular veins of both calves. Based on these findings, the donor was diagnosed with APE and continued to receive subcutaneous nadroparin calcium (0.4 mL/day) for anticoagulation. After 9 days of nadroparin treatment, his condition improved, and follow-up venous ultrasound showed partial recanalization of the veins. Both the pulmonary atelectasis and the pleural effusions improved. Nadroparin calcium was then discontinued, and he was switched to oral rivaroxaban 10 mg once daily. The donor recovered well and was discharged on POD 19.

Case 4
A 57-year-old woman donated her left hemiliver to her son, who had liver failure. She had a history of splenectomy for trauma 20 years earlier and no history of blood clots, smoking, or alcohol consumption. Preoperative blood tests revealed: hemoglobin 142 g/L, platelets 360×10⁹/L, PT 11.2 s, APTT 20 s, and D-dimer 0.175 mg/L. She did not undergo lower-extremity vascular ultrasound preoperatively. Open left hepatectomy including the middle hepatic vein was performed under general anesthesia, with intraoperative application of bilateral lower-limb compression devices for deep vein thrombosis prophylaxis. Intraoperative blood loss was 200 mL, urine output 700 mL, and total fluid intake 1600 mL.

On POD 3, she suddenly developed dyspnea after ambulation. Emergent arterial blood gas analysis showed a PaCO2 of 26.9 mmHg, a PaO2 of 70.7 mmHg, and an SpO2 of 95.80%. The serum D-dimer level was 2.489 mg/L. Enhanced CT of the pulmonary arteries showed extensive PE in both lungs. Subsequent ultrasonography of the lower-extremity veins further revealed venous thrombosis. Given the extent of the embolism, intravenous heparin was initiated for anticoagulation, with close monitoring of coagulation function; the heparin dose was adjusted gradually according to the coagulation results. On POD 8, the patient developed hematuria and abdominal hematomas; because of the bleeding risk, intravenous heparin was discontinued and switched to subcutaneous nadroparin calcium. On POD 12, a follow-up CT scan showed no significant filling defects in the pulmonary arteries or their branches, and lower-limb venous ultrasound indicated patency of the deep veins of both legs, with residual thrombosis in the left calf intermuscular veins. The patient's clinical symptoms improved, and treatment was switched to oral rivaroxaban (10 mg/day).
Finally, the patient was discharged on POD 20. The general information and treatment summaries of the 4 patients are shown in .

LDLT is a life-saving procedure for recipients with end-stage liver disease but provides no direct therapeutic benefit to the donor. Zero donor mortality remains the ideal goal of LDLT. However, PE is an unpredictable but potentially fatal complication following LDH and a major cause of perioperative disability. Although PE is relatively rare, deaths directly related to postoperative PE in living liver donors have been reported. A donor death can greatly undermine the willingness of potential donors to donate and increase concern about the risks of the surgery. The occurrence of PE may also prolong hospital stay, increase medical expenses, raise in-hospital mortality, and reduce satisfaction among patients and their families. Therefore, early diagnosis and timely treatment are crucial for patient survival.

PE is a serious medical condition that typically results from thrombus formation in the lower limbs; the thrombus can dislodge and travel to the lungs, obstructing distal blood flow. Risk factors for thrombosis include venous stasis, blood hypercoagulability, and endothelial damage (Virchow's triad). Additional risk factors include age 40 years or older, obesity, smoking, a history of thrombosis, surgery, hospitalization, varicose veins, thrombophilia, oral contraceptive use, and pregnancy. In hepatic resection, the incidence of PE exceeds 6%. Melloul et al identified major liver resection in normal liver parenchyma as an independent risk factor for PE, likely because the disruption of coagulation balance in the early postoperative period creates a procoagulant state. Thromboelastography performed on living liver donors has also demonstrated that 50% of donors show a sustained hypercoagulable state for up to 10 days after surgery. Early postoperative procoagulant features may therefore increase the risk of venous thromboembolism (VTE) after LDH.

We report 4 cases of acute pulmonary thromboembolism arising from dislodgement of lower-limb venous thrombi following LDH. The patients were aged 42 to 65 years; one was 65 at the time of surgery, notably older than previously reported PE patients. Preoperative lower-extremity venous ultrasound to exclude thrombosis was not performed, and none of the 4 donors received prophylactic anticoagulation. Fortunately, all were promptly diagnosed via enhanced CT scans and lower-extremity venous ultrasound, and with anticoagulation the final prognosis was favorable. Although all 4 patients underwent open surgery, no postoperative PE has been observed in donors undergoing laparoscopic LDH at our center; this does not imply a lower incidence of perioperative PE in laparoscopic surgery compared with open surgery. PE is an obstruction within the pulmonary vasculature caused by blood clots, air, tumors, or fat. During laparoscopic hepatectomy, LCVP, the reverse Trendelenburg position, and pneumoperitoneum increase the risk of gas embolism, potentially leading to higher mortality and more postoperative complications. Intraoperative findings of elevated blood CO2 levels, decreased blood pressure, and arrhythmia should raise suspicion of gas embolism; detection of a large amount of gas in the right heart on transesophageal ultrasound is helpful for diagnosing PE.
We reviewed previously published literature and found several reports of postoperative PE in living liver donors. Notably, an international survey of VTE events after living donor hepatectomy reported data from 51 transplant centers in 20 countries, which together performed 11,500 LDLTs between 2016 and 2020. Of these centers, 20 (39%) and 15 (29%) reported at least one perioperative DVT or PE event, respectively, and one donor death was directly related to postoperative PE. The overall incidence of DVT and PE in living donor liver resection was 3.65 and 1.74 per 1,000 cases, respectively. However, this survey did not provide detailed information on the diagnosis and treatment of PE cases. Reviewing 5 case reports and one single-center retrospective study, we found detailed descriptions of PE diagnosis and treatment in living liver donors, with one case resulting in death due to PE. Of note, all donors had undergone preoperative evaluations that identified neither contraindications to donation nor a history of thrombosis. Among the reported cases, 3 donors underwent right hepatectomy and 3 left hepatectomy; in our series, 2 patients underwent right hepatectomy and 2 left hepatectomy. Major hepatectomy, underlying malignancy, and the absence of DVT prophylaxis are key risk factors for postoperative PE. Furthermore, many studies have shown that right lobe donation tends to carry a higher complication rate than left lateral segment or left lobe donation.

The symptoms of PE are nonspecific, making the diagnosis challenging. PE should be suspected in patients who present with chest pain or difficulty breathing without an obvious cause. D-dimer testing and chest imaging, particularly pulmonary angiography and CT, are crucial in diagnosing PE. D-dimer is a specific degradation product formed from the activation and breakdown of fibrin, primarily reflecting fibrinolytic function. A D-dimer threshold of 500 ng/mL offers a negative predictive value for PE of 97% to 100%. However, because of its potentially low specificity and sometimes limited sensitivity, D-dimer testing is mainly used to rule out PE in clinical practice. Some centers recommend performing contrast-enhanced CT scans for patients with elevated D-dimer levels ≥20 ng/mL, regardless of clinical symptoms, to exclude PE or other VTE. Currently, CT pulmonary angiography is the preferred method for diagnosing PE because of its noninvasive nature, convenience, and high sensitivity and specificity. In our center, all 4 cases of PE were confirmed by enhanced chest CT within 2 to 8 days postoperatively, with initial symptoms of chest tightness, dyspnea, and decreased oxygen saturation. Simultaneously, Doppler ultrasound of the lower limbs revealed DVT in all cases. As a noninvasive, safe, and highly accurate diagnostic tool, lower-limb venous ultrasound is now the preferred method for diagnosing DVT. A history of VTE is one of the most significant risk factors for postoperative VTE; therefore, preoperative screening with venous ultrasound should be standard for all potential donors, and donors with a prior history or evidence of VTE should be excluded from donation to ensure donor safety.
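The high negative predictive value (NPV) of D-dimer cited above follows directly from the test's high sensitivity and the low pretest probability of PE in screening settings. The following is a minimal sketch of that arithmetic; the sensitivity, specificity, and pretest probabilities used are illustrative assumptions, not values measured in this study.

```python
# A minimal sketch of why a sensitive D-dimer assay yields the high negative
# predictive value (NPV) cited above. The sensitivity, specificity, and
# pretest probabilities below are illustrative assumptions, not study data.
def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """NPV = true negatives / all negative test results."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# D-dimer at a 500 ng/mL cutoff is highly sensitive but poorly specific;
# at a low pretest probability of PE, the NPV remains close to 1.
for prev in (0.05, 0.10, 0.20):
    print(f"pretest probability {prev:.0%}: NPV = {npv(0.97, 0.40, prev):.3f}")
```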
For patients with suspected PE and hemodynamic instability (defined as systolic blood pressure <90 mmHg with end-organ hypoperfusion), point-of-care ultrasound (PoCUS) can be valuable in identifying nonspecific signs of PE, such as right ventricular dilation, a flattened interventricular septum, and McConnell's sign (decreased motion of the free wall of the right ventricle with normal or even hyperdynamic motion of the apex). In rare cases, PoCUS may directly detect a thrombus moving between the heart and pulmonary arteries, providing a definitive diagnosis of PE. The advantage of PoCUS lies in its role as a diagnostic rather than an exclusionary tool: for unstable patients with a high clinical suspicion of PE, or in the acute phase, PoCUS can support rapid treatment decisions and provide invaluable immediate information before more comprehensive chest imaging can be performed safely.

Preventive measures for VTE include both pharmacological thromboprophylaxis (PTP) and mechanical thromboprophylaxis (MTP). Among the 6 published case reports, 1 donor did not receive any thromboprophylaxis, 1 female donor received both PTP and MTP, and the other 4 received only one form of prevention. In our report, 3 of the 4 donors used compression stockings intraoperatively to prevent thrombosis, and 1 female donor received MTP in the form of intermittent pneumatic compression; none of the 4 patients received PTP. Recent studies support the use of PTP in liver surgery and have found that PTP combined with MTP significantly reduces the incidence of VTE without increasing the risk of bleeding after liver resection. An international survey has shown that dual-mode thromboprophylaxis is the most common strategy, although some transplant centers still use single-mode prevention or no routine prevention. All donors should receive an individualized prophylactic anticoagulation regimen after surgery. For living liver donors at higher risk of thrombosis, a single prevention mode may be insufficient; a combined pharmacological and mechanical dual-mode strategy should be considered for optimal protection.

PE is a severe, potentially life-threatening complication following LDH that requires rapid diagnosis and intervention. Targeted preventive measures and intensified postoperative monitoring should be implemented perioperatively for individuals at high risk of thrombosis. Suspicion of PE should prompt rapid workup with clinical evaluation, D-dimer testing, pulmonary angiography, and PoCUS. Early diagnosis and timely treatment are critical to prevent mortality from PE in living liver donors.
Shared Decisions: A Qualitative Study on Clinician and Patient Perspectives on Statin Therapy and Statin‐Associated Side Effects
bbdaf7d9-fd64-41f2-a122-6b22d96b8ae1
7763718
Patient Education as Topic[mh]
What Is New?
Using qualitative interviews among patients with statin-associated side effects (SASEs), we identified 5 domains that drive management and communication between clinicians and patients around statin therapy.

What Are the Clinical Implications?
These 5 domains can be used to develop an aid to improve communication and management in patients with SASEs.

The Consolidated Criteria for Reporting Qualitative Research guided our reporting of methods and results. Because of the nature of the data, study data will not be made available to other researchers.

Patient Interviews
We included patients with a documented history of atherosclerotic cardiovascular disease (ASCVD; ischemic heart disease, ischemic cerebrovascular disease, or peripheral arterial disease), aged ≥18 years, receiving care at the Michael E. DeBakey Veterans Affairs Medical Center. Patients with ASCVD were initially identified using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis codes and Current Procedural Terminology codes. The positive predictive value of this algorithm for identifying patients with ASCVD was 95% compared with manual chart review of 200 random patients from this cohort. After manual chart review to confirm the presence of ASCVD, further inclusion criteria were applied to identify patients for qualitative interviews: receipt of primary care at the Michael E. DeBakey Veterans Affairs Medical Center, at least one SASE documented in the electronic health record, and an inability to tolerate moderate- or high-intensity statin therapy, as defined in the 2013 cholesterol management guideline. SASEs were identified using the Department of Veterans Affairs' adverse drug reaction system and confirmed by manual chart review. Last, patients were excluded if they had a history of metastatic cancer or were receiving hospice care. Using these criteria and patient consent, 21 patients with a history of ASCVD and a history of SASEs were screened by the study's research coordinator. We excluded 4 of these 21 patients for the following reasons: a female patient with a history of hypertension but no ASCVD on chart review, a female patient with an SASE to nonstatin therapy only, a female patient who could not clearly recall an SASE to statin therapy, and a male patient with nonalcoholic steatohepatitis in whom statin therapy was not used and who could not recall any documented SASE. Therefore, our final sample for patient interviews included 17 patients, who were interviewed about their experiences with SASEs and clinician-patient communication around the risks and benefits of statin therapy. Patients were invited to participate in a brief telephone interview via an opt-out letter.

Clinician Interviews
Twenty clinicians who regularly prescribe and/or manage statin therapy at a large Veterans Affairs Medical Center in the southeastern United States were interviewed. We included cardiologists, primary care physicians, primary care nurse practitioners, and clinical pharmacists who regularly prescribe statins and/or manage SASEs, to obtain diverse perspectives consistent with a maximum variation sampling strategy. Clinicians were contacted via an opt-out e-mail inviting them to participate in a brief telephone interview about their perceptions of SASEs and their statin management and communication strategies with patients with SASEs.
Our sample size for both patients and clinicians was guided by a maximum variation sampling approach, a purposive sampling approach that ensured diverse perspectives among patients with SASEs from various racial/ethnic backgrounds. This sampling approach also allowed us to capture diverse perspectives from a varied clinician sample based on provider type (ie, physicians, advanced practice providers [nurse practitioners or physician assistants], and pharmacists), number of years of practice in the Veterans Affairs system, and practice specialty (internal medicine or cardiology). The core themes and shared patterns crosscutting this variation facilitated identification of components of a future communication aid with the potential to be widely adopted among a diverse group of clinicians.

After approval by the Institutional Review Board and the Veterans Health Administration Research and Development Committee, a qualitative methodologist conducted all clinician and patient interviews between July 2018 and May 2019 to ensure consistency in data collection. Patients and clinicians gave verbal consent, and all interviews were audio recorded and professionally transcribed. Interviews were semistructured; the full interview guides can be found in Tables . Given sample heterogeneity, especially for the clinicians, thematic saturation was not the goal of our interviews. Rather, our goal was to identify themes that crosscut this heterogeneity within and across both groups, consistent with a maximum variation sampling approach.

Statistical Analysis
A directed content analysis approach guided our analysis and was facilitated by Atlas.ti qualitative software (v.8; Atlas.ti Scientific Development GmbH, Berlin, Germany). A directed content analysis approach was used given the availability of prior research on this topic; our aim was to extend prior findings, which were mostly inferred from large structured data sets or survey questions. These concepts included the variability in communication around SASEs and the impact of social networks on patients' perceptions of SASEs. Anchoring the analysis on prior research also improved its efficiency and allowed us to explore the interplay of perceptions of statins and SASEs between the stakeholders involved in the interview process (clinicians and patients). Furthermore, this approach not only facilitated detailed capture of clinician and patient perspectives on SASEs, it also helped our team understand how these concepts could inform the design, content, and development of a communication aid for a future large-scale implementation study.

The study's qualitative methodologist and research coordinator developed and revised individual codebooks for clinician and patient interviews. The qualitative methodologist and the research coordinator each coded 2 transcripts and compared their findings through a process of negotiated consensus in which coding discrepancies were discussed and resolved. On consensus, the remaining transcripts were analyzed independently, with the qualitative methodologist and the research coordinator spot-checking each other's work for accuracy. Codebooks were largely composed of a priori (ie, deductive) codes gleaned from the interview guide; a few a posteriori (ie, inductive) codes were also developed.
All transcripts were individually coded, and analysts regularly met in consensus meetings to compare findings, discuss coding discrepancies, and modify the codebook to improve the clarity of the codes. When needed, previously coded transcripts were revisited to revise coding. Interim results were presented to the full study team for discussion. Team discussion focused on the scope of the analyses and facilitated identification of the most relevant codes to include in the final analysis; codes were then "pile sorted" into 5 main topic domains by the study's qualitative methodologist. Within each topic domain, closely related and/or overlapping codes were combined to streamline the data. Points of congruence and divergence in clinician and patient perspectives were identified in each domain, which facilitated and informed the identification of the major themes.

Our final sample included 17 patients, largely men (94.1%), with an average age of 66 years (SD, 8.07 years). Approximately 65% were Black patients and 35% were White patients (Table ). Fifteen patients (62.5%) had a history of ischemic heart disease, 6 (25%) had a history of ischemic cerebrovascular disease, and 3 (12.5%) had a history of peripheral arterial disease. The mean time since the last adverse effect from statins was 4.70 years (SD, 2.91 years). Patient interviews lasted between 11 and 50 minutes.

We interviewed 20 clinicians: cardiologists (n=4), primary care physicians (n=5), primary care nurse practitioners (n=6), and clinical pharmacists (n=5). Clinicians were 60% White, 20% Black, 20% Asian, and 10% Hispanic; 50% were women. Interviewed clinicians had, on average, 11 years in clinical practice (Table ). Among the physician clinicians, 5 were board certified in internal medicine, 1 in family medicine, and 4 in cardiovascular medicine. Clinician interviews lasted between 17 minutes and 1 hour 24 minutes, with an average length of 20 to 40 minutes. Patient and clinician data were integrated within our discussion of the 5 themes described below.

A Simple Aid Can Improve Clinician-Patient Communication Around Statins/SASEs
Clinicians and patients agreed that a decision support tool is needed that summarizes recent guidelines, is simple and algorithmic, and includes resources and visuals to improve communication. A patient stated: Because if you do see a picture like that, I know me, a lot of times a picture is worth a thousand words. Another patient mentioned the utility of videos explaining the risks and benefits of statin therapy. …you know, with all the technology now and everything, in fact, your team could probably come up with something…post a video you can click on, you know, make it required if you're taking, or prescribed statins that it's a requirement that you watch this…that's very inexpensive and that's very easy, everybody's got a smart phone now, they can click on it and watch it for 5 minutes. Five minutes should be plenty, I mean if it's kind of laid out. Clinicians expressed how a decision aid could facilitate their statin decision-making and adverse effect management.
The clinical decision support tool should be simple, in the form of an algorithm or decision tree, and colorful, with few branches, circles, arrows, or boxes to minimize confusion. It should summarize recent guidelines on statin use and include statin starting dosages, options for various statins, safety profiles, strategies to manage SASEs (eg, nonstatin alternatives, ruling out secondary causes of SASEs, and statin titration), and strategies to rechallenge statin-intolerant patients. Clinicians felt the decision aid would be most useful during the clinical encounter and should be available in a variety of formats to meet clinician preferences (eg, mobile application, pocket card, and desktop computer link). Clinicians stated: It needs to be brief. I personally like things that are more colorful than not because I think in color. I need something that is, if it's going to be a flowchart, it needs to be more of a true flowchart and not a spider web. Those are too hard to follow. It needs to be something I can easily access and I know has been, that is updated and reliable. … I would say that's something that I would think other providers would look at. They don't want to read through pages and pages of recommendations but just a simple algorithm.

Clinicians also desired a patient-centered communication aid that included disease management education (eg, managing high cholesterol and managing cardiovascular disease); information they can share with patients about the differences between statins and nonstatin therapies; the myths and truths about statins, SASEs, and natural therapies; and information conveying self-care strategies to alleviate myalgias, that adverse effects are reversible, and that not all patients will experience SASEs. The communication aid should be available in a variety of delivery formats (eg, paper versions, such as posters and pamphlets, as well as electronic formats, such as videos, websites, and PDFs) to reinforce information, and it should be engaging and interactive, geared toward patients with low literacy levels, and highly visual.

Clinicians and patients suggested a few elements to make the communication aid more visually appealing. These included depicting a patient having a heart attack (eg, clutching the chest) or stroke (eg, drooping face) to reinforce how the benefits of statins (ie, prevention of an initial or subsequent event) outweigh the risks. They also suggested visuals depicting the frequency of true statin myalgias relative to the overall number of patients taking statin medications, illustrating the severity of statin adverse effects on a scale from mild to severe, a short video depicting an older patient talking about statin adverse effects, visuals that educate patients about why statins are prescribed, and a visual depiction of extreme muscle tiredness or weakness.

SASEs Are a Highly Individualized Experience
Our clinician and patient interview data suggested that the experience of SASEs varied from one patient to another in terms of severity and timing after initiating statin therapy. According to our clinicians, SASEs did not always fit neatly into the commonly used categories of "mild" (eg, myalgias that are tolerable), "moderate" (eg, myalgias that impact activities of daily living), and "severe" (eg, rhabdomyolysis); what may be a mild adverse effect for one patient may be perceived as severe by another.
For example, a cardiology clinician explained how myalgias may be considered severe by a 60-year-old patient with limited functional capacity, leading to statin discontinuation, whereas an older, highly functional 70-year-old patient may not be too bothered by myalgias and would remain adherent to statin therapy. Compared with clinicians, patients attributed a higher number of symptoms or health conditions to the effects of statin therapy. Table provides a breakdown of SASEs reported by clinicians, which stands in contrast to patient-reported SASEs: although both clinicians and patients reported myalgias as an SASE, patients also attributed, for example, the onset of type 2 diabetes mellitus, memory loss, and dermatologic issues to statins.

Patients also reported individual differences in the onset of SASEs, which could occur within a few weeks of statin use or after several years, as one patient described: Well, as you know, early on I was younger and everything, and wasn't bothered too much by medications. You know, I had to take medications but didn't suffer too much the side effects, but as I got older, side effects became much more obvious.

The individualized nature of SASEs was also demonstrated by how soon patients became aware that statins caused their symptoms. One patient immediately attributed his/her itching and hives to the statin therapy, whereas another patient said it took time before he/she attributed the statin as the cause of his/her symptoms. The patient explained: …I only figured it out by accident. I would never think that a drug [could cause side effects], no, I couldn't put that two and two together.

Clinician-Patient Communication Around Statins/SASEs Is Variable
Our interviews revealed that the amount of information about SASEs conveyed by clinicians to patients varied, that some patients recalled learning little about SASEs, and that there was variability in discussions of the risks and benefits of statin therapy for secondary ASCVD prevention. One patient stated: I think, you know, I vaguely remember my provider saying that there are some side effects, but it's nothing to really worry about. The way that she said it was if it's a symptom that turns out to be serious, then get in contact with her, and I guess we would discuss it. It was kind of vague in terms of that, but I don't remember her explicitly telling me what each individual side effect would be or what to look out for. Another patient stated: She told me about the side effect of the soreness. That's the main complaint, and that's exactly what I started getting as far as for me. …She said the medication works. She said if you have to, spread it out like 1 day on, 1 day off, she said, but whatever you do, keep taking it because it works.

Clinicians varied in how much information they conveyed to patients about SASEs, which was confirmed in our patient interviews. One primary care clinician explained how "generally you pick out the most important" adverse effects to discuss given time limitations. One patient understood that clinicians are pressed for time during the clinical encounter, but stated: …I mean I understand some of that stuff, but on the other hand, you know, you're not dealing with a '57 Chevy, it's a person, it's a human life, you know, and it's very important, you know, to take the time to tell you about stuff. In addition to time limitations, clinicians avoided providing patients with the entire list of potential SASEs, as it could make patients more aware or "tuned in" and lead them to develop SASEs that may not be statin related.
A clinician explained: I think that an important part of prescribing statins is to understand what aches and pains the patient has before you begin, because once you mention that these drugs can or might cause symptoms, then people are going to be tuned in to whether they have changes.

Patients recalled learning little about SASEs or, in some cases, having more in-depth discussions about SASEs with their clinicians. Muscle pain and gastrointestinal issues were the adverse effects patients most frequently recalled being told about by their clinicians. "It was mostly about muscle pain," recalled one patient; "they (ie, the clinician) didn't mention the memory [loss] at all." Print information on statins (eg, pamphlets) was not frequently provided to patients, and a few patients suggested that clinicians rely on the prescription insert as the sole information source on the risks and benefits of any medication.

Both patient and clinician interviews described communication around the reasons for statin initiation and how the benefits of statins outweigh the risks. For example, one patient was told that statins could cause adverse effects but was encouraged to try the statin for its health benefits: She (clinician) knew how I feared about certain medicine, that's why she would always sit down and talk with me about things. And she told me that all medicine has side effects but…she said some patients never had a side effect. Then she told me try [the statin], and it was to help lower my cholesterol. And I would benefit from them.

Statin benefits were especially emphasized for patients receiving statins for secondary prevention of ASCVD, as one pharmacist noted: It's almost like you (ie, secondary prevention patients) really don't have a choice, and then the primary [prevention group] is like you may have a choice, but your better choice would be to take it. However, clinicians noted that statin initiation is more difficult for primary prevention patients who have no current symptoms. Yet the decision to engage in statin therapy should align with patients' wishes. A cardiology clinician who frequently managed statin intolerance stated: I can't guide them (ie, the patients), and they have to guide me in terms of taking statins to extend life (ie, quantity of life) but weighing that against the impact of side effects on the patient's quality of life.

Both clinician and patient interviews indicated that patients frequently voiced their concerns about SASEs to their clinicians. Patient response was mixed when it came to clinicians inquiring about the onset of SASEs. Some patients felt their clinicians communicated with them about adverse effects when they came in for routine tests, but a few patients struggled to talk with their clinicians when SASEs developed. For example, one clinician was not convinced that a patient's perceived adverse effects were caused by statin therapy: I told him (ie, the clinician)…I'm having some type of reaction from the medication. And he told me no, that wasn't reaction from that medication. But I already had Googled it online and it had said, you know, having red spots or red streaks down your leg was a reaction to the medication. And I showed it to him…. Another patient believed his/her clinician had the right intentions in initiating statins, even switching statin medications and reducing the dose.
However, the patient still struggled to communicate with the clinician about balancing the benefits of statins with what the patient felt was intolerable statin-induced myalgia: And I knew he (ie, clinician) was right, so what do you tell him? I mean…it's not that I don't want to take them (ie, statins), it's that I should just, I can't take them, they won't work.

Managing SASEs Is Essentially "Trial and Error"
Our analyses revealed that although clinicians' goal was to find a tolerable dose of statin therapy for patients with clinical ASCVD, mostly using a "trial and error" approach, some patients found this approach frustrating given their disabling symptoms. Clinicians stated that their overall goal for patients with SASEs was to find a tolerable statin medication and dose that no longer caused significant, life-disrupting adverse effects. However, management of SASEs was often considered "trial and error." Eliciting patients' experiences with SASEs facilitated the management process. This included asking the patient questions about the adverse effects he/she was experiencing (eg, When did the adverse effects begin? What type of adverse effects? Correlation with statin initiation?), assessing whether the patient was adherent to the medication, determining his/her willingness to continue with statins after experiencing SASEs, and regularly communicating with patients about their experiences with adverse effects. Clinicians described management approaches such as running laboratory tests to assess creatine kinase and/or liver function and evaluation to rule out secondary causes, such as low vitamin D, hypothyroidism, or arthritis. Some clinicians asked patients to take a statin "holiday," suspending the medication to see if symptoms resolved. Changing the statin medication and/or titrating the dose were additional management options, as were nonstatin alternatives such as CoQ10 (taken concurrently with the statin), ezetimibe, and proprotein convertase subtilisin/kexin type 9 inhibitors. A few clinicians also referred patients to a cardiologist or pharmacist.

Some patients were frustrated by this "trial and error" approach to SASE management. One patient said: So I tried it (ie, the statin) with this new doctor, went through that whole process starting with the first statin, then going to the second statin. Did that whole process all over again, and I didn't like it. Another patient explained how his/her clinician changed the statin dose, then changed the statin drug, until after the third medication change it was decided that the patient "wasn't able to take those type of medications." Patients also discussed self-titration efforts to help with adverse effects, with several stopping the medication on their own when adverse effects became intolerable. Patients also discussed the use of nonstatin alternatives such as ezetimibe, fenofibrate, proprotein convertase subtilisin/kexin type 9 inhibitors, niacin, vitamin supplements (eg, vitamins K and D), CoQ10, aspirin, and fish and flaxseed oils. Physical activity and dietary changes were cited as additional alternatives for lowering cholesterol in place of the statin.

The Internet, Social Networks, and Other Media Sources Influence Patients' Perceptions of Statins
Our analyses in this domain revealed that although patients used a vast array of resources (internet, social media, and television) and social networks (friends, family members, and other patients) to inform their views and perceptions of statin therapy, clinicians worried about the authenticity of such sources.
Patient reluctance and/or negative perceptions of statins were fueled, in part, by information patients accessed outside of the clinical encounter. Patients reported using WebMD, Facebook, Epocrates, and search engines like Google and Yahoo to learn about statins and SASEs. Clinician interviews confirmed this patient behavior. A pharmacist said: So most of my patients are scared of statins, I will say, especially I think it's more about being more well informed. A lot of them will go do online Google searches on WebMD and whatnot, so most of our patients have that preconception of statins are going to cause muscle pain or muscle cramps. A primary care clinician characterized patients as "Googleologists" and believed that although an informed patient helps clinicians "stay on your toes," she/he questioned the accuracy of statin information gleaned from Google searches. Similarly, a cardiologist described his/her struggles explaining to patients …that Googling is good, but we have more evidence-based research and clinical trials.

Both patient and clinician interviews revealed how social networks were another source of information on statins. According to clinicians, a patient's friends and/or family members may share their negative experiences with statins, which influences how the patient perceives the drug. One primary care clinician stated: …some of them (ie, patients) will say yes, my sister-in-law is on it or yes my friends are on it and we don't like this medication or something like that. Similarly, patients heard about SASEs from friends, relatives, other patients, and neighbors: "I had heard a lot of different things, a lot of different stories about statins from my neighbors, and my wife's friend, and this, that, and other, who were taking them, and had been taking them, and had side effects from them," noted one patient. She/he wanted to discontinue statin therapy after hearing others' experiences, but his/her clinician recommended statin medication for secondary prevention.

Television advertisements, billboards, print advertisements, and the prescription insert were additional sources of information that influenced patients' perceptions of statins. Regarding television advertisements, a cardiology clinician felt that patients "quote the TV for everything" and that the advertisements lead patients to ask questions about statin adverse effects during the clinical encounter, such as "is this something that's going to damage my liver?" "Is this something that's going to make me diabetic?" "Is this going to cause Parkinson's or Alzheimer's down the road?" Similarly, one patient described not knowing the cause of his/her pain until he/she saw an advertisement on television: But see, my pain went on for quite a while before I knew what was causing it, what medication was causing it. And that's where I had seen the ad, on TV, and I started talking about it with my friends and neighbors and kinfolks, and that's what it was [ie, the statin].

We interviewed 20 clinicians who regularly prescribe and manage statin therapy and 17 patients with ASCVD to understand their perspectives on statin intolerance. Our findings indicate several areas where care could be improved to increase guideline-concordant statin therapy use and communication between clinicians and patients about statin therapy. One such area involves taking a more patient-centered approach to conceptualizing the severity of SASEs.
Categories of mild/moderate/severe should take into account the degree to which perceived symptoms impact patients' quality of life. What may be "mild" to one patient may be "severe" to another; patients also varied in adverse effect onset, number of symptoms, and how symptoms impacted their quality of life and activities of daily living. More effective and open communication about the impact of SASEs on patient quality of life and activities of daily living is needed. For example, encouraging clinicians to inquire more deeply about the impact of adverse effects may help build trust and rapport between patients and clinicians so the patient feels heard, which may positively impact patients' willingness to use statin therapy. Our findings also highlight why clinicians need to inquire about and understand how social networks and media sources drive patients' perceptions of statin therapy and SASEs, at times even more strongly than what is communicated by clinicians in a healthcare setting.

Our findings are in line with a previous study, which reported how patients' perceptions of SASEs influence statin therapy use and adherence. In addition, our results identified several emergent themes: SASEs are highly variable from one patient to another, and the "trial and error" approach to managing SASEs can be frustrating for patients. Our results also identified which components patients and clinicians desire in an aid to improve communication and management of SASEs. Clinicians cited the "power of suggestion" as another reason to avoid extensive discussion of SASEs, to prevent patients from incorrectly associating their symptoms with the statin medication. This thinking is in line with the "nocebo effect," a phenomenon that refers to adverse events that result from expectations of harm from a therapeutic intervention.

Patients relied on internet tools, such as Google, WebMD, or Facebook, social networks, and media as sources of (mis)information that influenced their statin perceptions and decision‐making capacity. This is supported by a study in which >40% of patients reported that their healthcare decisions are affected by social media. Therefore, clinicians should take a more active role in discussing potential statin misinformation at the time statins are initiated, or when patients are rechallenged after the onset of adverse effects. This conversation should cover known adverse effects of statins in addition to focusing on their potential benefits. As patients are becoming more active consumers of health information than in the past, clinicians should help guide them to trusted sources of information, either on the internet or through educational handouts for patients. Clinicians should balance the discussion of statin risks and benefits to ensure that the decision to initiate or rechallenge statin therapy, even for high‐risk secondary prevention patients, is a shared decision between the patient and the clinician. As one clinician noted, only the patient can "guide" the clinician in terms of a quantity versus quality of life decision about statin therapy. Clinicians reported that a decision‐making and communication aid would equip them to navigate through decision checkpoints and optimize communication with their patients to efficiently bridge the current gaps in care.
A simple and highly visual decision tool to facilitate clinician decisions about statin initiation, changing/titrating statins after development of adverse effects, and possible alternate treatments could be useful. Clinicians could use this communication aid during a patient's clinic visit to visually depict the development of atheroma and its effects, make patients aware of their level of risk, show the frequency of true SASEs versus perceived SASEs among all statin users, and depict the severity of statin adverse effects on a graded scale with checkpoints and alternative treatment options. This could potentially make patients more involved in the decision‐making process and increase their willingness to try strategies recommended by their clinicians in light of SASEs. Clinicians recommended multiple formats for the communication aid, including mobile applications, desktop shortcuts, and pocket cards. This would make the materials easily accessible during the relatively short clinic visit, rather than clinicians having to navigate websites and lose time that could otherwise be used for communication and trust building with patients. Although several decision aids, such as the statin choice decision aid and the diabetes mellitus medication choice aid, are available and may facilitate shared decision‐making, studies have shown that their use is low even when embedded within the electronic health record. This is not unexpected, as clinicians are notably overburdened with information, and studies have shown clinical reminder fatigue and burnout that result in clinicians ignoring decision support. Therefore, the design and implementation of such a decision aid must be done with careful review of existing workflows, garnering participation and buy‐in from clinical users, and using human factors best practices.

From introducing a different statin to lowering the strength or adding nonstatin alternatives, most clinicians strove to keep patients on statins at a tolerable level to ensure good quality of life. However, this "trial and error" statin management approach, as recommended by treatment guidelines, was frustrating for some patients. Therefore, it is important for clinicians to set expectations at the time of statin initiation about SASEs as part of the benefit versus risk discussion. Clinicians should also clearly lay out their treatment plan to patients, emphasizing that almost two thirds of patients with SASEs are able to tolerate some form of statin therapy with this "trial and error" approach, and that they are there to work with the patient if he/she has further SASEs. Clinicians should also reassure patients that if they do not tolerate this "trial and error" approach to statin therapy, there are other medication options, especially in patients with established cardiovascular disease. To improve communication with patients and to facilitate shared decision‐making in treatment options, most of the interviewed clinicians agreed that an adverse effect management algorithm with treatment pathways and options would benefit both patients and clinicians. It would allow clinicians to follow a more evidence‐based treatment approach when dealing with patients with SASEs, and it would assist them in communicating to patients the importance of taking statins and the overall goal of keeping patients on statins while allowing them to have a good quality of life.
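To illustrate what such a stepwise pathway might look like when encoded, the sketch below strings together the management options the interviewed clinicians described (statin holiday, evaluation for secondary causes, dose reduction, statin switching, nonstatin alternatives). The ordering of the steps, the rechallenge limit, and the function and field names are hypothetical illustrations only; this is not clinical guidance and not the aid the clinicians proposed.

from dataclasses import dataclass, field

@dataclass
class PatientState:
    statin: str                 # current statin medication
    dose_mg: float              # current daily dose
    failed_statins: list = field(default_factory=list)  # statins stopped for SASEs

def next_management_step(state: PatientState, symptoms_resolved_off_statin: bool) -> str:
    """Suggest the next checkpoint in a hypothetical trial-and-error pathway."""
    if not symptoms_resolved_off_statin:
        # Symptoms persisting during a statin "holiday" point away from the
        # statin as the cause; evaluate secondary causes instead.
        return "evaluate secondary causes (eg, vitamin D, thyroid function)"
    if state.dose_mg > 10:
        return f"rechallenge with a lower dose of {state.statin}"
    if len(state.failed_statins) < 3:
        return "switch to a different statin"
    return "consider nonstatin alternatives (eg, ezetimibe, PCSK9 inhibitor)"

# Hypothetical patient who improved off-statin after two failed statins
print(next_management_step(PatientState("statin B", 10, ["statin A", "statin B"]), True))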
Patients would benefit in that the treatment pathways would be clearer and the process less frustrating.

Our findings are limited given that we conducted one‐time interviews. Patient interviews may also have been limited by recall bias and the length of time since first statin initiation. However, many patients had clear recollections of SASEs and the impact of those SASEs on their lives. Given that this study was performed within the Department of Veterans Affairs, we were limited in terms of the number of women patients with ASCVD who were included in the qualitative interviews. Although this study focused on the veteran patient population receiving care in a single medical center, our results are consistent with prior observations in the literature. This provides reassurance about the generalizability of our findings and increases the transferability of our findings to nonveteran patient populations.

Statin use among patients with clinical ASCVD remains suboptimal. This may be attributable to clinician‐ and patient‐related factors, including poor clinician‐patient communication, the individualized nature of SASEs, suboptimal management of SASEs, and greater influence on patients' statin‐associated perceptions by nonclinician resources. A targeted communication aid to improve clinician‐patient communication, combined with a decision aid to guide clinicians on different treatment options for patients with SASEs, could improve statin use in this patient population. Finally, patients who use the internet or social media for some of their medical information may appreciate receiving a list of trustworthy medical sites and sources of scientifically sound medical information.

This work was supported by research funding from the Department of Veterans Affairs Health Services Research and Development (IIR 16‐072). This work was also supported by the Houston Veterans Affairs Health Services Research and Development Center for Innovations grant (CIN 13‐413).

Dr Virani has received honorarium from the American College of Cardiology (Associate Editor for Innovations, acc.org). He also serves on the steering committee for the PALM (Patient and Provider Assessment of Lipid Management) registry at the Duke Clinical Research Institute (no financial remuneration). Dr Ballantyne reports grant/research support, all significant (all paid to institution, not individual), from Abbott Diagnostic, Akcea, Amgen, Esperion, Novartis, Regeneron, Roche Diagnostic, Sanofi Synthelabo, the National Institutes of Health, the American Heart Association, and the American Diabetes Association; and consulting for Abbott Diagnostics, Akcea, Amarin, Amgen, Astra Zeneca*, Boehringer Ingelheim, Denka Seiken, Esperion, Intercept, Janssen, Matinas BioPharma Inc, Merck*, Novartis, Novo Nordisk, Regeneron, Roche Diagnostic, and Sanofi‐Synthelabo* (*significant where noted; remainder modest). Dr Navar reports grant/research support from the National Heart, Lung, and Blood Institute (K01HL133416); research and consulting from Janssen, Amgen, Amarin, Sanofi, and Regeneron; and consulting from Astra Zeneca, Novo Nordisk, Esperion, BI, and Novartis. The remaining authors have no disclosures to report.
Medication errors during simulated paediatric resuscitations: a prospective, observational human reliability analysis
f7a5d00d-30bd-41ac-80d9-837468db7a1b
6886970
Pediatrics[mh]
Background
Medication errors are among the leading causes of avoidable harm in healthcare worldwide and up to three times more common in children than in adults. The paediatric emergency environment, characterised by urgency and fraught with interruptions, is one of the clinical areas most vulnerable to error. Medication administration in emergencies is complex, as it requires successful interactions between different teams of medical and nursing staff, as well as between individual members of these teams. An additional challenge relates to obtaining relevant medication information and translating this into the required dose and concentration of the correct drug to be administered by the correct route for the clinical indication, all in a necessarily short space of time. Medication errors in general, and medication administration errors in particular, are both under-detected and under-reported, such that little is known of their incidence or impact during resuscitation. However, medication errors have been reported in 7 out of 10 simulated paediatric resuscitations, with other recent simulation studies suggesting 26% to 70% of administered medicines being given at the wrong dose. Laboratory studies analysing syringes prepared for anaesthetic use have found at least 15% to be greater than 20% discrepant from the intended drug concentration.

The broader, systems view of medical error, heralded by the Institute of Medicine's 'To Err is Human' report, saw the widespread adoption of Reason's organisational accident model in healthcare. More recently, human reliability analysis (HRA) techniques, previously commonplace only in other high-risk industries, have become increasingly used in healthcare research. HRA is based on the understanding that neither humans nor systems can be error-proof, and asserts that to improve safety and reliability, a thorough analysis of system vulnerabilities at a task level is needed, taking into account human-human and human-machine interactions. Medication safety researchers have previously used an HRA technique, the systematic human error reduction and prediction approach (SHERPA), to identify system vulnerabilities in ward-based medication administration, anaesthesia and general surgery. This approach, however, has not been used quantitatively in medication safety research and has not been applied to paediatric resuscitation. Our objectives were to describe the incidence, nature and severity of medication errors in simulated paediatric resuscitations, and then use HRA to understand the contributory role played by individual process step discrepancies, with a focus on those discrepancies contributing to large magnitude and/or clinically significant errors.

Study design and setting
This prospective observational study was conducted from April 2017 to November 2017 in a medical simulation facility within a large teaching hospital. The hospital has a paediatric emergency department (seeing 27 000 children each year) and a comprehensive paediatric inpatient service (admitting 5000 each year). The hospital used electronic prescribing in the inpatient setting, but during resuscitations, medications were more commonly ordered on paper prescription charts. We recruited resuscitation teams of four clinicians, which were randomised to participate in one of two standardised simulated paediatric resuscitation scenarios.
The study was approved by the Health Research Authority and the hospital concerned. National Health Service ethics approval was not required. Participants gave written informed consent.

Patient and public involvement
The research team held a workshop with parents to get their feedback on the proposal, develop the patient and public involvement and engagement (PPIE) plans, and identify future areas for research. We actively sought attendees through INVOLVE's 'People in Research' website, social media and Imperial College London's existing networks. Our team has also participated in a PPIE event run by the Royal College of Paediatrics and Child Health, in collaboration with MedsIQ ( http://www.medsiq.org ) and Medicines for Children ( http://www.medicinesforchildren.org.uk ), two UK-based paediatric medication safety initiatives.

Participants
Eligible participants were a convenience sample of medical and nursing staff from the departments of paediatrics and paediatric emergency medicine at the study hospital. Participants were assigned into teams comprising a senior doctor (a specialist registrar, with at least a year of prior experience as a registrar), a junior doctor, a senior nurse (with at least 5 years' nursing experience) and a junior nurse.

Clinical scenarios
The two scenarios were:
1. Prolonged status epilepticus in an 8-month-old, 8 kg child.
2. Presumed meningococcal sepsis in a 10-month-old, 9 kg child.
The two scenarios were designed by a collaboration of paediatric nurses, emergency physicians, intensivists, general paediatricians and anaesthetists. Face validity was established by an independent expert panel of six, with representation from each of these professional groups, including two lead paediatric clinical nurse educators. It was deemed that the two scenarios were both similarly demanding and clinically sound, with treatment recommendations corresponding closely to UK Resuscitation Council and Royal College of Paediatrics and Child Health teaching cases.

Data sources and measurement
A simulated paediatric resuscitation bay was created. The mannequin used was a SimBaby V.2 (Laerdal Medical, Stavanger, Norway), and the syringe pump stack consisted of Alaris PK MK4 units (Becton Dickinson, Franklin Lakes, USA). All relevant print materials (eg, British National Formulary for Children and local guidelines) and hospital information technology systems as well as external internet access were available. Participants were requested to prescribe, prepare and administer medications exactly as per usual practice, to use mobile applications or websites as they would in clinical practice and to telephone specialist colleagues if required. A paediatric intensivist, the hospital lead for paediatric simulation, ran the scenarios. She provided standardised clinical information as live feedback and answered questions regarding the child's response to treatment or their current condition when needed. A Scotia Medical Observation Training System (smots, Scotia UK, Edinburgh, UK), with two 3-axis, ceiling-mounted video cameras, and three mobile, high-definition cameras equipped with boom microphones, was used. Both nurses in each team wore head-mounted high-definition video cameras (GoPro Inc, California, USA). The video recordings were analysed by a research nurse with 10 years' experience in paediatric intensive care.
Outcome measures
We reserved the term 'medication error' to describe an overall error with respect to a particular drug administration as a whole, after having been administered to a patient, and the term 'discrepancy' to refer to observed deviations from expected local practice at the level of the individual task. This approach has been successfully used by other medication safety researchers. Variation in clinician practice and human adaptation to the complex process of medication ordering, preparation and administration typically result in many minor task discrepancies that may not individually, or even in combination, result in a medication error or patient harm. Other discrepancies almost always result in medication errors. For example, a prescribing or pump-programming discrepancy is highly likely to result in a medication error. To identify the most important task discrepancies, we assessed all observed discrepancies to establish the extent to which they may or may not have contributed to any resultant medication errors. The study objectives and associated analyses are summarised in the accompanying table.

Significance assessment of task discrepancies
All task discrepancies were classified by the nurse assessor according to the contribution made by the discrepancy, as follows:
- No contribution: the discrepancy did not contribute to a medication error.
- Minor contribution: some contribution made to a medication error.
- Major contribution: the task discrepancy led directly to a medication error.

Medication errors
Medication errors included any errors in dose, administration rate, concentration, drug, route of administration, method of administration, timing or delay in administration. Operational definitions were specified for each of these. Briefly, dosing errors were defined as a greater than 10% deviation from the recommended dosing range (DRDR) at the study site. Any deviation from the recommended rate of administration (DRDRate) was calculated in a similar manner, and deviations of more than 10% were considered to be medication errors. Where there was a greater than 25% discrepancy in the DRDR or DRDRate, the errors were considered 'large magnitude'. Deviations from the recommended concentration (DRC) of greater than 10% from the concentration specified in local guidance were also included as medication errors. To identify delayed administrations, the time taken for the dose to be 'ready for delivery' was calculated as the time for the doctors to obtain any medication information required plus the nurse-led preparation time. The time to be ready for delivery was considered 'prolonged' when a particular team took more than double the median time for that specific drug across the entire study without clinical cause for the delay, as determined by the nurse assessor. For example, if a medication administration was interrupted to reassess the patient clinically or to administer another medication as a priority, a prolonged time would be excluded as an error on clinical grounds.

Severity assessment
There are few validated tools that can be used to assess the potential severity of medication errors without knowledge of patient outcomes and that are thus usable in simulated studies. One of these tools is that of Dean and Barber, based on four to five experts independently assessing each error on a 0 to 10 scale, with their mean score used as an index of severity. Mean scores under 3 suggest errors of minor severity, scores between 3 and 7 moderate errors, and scores greater than 7 severe errors.
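To make these operational thresholds concrete, the short sketch below encodes the deviation and severity classifications described above. The function names and the worked example values are illustrative additions, not part of the study protocol.

def percent_deviation(administered: float, recommended: float) -> float:
    """Deviation from the recommended dose or rate, as a percentage (DRDR/DRDRate)."""
    return abs(administered - recommended) / recommended * 100

def classify_deviation(deviation_pct: float) -> str:
    """Apply the study's >10% and >25% thresholds."""
    if deviation_pct <= 10:
        return "within tolerance"      # not counted as a medication error
    elif deviation_pct <= 25:
        return "medication error"      # >10% deviation
    return "large magnitude error"     # >25% deviation

def severity_band(expert_scores: list[float]) -> str:
    """Dean and Barber index: mean of 4-5 expert scores on a 0 to 10 scale."""
    mean = sum(expert_scores) / len(expert_scores)
    if mean < 3:
        return "minor"
    elif mean <= 7:
        return "moderate"
    return "severe"

# Worked example: 6 mg given where 8 mg was recommended is a 25% deviation
print(classify_deviation(percent_deviation(6, 8)))   # medication error
print(severity_band([4, 5, 3, 6, 5]))                # moderate (mean 4.6)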
We applied this severity assessment with two paediatric intensivists, one paediatric anaesthetist, one senior critical care nurse and one senior clinical pharmacist assessing each error.

Discrepancies at the level of the task
A hierarchical task analysis (HTA) was developed based on a similar framework for ward-based medication administration and assessed for face validity by five senior nurses in the study hospital. A generic human error taxonomy, based on the SHERPA external error modes with one additional error mode, 'information not sought', was used to code observed discrepancies against the HTA. Where there were more than two discrepancies at a single step for a specific administration, the nurse assessor made a subjective assessment of which had the greater overall consequence and assigned an error mode to that discrepancy only. To capture 'root-cause' system vulnerabilities, steps where an action was performed correctly, but which perpetuated a previous medication error, were not classed as discrepancies. An example would be a correct volume calculation based on an incorrectly prescribed dose. In this example, the volume calculation, which persisted but did not directly cause the error, would not be classed as a discrepancy, whereas the incorrect prescription would be.

Data management and analysis
Medication errors were described according to the type of error, method of administration (eg, injection, infusion, continuous infusion) and stage of the medication use process (eg, medication ordering, preparation, administration) during which the error occurred. Error rates were calculated using the total number of applicable administrations as the denominator. Step discrepancies were presented as counts, grouped by task and contribution to medication error. Discrepancy rates were calculated as the percentage of discrepancies that made a major, minor or no contribution to an error, with the number of observed discrepancies at each process step as the denominator. Of those discrepancies making major contributions to medication errors, the proportion that led to clinically significant errors (severity score >3) and/or large magnitude errors (DRDR or DRDRate >25%) was also calculated. There is no literature that quantifies the extent to which a step discrepancy making a minor or major contribution to an error is of greater significance than a step discrepancy that makes no contribution to an error. For a weighted, 'heat map' HRA analysis, it was therefore necessary to attribute different weights to discrepancies that resulted in error and to those that did not. Substep discrepancies were therefore weighted, as agreed by the expert panel, as follows:
- No contribution: weight = 1.
- Minor contribution: weight = 10.
- Major contribution: weight = 40.
The total weighted significance score for each step was thereby calculated for each error mode.
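As a minimal sketch of the weighted 'heat map' calculation described above, the weighted significance score for one HTA step (and error mode) is simply the weighted sum of its discrepancy counts per contribution class; the counts used in the example are hypothetical.

WEIGHTS = {"none": 1, "minor": 10, "major": 40}  # weights agreed by the expert panel

def weighted_significance(counts: dict[str, int]) -> int:
    """Total weighted score for one HTA step and error mode."""
    return sum(WEIGHTS[contribution] * n for contribution, n in counts.items())

# Hypothetical step: 12 discrepancies with no contribution, 2 minor, 1 major
print(weighted_significance({"none": 12, "minor": 2, "major": 1}))  # 12 + 20 + 40 = 72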
Interobserver reliability
One of the 15 simulations was reanalysed by an additional independent nurse not involved in the simulations. Spearman's rank correlation coefficient was calculated for 38 continuous variables (eg, doses, elapsed times) and Cohen's kappa for 20 categorical variables (eg, reconstitution fluid, parameters for labelling quality assessment). The data set concerning the medication errors and timing parameters was treated separately from the data set containing the parameters for the HRA (the task discrepancies).

Results
Data were collected during 15 simulations according to participant availability. Eight simulations were of the prolonged seizures case, and seven of the meningococcal sepsis case. Participants comprised 30 doctors and 30 nurses, each of whom completed one simulation. For categorical variables, Cohen's kappa values were between 0.862 (medication error data set) and 0.954 (HRA data set). Continuous variables were only present in the medication error data set, for which Spearman's rank coefficient was greater than 0.904 for all variables.

Errors and discrepancies by stage of medication use and process substep
Overall, 884 step discrepancies were observed, excluding dependent downstream discrepancies after an initial discrepancy. Of these 884 step discrepancies, 174 (20%) were linked to a medication error, with 70 (8%) assessed as making a major contribution to an error, 104 (12%) making a minor contribution and 710 (80%) making no contribution. A heat map of the significance-weighted HRA data demonstrates the relative contributions of discrepancies, at each step and by each error mode, to medication errors. The discrepancy counts per step, as well as the percentage of both large magnitude and clinically significant errors with a major contribution made at each HTA step, were also summarised.

Errors and discrepancies during the administration phase
Of all observed discrepancies, only 28 (3%) occurred during administration. These resulted in 11 wrong rate errors (five moderate, two severe), nine wrong method errors (three moderate, one severe) and one severe wrong time error. Discrepancies during the administration phase constituted a third of all discrepancies that made a major contribution to a clinically significant error. Infusions in particular were prone to administration errors. Of the 17 discrepancies observed during infusion rate calculations or when programming the infusion pump for intermittent infusions, 14 were of major consequence, and these accounted for 23% of all clinically severe errors. Seven discrepancies (four making a major contribution) occurred when determining the delivery rate for continuous infusions.

Medication errors
Participants conducted 180 medication administrations. Overall, errors were observed reaching the patient for 52 drug administrations (29%) and at least once in every simulation. Of these errors, 30 (58%) were assessed as being of minor severity, 16 (31%) as moderate and 6 (12%) as severe. There were 27 large magnitude errors (52% of all errors), in which the DRDR/DRDRate was greater than 25%. Of all erroneous administrations that reached the patient, only two (4%) were noticed by staff after administration and therefore may have been reported in clinical practice. A detailed error analysis, including a description of the 10 most severe errors, is provided in the accompanying tables.

Hierarchical task analysis
The full HTA is shown in the accompanying figure and covers all steps assessed in the paediatric emergency drug administration process.

Errors and discrepancies during medication ordering
We observed 170 discrepancies during the ordering phase. Five of the 22 clinically significant medication errors were due to discrepancies during medication ordering, with three of these due to incorrect dose information retrieval from the British National Formulary for Children. The majority of the remaining discrepancies (136) were due to incomplete verbal medication orders on the basis of which drug preparation commenced, two of which resulted in medication errors.
Of the 180 written medication orders examined, there were six discrepancies observed, all of which were corrected by the clinicians and therefore made no contribution to any dosing errors, but did result in one delayed administration.

Errors and discrepancies during medication preparation
During medication preparation, 310 discrepancies were observed, representing 35% of all observed discrepancies. These contributed to one medication error involving the wrong drug, 10 dose errors and 20 diluent or dilution errors. The retrieval of preparation and administration information from the online intravenous medications guide was the step most likely to contribute to medication errors during medication preparation, with 42 discrepancies (19 making a major contribution to a medication error, one a minor contribution), resulting in nine clinically significant medication errors (severity score >3). The retrieval of incorrect information and taking an excessively long time to identify the correct information within the guidance were the most common discrepancy error modes. There were seven discrepancies when converting milligrams to millilitres of undiluted drug, and 18 discrepancies (six making a major contribution) when, after having made the correct calculations, nurses withdrew either the incorrect amount of undiluted drug or the incorrect amount of diluent into the syringe.

Double-checking
Overall, 259 discrepancies were observed during the double-checking phase, 72 (28%) of which made a minor contribution to a medication error. Checking the route and method (eg, infusion or bolus) of administration was the most frequently omitted. We observed 29 medication errors that were made during medication ordering and preparation but had not yet been administered to the patient at the point of double-checking. These errors were thus potentially 'interceptable', but all ultimately reached the patient. Of these errors, in 14 cases, the double-checking interaction between the nurses included the incorrect step but failed to identify it as incorrect.

Discussion
This prospective observational study is the first in paediatric emergency medicine to include a quantitative HRA, allowing identification of the task discrepancies making the greatest contribution to medication error. We identified at least one medication error in all 15 simulations, and a large magnitude or clinically significant error in 12 of these.

Implications for research and practice
This study highlights the need for research to optimise clinicians' use of electronic resources containing medication preparation and administration information. We were not able to pinpoint the precise steps at which the current electronic intravenous medications guidance system in the study hospital proved vulnerable to misinterpretation. Research to further understand the steps that need attention may serve as a useful basis from which to refine, and if needed, redesign such systems. This study also reaffirms that performing complex arithmetic in high-stress clinical environments is a considerable contributor to medication error. With the purpose of addressing medication safety in paediatric resuscitation, the literature has been dominated by studies looking at 'resuscitation aids', most commonly length-based tapes. These aids couple weight estimation with a suggested dose for a limited number of medications, but usually do not provide comprehensive preparation and administration support. It is unlikely that length-based tapes would have decreased the rate of medication error in this study.
Further clinical research is required to determine the effectiveness of new digital tools that do support preparation and administration, such as those that have shown promising results in simulated studies. Human factors methods have been used in other high-risk industries to define system vulnerabilities for building safer systems. By using quantitative HRA, this study provides evidence for the prioritisation of research efforts directed towards new interventions to address the most important system weaknesses.

In terms of implications for practice, one of the most unexpected findings in this investigation was the uncovering of 'purely mechanical' task discrepancies resulting in medication errors. During drug preparation, clinicians were observed drawing up incorrect volumes of medications or diluents even though all calculations were correct. This suggests that efforts seeking to address medication safety in cognitively demanding environments using clinical education strategies or contemporary technologies must do so without disregarding the seemingly 'simplest' aspects of drug preparation. The reliability of information exchanges between healthcare professionals similarly needs improvement. Verbal medication orders in particular are inconsistent and error-ridden. Particular attention should be paid to medication orders given verbally in the emergency setting, using approaches such as the recipient verbally confirming the medication and dose being prepared. More importantly, however, there is an urgent need for research to explore how to bring greater effectiveness to checking and double-checking more broadly. These are steps intended to defend patients from error, but which are too often ineffective.

Comparison with previous literature
Historical heterogeneity of the definitions of medication error and the variability in reporting metrics make comparison with previous literature difficult. Additionally, there are few simulated studies and no relevant clinical studies in paediatric resuscitation, making comparator data scarce. Prescribing error rates in the (non-simulated) emergency setting have been reported to be between 10.1% and 16% of all orders; our study reports a lower rate of 5%, although this difference may be at least partly due to different error definitions. Our study reinforces similar findings in another recent analysis suggesting that preparation and administration errors may be more common, but is the first to highlight the extent to which these errors go undetected. Other simulated studies have reported error rates for the administration of intravenous bolus medication of between 15.5% and 26.5%; in our study, it was 31%. The referenced studies, however, reported only on dose errors, and not any other medication error types. Only seven out of the 24 medication errors we observed for bolus doses were dose errors. Medications given by intermittent infusion were the most error-prone in our study. There are no studies that investigate emergency administration of intermittent infusions in sufficient detail to provide a basis against which to compare this finding. Medications given by continuous infusion are potentially the most complicated in paediatric emergencies. In addition to the preparation steps for intermittent infusions, staff generally have to convert infusion rates in milligrams or micrograms per kilogram per minute to infusion rates of millilitres per hour. A recent trial of a digital application reported errors in 70% of continuous vasopressor infusions in the control arm.
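As a worked example of the rate conversion described above, and of the arithmetic burden it places on staff, the sketch below converts a weight-based dose rate into a pump rate in millilitres per hour. The dose, weight and concentration values are hypothetical.

def mcg_kg_min_to_ml_h(dose_mcg_kg_min: float, weight_kg: float,
                       concentration_mcg_ml: float) -> float:
    """Convert a weight-based dose rate (mcg/kg/min) to a pump rate in mL/h."""
    mcg_per_hour = dose_mcg_kg_min * weight_kg * 60
    return mcg_per_hour / concentration_mcg_ml

# Hypothetical: 0.1 mcg/kg/min for an 8 kg child, syringe prepared at 60 mcg/mL
print(round(mcg_kg_min_to_ml_h(0.1, 8.0, 60.0), 2))  # 0.8 mL/h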
Despite this increased cognitive demand, however, we observed the lowest incidence of medication error for continuous infusions, at 18%. Administration of continuous infusions in our hospital seemed to be relatively well supported by an online/paper tool.

Limitations
Although considerable efforts were made to replicate the paediatric emergency department environment in a state-of-the-art facility, the simulation environment may not reflect the clinical environment during a genuine emergency. Participants were not blinded to the purpose of the study, and therefore this investigation is potentially subject to preparation bias. This investigation was conducted at a single site, and all three of the nurse video assessors were from the same academic unit in the study hospital. It is thus possible that, despite best efforts at standardisation, the results may be skewed to reflect the expectations of medication practice at the study site. Although this is one of the largest simulation studies in paediatric emergency medicine investigating medication errors, the sample size made it impossible to make probabilistic assessments of the relationship between step discrepancies and medication errors. Furthermore, the discrepancy assessment was made by a single nurse assessor. Save for the extract on which inter-rater reliability was tested, we did not have the resources to evaluate 884 discrepancies by more than one clinician, reaching consensus on each. Finally, we did not investigate the potential role of clinical pharmacists in the resuscitation setting. There is convincing evidence that the presence of clinical pharmacists reduces medication errors in this setting, and the emergency department at the study site does, at times, benefit from the assistance and expertise of clinical pharmacists. However, their presence is not routine during resuscitations.

Conclusions
Overall, we identified errors in 29% of all simulated medication administrations, only two of which were detected by participants, with 40% of these likely to result in moderate or severe harm. HRA revealed a number of error-prone steps, many of which occurred during preparation and administration of correctly ordered medications. The task most likely to result in erroneous medication administration was ineffective retrieval of correct medication preparation and administration instructions from intravenous medication guidance. This study has highlighted an urgent need to optimise existing systems and to commission new approaches to increase the reliability of human interactions with the emergency medication administration process.
Multifaceted Approaches in Epithelial Cell Adhesion Molecule-Mediated Circulating Tumor Cell Isolation
0dbf10d4-9f3d-4172-a91e-09b37dc68981
11901967
Biopsy[mh]
Molecular recognition-mediated circulating tumor cell (CTC) capture is an advanced technique used to isolate and identify cancer cells circulating in the bloodstream. Since CTCs are shed from primary tumors and travel through blood vessels, they play a crucial role in the development of cancer metastasis, making them a valuable target for diagnostic purposes. CTCs exhibit resistance to anoikis, a form of programmed cell death induced by detachment from the extracellular matrix. This resistance allows CTCs to survive in the bloodstream, facilitating their dissemination and metastatic potential. In addition, CTCs can possess stem cell-like characteristics (including self-renewal and differentiation capabilities) related to epithelial-to-mesenchymal transition, the process by which epithelial cells acquire mesenchymal traits. Immune evasion through the expression of peptides that bind to class I molecules of the major histocompatibility complex, aggregation with platelets, and the formation of clusters also promote their survival and facilitate the metastatic process. CTCs are typically larger than most cellular blood components and often exhibit morphological heterogeneity compared to the more uniform morphology of normal blood cells. CTCs also exhibit distinct electrophysiological traits, which can be exploited for their isolation and characterization. Cell stiffness is a biomechanical property that distinguishes, e.g., ovarian cancer cells from non-malignant cells. Cancer cells exhibit higher mechanical resistance than their healthy counterparts and may thus be better equipped to migrate into distant organs. Detecting and analyzing CTCs can provide critical insights into cancer progression and treatment efficacy. The challenge lies in capturing these cells effectively from the blood, as they are present in extremely low concentrations among millions of normal blood cells.

The Epithelial Cell Adhesion Molecule (EpCAM) and other epithelial markers have been the cornerstone of the detection and capture of circulating tumor cells through liquid biopsy techniques. However, their effectiveness is limited during epithelial-to-mesenchymal transition (EMT), a process in which epithelial cells acquire mesenchymal traits, leading to the downregulation of epithelial markers. EMT exhibits variability at both the individual cell and population levels. Instead of fully converting to a mesenchymal state, cells often acquire a mix of epithelial and mesenchymal markers, morphologies, and behaviors. This transition enhances the metastatic potential of tumor cells but also hinders their detection using EpCAM-based methods. Incorporating other markers into CTC detection, or combining them with EpCAM, is a reliable strategy to enhance the sensitivity and specificity of liquid biopsies, particularly in cases where EpCAM expression is diminished due to EMT. By targeting a combination of epithelial and mesenchymal markers, researchers can aim to develop more comprehensive and reliable methods for CTC isolation and analysis, thereby improving the prognostic and therapeutic management of cancer patients. CTCs with low or no EpCAM expression could serve as a valuable alternative for tumor analysis, even though they may be less prognostically significant compared to EpCAM high-expressing CTCs. They are especially useful when no CTCs can be detected using EpCAM-based methods.
The aim of this review is to reveal the role of the EpCAM in circulating tumor cell isolation, addressing its limitations due to the heterogeneity of CTCs and the epithelial-to-mesenchymal transition. The article explores alternative biomarkers and innovative strategies for CTC capture, evaluates EpCAM-based and EpCAM-independent enrichment techniques, and emphasizes the need for multifaceted approaches that combine analysis of molecular and biophysical characteristics to improve liquid biopsy sensitivity and specificity for cancer diagnostics, prognostics, and therapeutic monitoring.

One of the most well-known antigens for epithelial-type CTCs is the epithelial cell adhesion molecule (EpCAM), a transmembrane glycoprotein first reported in 1979. EpCAM plays a significant role in cell adhesion, signaling, migration, proliferation, and differentiation in epithelial cells. This protein has therefore become a focal point in cancer research due to its high expression in many epithelial-derived cancers, where it is often overexpressed compared to normal tissues. In healthy epithelial cells, the EpCAM mediates cell–cell adhesion and helps maintain tissue structure and integrity. It plays a key role in the formation of tight junctions, which hold cells together within tissues. The EpCAM is also involved in regulating cell proliferation and differentiation through signaling pathways. While it is expressed at lower levels in most normal tissues, it becomes upregulated in regenerative or proliferative processes. In many carcinomas, the EpCAM is overexpressed, a feature that has been linked to increased cell proliferation, migration, and invasion. This overexpression is a hallmark of various cancers, including breast, colon, prostate, ovarian, and pancreatic cancers. In addition to its established role as a cancer stem cell (CSC) or CTC biomarker, the EpCAM is a relevant target for CTC capture and targeted therapy.

Circulating tumor cells are promising biomarkers, as they can signify active systemic cancer. The current gold standard for CTC detection is the CellSearch (CS) system (Menarini Silicon Biosystems, Bologna, Italy), a liquid biopsy analysis device primarily based on positive selection, approved by the U.S. Food and Drug Administration (FDA) for detecting and enumerating circulating tumor cells in patients with metastatic cancers, including breast, prostate, and colorectal cancers. It operates by isolating EpCAM+/CK+/CD45−/DAPI+ (CS-CTC) CTCs from 7.5 mL blood samples using immunomagnetic separation. Despite improvements, the sensitivity of the CellSearch system is still not always adequate, especially in cancers with low CTC counts. Detecting rare CTCs in early-stage cancers or in cases with minimal residual disease can be challenging. The reliance on the EpCAM for CTC capture may lead to underestimation of CTCs undergoing epithelial–mesenchymal transition, during which EpCAM expression diminishes. In addition, variability in sample handling and processing can also impact the reproducibility of the results. Standardizing protocols across laboratories is essential to ensure consistent and reliable outcomes. In summary, while the CellSearch system has significantly contributed to the field of liquid biopsy-based cancer recognition by enabling the detection and enumeration of CTCs, ongoing research aims to address its limitations and enhance its clinical applicability.
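As a schematic illustration, the sketch below encodes the CS-CTC phenotype definition given above (EpCAM+/CK+/CD45−/DAPI+) as a simple classification rule. The Cell structure and the boolean marker representation are simplifying assumptions; real instruments gate on continuous fluorescence intensities and cell morphology rather than binary calls.

from dataclasses import dataclass

@dataclass
class Cell:
    epcam: bool   # captured by anti-EpCAM ferrofluid
    ck: bool      # cytokeratin staining (epithelial marker)
    cd45: bool    # leukocyte common antigen
    dapi: bool    # nucleated, intact cell

def is_cs_ctc(cell: Cell) -> bool:
    """CellSearch-style CTC call: epithelial, nucleated, and not a leukocyte."""
    return cell.epcam and cell.ck and not cell.cd45 and cell.dapi

# A nucleated EpCAM+/CK+ event that also stains CD45+ is rejected
print(is_cs_ctc(Cell(epcam=True, ck=True, cd45=True, dapi=True)))  # False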
The IsoFlux system (Fluxion Biosciences, Inc., Alameda, CA, USA) is a semi-automated platform that enriches circulating tumor cells by combining EpCAM-based immunomagnetic positive selection with microfluidic technology, similar to the FDA-approved CellSearch system. Blood samples are mixed with antibody-coated magnetic beads, and the prepared sample is then introduced into a microfluidic cartridge equipped with a cap that enhances cell recovery by providing a high-cell-density environment for the captured cells. As the sample flows through the cartridge, a magnetic field is applied, causing the magnetic beads bound to CTCs to adhere to the upper surface of the microfluidic channel. This magnetic capture ensures that CTCs are effectively separated from other blood components. After the capture phase, the cartridge is removed from the instrument, and the beads with the captured CTCs are retrieved from the cap. This mechanism ensures transfer efficiency, allowing high-quality, viable cells to be collected for downstream analyses. Studies using cancer cell lines have demonstrated an increase in recovery rates from 40% to 90% with IsoFlux. This technology has been effectively utilized to isolate CTCs in patients with bladder, prostate, ovarian, liver, and pancreatic cancers.

CellCollector (Gilupi, Potsdam, Germany) is the first in vivo cell capture technique with CE approval, showing potential in detecting CTCs in various cancer types, including lung, prostate, and neuroendocrine tumors. The device is a medical wire bearing anti-EpCAM antibodies that is placed directly in the bloodstream of patients through a catheter inserted in a vein of the arm. While it remains in the vein for 30 min, it comes into contact with a larger volume of blood, allowing the capture of CTCs in vivo. Earlier studies have shown that the CellCollector system may detect CTCs in a higher proportion of patients compared to CellSearch, which may be due to eliminating the problem of small sample volumes and reducing the need for extensive sample preparation. For instance, in lung cancer patients, the CellCollector identified CTCs in 58% of cases, whereas CellSearch detected them in 27%. Similarly, in neuroendocrine tumor patients, the CellCollector found CTCs in 97% of cases, compared to 47% with CellSearch. However, EpCAM dependence is still a concern, since the device potentially misses certain CTC populations. The technique is also much more invasive than CellSearch, because it requires the insertion of an indwelling catheter into a vein, which may be less convenient for patients.

In summary, CellSearch is FDA-approved and widely used in clinical practice, while CellCollector is mainly utilized in research settings. Both technologies depend on EpCAM for CTC capture, which may limit their ability to detect CTCs undergoing EMT. This is a common limitation that researchers are currently addressing by exploring additional markers and methodologies.

Integration of Other Biomarkers into EpCAM-Based CTC Enrichment
Other biomarkers used in specific types of cancers include human epidermal growth factor receptor 2 (HER2) and epidermal growth factor receptor (EGFR). HER2 (also known as ErbB2) is a member of the human epidermal growth factor receptor family, a group of tyrosine kinase receptors. While HER2 does not have its own ligand-binding domain, it forms heterodimers with other receptors in the EGFR family.
This dimerization activates downstream signaling pathways involved in cell proliferation, survival, and differentiation. HER2 is overexpressed in certain cancers, most notably breast cancer, as well as some cases of gastric, ovarian, and lung cancers. Of all breast cancer cases, 10–20% are HER2 positive and the remaining 80–85% are considered HER2 negative, even though HER2 expression can be detected by immunohistochemistry (IHC). HER2 overexpression is often associated with aggressive tumor behavior, poor prognosis, and increased risk of metastasis. HER2 is used as a biomarker to capture and identify CTCs, and it is also a key target in cancer treatment. Monoclonal antibodies, such as trastuzumab and pertuzumab, are used to treat cancer by blocking HER2 signaling, leading to reduced tumor cell proliferation.

EGFR (also known as ErbB1 or HER1) is a transmembrane tyrosine kinase receptor. Upon binding its ligand (e.g., epidermal growth factor), EGFR undergoes dimerization and autophosphorylation, which activates signaling pathways like the MAPK/ERK and PI3K/AKT pathways, driving cell division, survival, and migration. It is often overexpressed or mutated in various cancers, such as lung, head and neck, and colorectal cancers and glioblastoma. Mutations lead to uncontrolled receptor activation and are associated with increased tumor growth and spread. EGFR is used to capture and label CTCs, especially in non-small cell lung cancer (NSCLC) and head and neck cancers.

Combining HER2 and EGFR with EpCAM in circulating tumor cell enrichment strategies enhances the capture of heterogeneous CTC populations, addressing the limitations of relying solely on EpCAM-based methods. By targeting multiple surface proteins, this approach captures CTCs that may not express EpCAM but do express HER2 or EGFR, thereby encompassing a broader range of tumor cell phenotypes. Utilizing a combination of markers reduces the false negatives associated with single-marker reliance and increases the likelihood of detecting CTCs with variable marker expression. Isolating CTCs expressing specific markers like HER2 and EGFR enables detailed molecular characterization, aiding the selection of targeted therapies and the monitoring of treatment efficacy. Integrating antibodies against EpCAM, HER2, and EGFR in CTC isolation platforms improved capture rates and provided a more comprehensive profile of CTC populations in breast and non-small cell lung cancer patients. However, the understanding of HER2 expression is evolving from a simple binary classification of HER2-positive and HER2-negative cancers toward a wider spectrum. This shift acknowledges that HER2 expression levels vary, influencing treatment decisions and outcomes; therefore, there is a need for novel biomarkers.

Mesenchymal markers could also be viable choices for the detection and characterization of CTCs in liquid biopsy-based strategies. Activated leukocyte cell adhesion molecule (ALCAM) was identified as a significant marker for CTCs with low EpCAM expression in pancreatic ductal adenocarcinoma as well as in brain metastases of non-small cell lung cancer, and was validated as a novel alternative surface marker on EpCAM-low CTCs. Targeting N-cadherin, which is usually upregulated in EMT, has also been explored to improve the isolation of CTCs.
Combining N-cadherin targeting with traditional epithelial markers like EpCAM has been shown to enhance CTC detection and broaden the variety of captured CTC phenotypes. For instance, one study demonstrated that combining EpCAM with N-cadherin-targeted isolation improved CTC detection and expanded the range of captured CTC phenotypes. However, caution is advised when targeting N-cadherin for CTC isolation, as it may also capture circulating endothelial cells (CECs), leading to false positives. To distinguish between CTCs and CECs, additional markers such as vascular endothelial–cadherin can be used.

Combining EpCAM with cell-surface vimentin (CSV) in circulating tumor cell enrichment could also be a promising strategy to enhance the capture of heterogeneous CTC populations while addressing the limitations of relying solely on EpCAM-based methods. Vimentin is an intermediate filament protein that is normally expressed in mesenchymal cells as part of the cytoskeletal structure; it provides structural integrity, supports cell shape, and is involved in cellular processes like migration and wound healing. Vimentin is a key player in EMT, a process through which epithelial cells acquire mesenchymal characteristics. CTCs undergoing EMT are often more invasive and resistant to capture by traditional markers like EpCAM. Anti-vimentin antibodies can be used to isolate these EMT-CTCs, which are crucial for studying the metastatic process. The GenoCTC (Gencurix, Seoul, Korea) device utilizes microfluidic magnetophoresis and a specialized isolation chip with optimized ferromagnetic wire patterns to enrich circulating tumor cells using anti-human EpCAM beads, targeting mesenchymal–epithelial transition (MET) markers, or anti-human vimentin beads, targeting epithelial–mesenchymal transition (EMT) markers. The On-chip Sort (On-Chip Biotechnologies, Tokyo, Japan) is an innovative benchtop cell sorter that utilizes disposable microfluidic chips for efficient cell sorting. In this process, fluorescent antibodies are utilized for positive selection of different cells in the processed sample. Antibodies against a variety of markers, such as cytokeratin, vimentin, and CD45, can be used during the cell sorting. This method was successfully used in spike-in experiments comprising a series of lung cancer cell lines with different EpCAM expression levels. Similarly, circulating tumor cells in peripheral blood from high-risk populations and cancer patients were enriched and identified using a positive sorting method that employed liposome magnetic beads targeting the EpCAM and vimentin.

The M30 neoepitope is a specific fragment of cytokeratin 18 (CK18) exposed during early apoptosis, which serves as a valuable biomarker for detecting apoptotic epithelial cells, including circulating tumor cells. A study integrating the CellSearch system with M30 antibodies demonstrated that M30-positive CTCs could be detected in over 70% of carcinoma patients, with the proportion of M30-positive CTCs varying between 50% and 80%. This finding suggests that monitoring M30 expression on CTCs can provide insights into disease progression and treatment response. Integration of circulating tumor DNA (ctDNA) analysis into such tests can improve the sensitivity and specificity of liquid biopsies, enabling more comprehensive tumor profiling.
For instance, combining EpCAM-based or other surface-based CTC detection methods with ctDNA analysis reportedly enhanced diagnostic accuracy, e.g., the sensitivity of primary lung cancer diagnosis, which may be clinically useful and could enhance early detection of the disease. The combined use of CTCs and circulating cell-free DNA (ccfDNA) in liquid biopsy tests has shown potential in metastatic breast cancer management. A study revealed that CTC and ccfDNA levels had a combined effect on patient outcomes: patients with high levels of both CTC and ccfDNA exhibited increased mortality risk compared to those with low levels of both.

Label-Free CTC Enrichment: A Challenger of Immune-Affinity-Based Methods
The Parsortix system (ANGLE plc, Guildford, UK) is a liquid biopsy technology for capturing and harvesting CTCs from blood samples, which sorts target cells based on their biophysical characteristics. It features a microfluidic cassette that isolates cells according to their unique size and deformability, effectively differentiating them from other blood components. Unlike traditional methods that depend on specific surface markers, the Parsortix system does not require antibodies, allowing for the capture of a diverse range of CTC phenotypes, including those undergoing EMT. This approach preserves the viability of the captured CTCs, enabling a variety of downstream applications. In May 2022, the Parsortix PC1 system achieved FDA clearance for isolating CTCs from blood samples of patients with metastatic breast cancer, marking a pivotal advancement in liquid biopsy technology and highlighting its promise for non-invasive cancer diagnostics and monitoring. ClearCell FX1 is a similar automated system, developed by Biolidics Limited (Singapore). Utilizing a label-free approach like the above, it preserves the integrity and viability of CTCs, facilitating various downstream analyses. The system employs Dean Flow Fractionation (DFF) within a microfluidic biochip (CTChip FR1) to separate CTCs based on size, deformability, and inertia, eliminating the need for antibody-based capture. Label-free systems like Parsortix and ClearCell FX can capture a broader range of CTCs, including those with mesenchymal characteristics, which immunoaffinity methods may miss due to marker variability. On the other hand, immunoaffinity methods often achieve higher purity by specifically targeting known markers, although they may not capture all CTC subtypes; their label-free counterparts, in turn, may co-isolate non-CTC cells, affecting purity. All discussed CTC enrichment technologies are summarized in the accompanying table.

Negative Selection
EpCAM-based positive selection methods bind CTCs directly. However, they often co-isolate non-specific cells, leading to contamination by leukocytes or other blood cells. In addition, some CTCs may have low or no EpCAM expression due to the EMT process. Negative selection does not rely on EpCAM expression, allowing for the capture of a broader range of CTC phenotypes by removing unwanted cells rather than directly capturing the target cells. This is typically achieved by using antibodies against markers found on non-target cells, such as CD45, a marker for leukocytes. In the CellSearch® system, the primary method for isolating CTCs is positive selection using magnetic nanoparticles coated with antibodies against EpCAM.
However, the system also employs antibodies specific for CD45, a marker expressed on leukocytes, and these are used to identify white blood cells during the analysis phase rather than as a separate negative selection step. In other words, the CellSearch system does not incorporate a distinct negative selection phase; instead, it depends on positive selection for CTC enrichment and uses CD45 staining to exclude leukocytes during downstream analysis, thereby enhancing the purity of the sample [ , , , , ]. Negative selection eliminates CD45+ leukocytes, thereby reducing the non-specific background and allowing for a cleaner pool of CTCs for downstream analysis, enhancing the efficiency of EpCAM-dependent liquid biopsy methods. Superparamagnetic Dynabeads (Thermo Fisher Scientific, Waltham, MA, USA), which can be coated with antibodies targeting specific cell surface markers (anti-EpCAM for positive selection, anti-CD45 for negative selection), are a good example of this method . When coated with anti-CD45 antibodies, Dynabeads can effectively bind to and remove CD45-positive leukocytes from a sample. This process enriches the remaining cell population for CTCs, leaving viable, untouched target cells for detection and quantification studies. The RosetteSep (Stemcell Technologies™, Vancouver, BC, Canada) CTC Enrichment System also utilizes a negative selection approach to isolate circulating tumor cells from peripheral blood. This method involves crosslinking unwanted cells, such as leukocytes, to red blood cells (RBCs) to form rosettes, which are subsequently removed through density gradient centrifugation, resulting in an enriched population of CTCs. By not relying on positive selection markers like EpCAM, the RosetteSep technique preserves a broader spectrum of CTC phenotypes, including those with low or no EpCAM expression. A study published in 2022 highlighted the application of the RosetteSep system in enriching CTCs for downstream analyses. The researchers employed a bead-based CTC enrichment strategy, which facilitated the isolation of CTCs without the bias introduced by EpCAM-based selection methods. This negative selection technique is particularly advantageous for capturing CTCs that have undergone epithelial–mesenchymal transition, a process during which cells may downregulate EpCAM expression. By avoiding EpCAM-based positive selection, the RosetteSep system ensures a more comprehensive representation of CTC heterogeneity, which is crucial for accurate cancer diagnosis and monitoring [ , , ]. Researchers have also developed a two-step process combining immunomagnetic bead-based negative selection with an optically induced dielectrophoresis (ODEP)-based microfluidic device. ODEP is a technique that utilizes light patterns to create virtual electrodes on a photoconductive surface, enabling the manipulation of microscopic particles, including cells, without physical contact [ , , ]. This approach first depletes CD45-positive leukocytes using magnetic beads, followed by the use of ODEP to further purify the CTCs. The method achieved high cell purity (81.6–86.1%) and maintained cell viability, facilitating downstream analyses . A similar approach utilized negative selection to remove leukocytes, followed by ODEP in a microfluidic chip to isolate viable CTCs. This method significantly increased the purity of the resulting CTC fraction while maintaining viability, making the cells suitable for further molecular and functional analyses .
The CTC-iChip, developed at the Massachusetts General Hospital, is an advanced microfluidic device designed to isolate circulating tumor cells from blood samples through a label-free, negative selection process . The process begins with deterministic lateral displacement (DLD), a size-based separation technique that removes smaller components such as red blood cells and platelets from the blood sample. This step enriches the sample for nucleated cells, including CTCs and white blood cells (WBCs). Following DLD, the sample undergoes inertial focusing, aligning the remaining cells into a single stream within the microfluidic channel. This alignment is crucial for the precise application of downstream separation techniques. In the final step, the device employs magnetophoresis to deplete WBCs, which are labeled with magnetic particles via antibodies targeting specific markers (e.g., CD45, CD66b, and CD16). When subjected to a magnetic field, these labeled WBCs are deflected away from the main flow, leaving behind an enriched population of unlabeled, viable CTCs. By not relying on positive selection markers like EpCAM, the CTC-iChip captures a broader range of CTC phenotypes, including those that may have undergone epithelial–mesenchymal transition (EMT) and lack traditional epithelial markers. In addition, the negative selection approach preserves the viability of the isolated CTCs, making them suitable for various downstream applications, including molecular analyses and culturing. The device achieves significant depletion of non-target cells, resulting in high-purity CTC samples. The CTC-iChip can process blood samples at a rate of approximately 13 mL per hour, making it suitable for clinical applications where larger sample volumes are necessary. The ability to isolate viable CTCs without bias toward specific markers allows for comprehensive molecular characterization, aiding in personalized cancer treatment strategies. The efficiency and scalability of the CTC-iChip make it a promising tool for integrating liquid biopsy techniques into routine clinical practice [ , , , , , , ]. Certain WBC subpopulations, particularly granulocytes, can nonetheless express low levels of CD45 and may also stain non-specifically for cytokeratin, leading to potential misclassification as CTCs. To mitigate this issue, researchers have explored additional exclusion markers such as CD15, a marker strongly expressed on granulocytes, which has proven effective in distinguishing these cells from CTCs. Incorporating CD15 into the exclusion criteria, alongside a highly specific CD45 antibody, significantly reduces false-positive rates. Flow cytometry analyses confirm the specificity of CD15 for granulocyte subpopulations, enhancing the accuracy of CTC detection. Implementing a dual-exclusion strategy that combines CD15 and CD45 antibodies, along with optimized cytokeratin antibody selection, has been shown to reduce false-positive rates from 25% to 0.2%, as sketched in the example below. This approach underscores the importance of robust exclusion criteria and high antibody specificity in immunoassays for CTC identification. By effectively eliminating interfering WBC subpopulations, this method enhances the reliability of CTC detection, facilitating more accurate patient monitoring and potentially improving clinical outcomes . Furthermore, the presence of CD45+/EpCAM+ cells also complicates CTC isolation and identification, hindering clinical translation.
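In computational terms, this dual-exclusion gating reduces to a simple per-event rule. The following R sketch illustrates that logic only; the column names and intensity thresholds are hypothetical placeholders, not values from the cited studies.

```r
# Minimal sketch of dual-exclusion gating (CK+ / CD45- / CD15- / nucleated).
# All thresholds below are arbitrary illustrative values, not published cut-offs.
classify_events <- function(df,
                            ck_cut   = 100,  # cytokeratin positivity threshold
                            cd45_cut = 50,   # leukocyte exclusion threshold
                            cd15_cut = 50) { # granulocyte exclusion threshold
  with(df, ifelse(
    dapi > 0 & ck >= ck_cut & cd45 < cd45_cut & cd15 < cd15_cut,
    "candidate CTC",
    ifelse(cd45 >= cd45_cut | cd15 >= cd15_cut,
           "leukocyte/granulocyte", "unclassified")
  ))
}

# Three simulated events; event 2 mimics a CK-cross-reactive granulocyte
events <- data.frame(
  dapi = c(1, 1, 1),
  ck   = c(250, 180, 10),
  cd45 = c(5, 10, 400),
  cd15 = c(2, 300, 350)
)
classify_events(events)
#> "candidate CTC" "leukocyte/granulocyte" "leukocyte/granulocyte"
```

The second event shows why CD45 alone is insufficient: it would pass a CK+/CD45- gate, and only the added CD15 criterion removes it.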
CTCs can form microemboli, comprising diverse phenotypic populations such as mesenchymal CTCs, and can aggregate into homotypic and heterotypic clusters. These clusters interact with other circulating cells, including immune cells and platelets, potentially enhancing the malignancy of the clusters. Notably, these microemboli represent a prognostically significant subset of CTCs . The isolation and analysis of CTCs are pivotal in advancing cancer diagnostics, prognostics, and therapeutic monitoring. Traditional EpCAM-based enrichment methods have significantly contributed to this field; however, their limitations, particularly concerning CTCs undergoing epithelial-to-mesenchymal transition (EMT), necessitate the exploration of alternative or complementary strategies. Incorporating additional markers such as HER2, EGFR, and vimentin into enrichment protocols has demonstrated enhanced capture efficiency and a more comprehensive representation of CTC heterogeneity. Furthermore, label-free technologies offer promising avenues by isolating CTCs based on biophysical properties, thereby circumventing the biases associated with marker-dependent methods. These methods generally preserve cell viability better, facilitating downstream analyses such as culture and molecular profiling, whereas immunoaffinity methods may impact cell viability through antibody binding. The choice between label-free and immunoaffinity-based CTC isolation methods therefore depends on the specific requirements of the study or clinical application, considering factors such as the need to capture heterogeneous CTC populations, the desired purity levels, and the planned downstream analyses. The integration of negative selection techniques further refines CTC isolation by depleting non-target cells, thus enriching the sample for viable CTCs suitable for downstream analysis. In summary, these advancements underscore the importance of developing multifaceted approaches that address the inherent heterogeneity of CTCs. By leveraging a combination of molecular and physical characteristics, future liquid biopsy platforms can achieve higher sensitivity and specificity, ultimately enhancing their clinical utility in personalized cancer management.
A Pilot Study on Predicting Ocular Hypertension in Chinese Patients Undergoing Femtosecond Laser-Assisted Penetrating Keratoplasty
550f624a-7a7d-4de6-a90a-ebcce5ed197e
11898841
Ophthalmologic Surgical Procedures[mh]
Penetrating keratoplasty (PKP) is a widely utilized technique for treating various corneal diseases and restoring vision . However, ocular hypertension (OHT) remains a significant postoperative complication, with reported prevalence rates ranging from 5.5% to 68% - . Moreover, sustained elevation of intraocular pressure (IOP) can lead to corneal edema, which may ultimately result in corneal graft failure - . Femtosecond laser-assisted PKP (Fs-PKP) has improved outcomes compared to conventional PKP by enhancing incision precision, reducing early postoperative astigmatism, and lowering the rates of graft rejection and endothelial cell loss - . Despite these advancements, OHT remains a major concern after Fs-PKP. Previous epidemiological studies have comprehensively investigated the risk factors for developing OHT after keratoplasty from preoperative, intraoperative, and postoperative perspectives , - . These factors include preexisting glaucoma, elevated preoperative IOP, keratoplasty performed in conjunction with intraocular lens (IOL) removal or exchange, and aphakic or pseudophakic lens status , - . However, many of these studies focus on specific aspects and often lack a comprehensive and systematic evaluation of all potential risk factors. Moreover, the applicability of these risk factors to patients undergoing Fs-PKP is uncertain. The precise cutting capability of the femtosecond laser significantly reduces the risk of anterior chamber collapse during surgery, thereby preserving the integrity of the anterior chamber angle structure. Additionally, the improved alignment between the corneal graft and the recipient bed minimizes postoperative disruption to the angle structure. These technological advancements contribute to maintaining stable aqueous outflow pathways, reducing the likelihood of intraocular pressure elevation and lowering the incidence of postoperative OHT - . Additionally, patients selected for Fs-PKP often have relatively healthier ocular conditions and may come from higher socioeconomic backgrounds, which could significantly impact postoperative outcomes. Furthermore, anatomical differences in the Chinese population, including shallower anterior chamber depth, thinner corneas, and flatter corneal curvature compared to Western populations, may also affect postoperative IOP control and OHT incidence. Therefore, the risk factors associated with OHT following conventional PKP may not be fully applicable to Chinese patients undergoing Fs-PKP, given the unique anatomical and procedural differences in this population. Identifying these factors is crucial for enhancing clinical outcomes and developing tailored preventive strategies, ultimately improving patient care for those undergoing Fs-PKP. Recent research indicates that the emergence of predictive models, which integrate varied data types and employ sophisticated statistical and machine learning methods, offers significant advantages over traditional epidemiological studies by providing more comprehensive and precise risk evaluations - . Historically, predictive models in the field of corneal transplantation have primarily focused on graft failure - . In our previous work, we developed a predictive model for graft failure following Fs-PKP, which has proven useful in guiding preoperative risk assessments .
However, there is currently no predictive model specifically for postoperative OHT following either Fs-PKP or conventional PKP, particularly within the context of the unique Chinese population. This gap in the literature highlights the need for a targeted predictive model to address this critical issue. Therefore, this study aims to develop and validate a predictive model for assessing the risk of OHT following Fs-PKP in the Chinese population. By incorporating a wide range of risk factors and leveraging advanced statistical techniques, we hope to provide a valuable tool for clinicians to identify high-risk patients preoperatively and to tailor postoperative management strategies accordingly. Participants and study design Study outcome and definitions Statistical analysis This retrospective cohort study, conducted in accordance with the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines ( ), included 274 patients who underwent Fs-PKP performed by the same ophthalmic surgeon at Nanjing First Hospital between January 2019 and June 2021. Nanjing First Hospital is a premier institution for corneal transplantation in Jiangsu Province, Eastern China, and is particularly esteemed for its proficiency in Fs-PKP procedures . Patients younger than 18 years, those with incomplete medical records, or those unable to comply with the follow-up schedule were excluded from the study. Additionally, for patients who underwent Fs-PKP in both eyes, only the first-operated eye was included in the analysis to avoid bias and redundancy, as both eyes typically share similar medical histories and systemic conditions. Consequently, the final cohort comprised 238 patients with 238 eyes. This study was approved by the Ethics Committee of Nanjing First Hospital (KY20240318-KS-01) and was conducted in strict accordance with the principles of the Declaration of Helsinki. Due to the retrospective nature of the study, the Ethics Committee waived the requirement for informed consent. Nonetheless, the research adhered rigorously to ethical guidelines throughout the process. As described in our previous study , the Fs-PKP procedure was conducted utilizing the Wave Light Femtosecond Laser 200, which facilitates the creation of precise mushroom-shaped incisions through a 200 kHz repetition rate and pulse energies between 160 and 200 nJ. Postoperative care adhered to a standardized regimen, with all patients receiving antibiotics and corticosteroids to minimize the risk of infection and control inflammation. Pharmacological management was customized for patients with underlying conditions. For those with a history of glaucoma or OHT, preoperative strategies were implemented to ensure IOP was maintained within normal limits prior to surgery. This involved the use of topical medications or surgical interventions as required to achieve effective IOP control. Furthermore, corticosteroids were withheld for two weeks following surgery in patients with fungal keratitis to prevent exacerbation of the infection. Patients diagnosed with infectious keratitis continued on antibiotic therapy to manage ongoing infections, while those with viral infections were administered both topical and systemic antiviral agents to ensure comprehensive viral suppression. The follow-up assessments were systematically scheduled at predefined intervals: one week before surgery, as well as at one week, one month, six months, one year, and two years after surgery .
For patients who developed significant postoperative complications, such as persistent epithelial defects, elevated intraocular pressure, or infectious keratitis, additional follow-up evaluations were conducted at three, nine, or eighteen months to ensure comprehensive monitoring and timely management . Throughout all follow-up periods, IOP was closely monitored using a handheld iCare tonometer (iCare TA01i, Helsinki, Finland). Each IOP measurement was performed three times, and the average value was used to assess postoperative OHT. Based on our previous research, we identified 23 risk factors that may be associated with postoperative OHT following corneal grafts . These predictors are systematically categorized into three primary groups: recipients' parameters, which encompass various patient-specific factors; donors' characteristics, which include important details about the donor cornea; and surgery-related variables, which pertain to specific aspects of the surgical procedure. The primary outcome of the study was the occurrence of OHT, defined as a postoperative IOP greater than 21 mmHg (average of three measurements) or an increase of more than 10 mmHg over the baseline IOP, regardless of whether antiglaucoma medication or surgical intervention was required . Data processing Establishment of the nomogram prediction model Assessment of the nomogram prediction model Given the limitations of a small sample size and the absence of external validation, bootstrap resampling with 500 iterations was employed to ensure the robustness and generalizability of the model, particularly in the receiver operating characteristic (ROC) curve analysis. Time-dependent ROC analyses at 6, 12, and 24 months were performed using the "timeROC" package in R to assess the model's performance across different time intervals. The model's discriminative ability was quantified by the C-index, and calibration curves were generated using the "rms" package to evaluate the accuracy of the predictions. The clinical utility of the nomogram was assessed through decision curve analysis (DCA), implemented via the "dca.R" script. Survival curves were plotted using Kaplan-Meier survival analysis and compared using the log-rank test, facilitated by the "survminer" and "survival" packages in R. Specific analyses related to LASSO, the nomogram, ROC curves, C-index, survival curves and DCA were conducted using R version 3.2.4. Continuous variables were expressed as means ± standard deviation, while categorical variables were analyzed using chi-square tests. All statistical tests were two-sided, with a significance level set at P < 0.05. Data processing and initial statistical evaluations were performed using IBM SPSS Statistics version 22.0. The development of the nomogram prediction model followed a structured two-step approach. Initially, the least absolute shrinkage and selection operator (LASSO) method, implemented via the "glmnet" package in R, was utilized to identify key predictive features. These selected features were subsequently incorporated into a multivariate Cox regression analysis to determine the independent predictors of OHT. A nomogram was then constructed using the "rms" package in R, based on the significant predictors identified in the Cox regression analysis.
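As a concrete illustration of this two-step workflow (and of the OHT definition above), the following R sketch uses the packages named in the text; the data frame `d`, its variable names, and the time/event columns are hypothetical placeholders rather than the study data.

```r
library(glmnet)
library(rms)
library(survival)

# OHT flag per the stated definition: mean of three IOP readings > 21 mmHg,
# or a rise of more than 10 mmHg over the baseline IOP
oht_event <- function(iop_readings, baseline_iop) {
  m <- mean(iop_readings)
  (m > 21) | ((m - baseline_iop) > 10)
}

# Step 1: LASSO-penalised Cox model (glmnet) for feature screening
x <- model.matrix(~ prior_pkp + viral_disease + ocular_trauma + intraop_plan +
                    donor_diabetes + hypopyon + neovascularization + age,
                  data = d)[, -1]
y <- with(d, Surv(months, oht))
cvfit <- cv.glmnet(x, y, family = "cox", alpha = 1)  # cross-validated lambda
cf    <- coef(cvfit, s = "lambda.min")
rownames(cf)[as.numeric(cf) != 0]                    # predictors retained for step 2

# Step 2: multivariable Cox model (rms::cph) on the retained predictors,
# then a nomogram expressed as predicted 24-month OHT risk
dd <- datadist(d); options(datadist = "dd")
fit <- cph(Surv(months, oht) ~ prior_pkp + viral_disease + ocular_trauma + intraop_plan,
           data = d, x = TRUE, y = TRUE, surv = TRUE)
surv_fun <- Survival(fit)
nom <- nomogram(fit, fun = function(lp) 1 - surv_fun(24, lp),
                funlabel = "Predicted 24-month OHT risk")
plot(nom)
```

The nomogram's `fun` argument converts the Cox linear predictor into an absolute event probability at 24 months, which is what the plotted point scale reports.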
Demographic baseline characteristics LASSO and multivariate Cox regression results Probability of OHT after Fs-PKP Construction and Validation of the Nomogram Risk stratification based on the nomogram To assess the nomogram's ability to stratify Fs-PKP patients into risk groups, we calculated total points for each patient and used the X-tile program to find an optimal cutoff of 151.4. Patients were then divided into low- and high-risk groups. Kaplan-Meier analysis revealed that the high-risk group had a significantly higher OHT rate (P<0.001; Figure ), confirming the nomogram's effectiveness in distinguishing risk levels. As illustrated in Figure , which outlines the study enrollment process, 274 participants were initially considered; however, 4 were excluded for being under 18, 5 for having incomplete records, and 27 for failing to comply with follow-up. This resulted in a final cohort of 238 patients (238 eyes). The demographic and baseline characteristics of these patients are summarized in Table . The LASSO method identified eight risk factors with substantial predictive value for our model. These risk factors are prior PKP history, viral ocular disease history, ocular trauma history, intraoperative plan, donor diabetes history, presence of hypopyon, presence of corneal neovascularization, and patient age (Figure c). Incorporating these predictors into the model significantly reduced the mean-squared error, enhancing the model's accuracy (Figure a). The LASSO linear regression model further validated the significance of these factors, with each demonstrating non-zero coefficients, thus confirming their statistical relevance in outcome prediction (Figure b). Multivariate Cox regression analysis refined the predictive model, identifying four independent predictors of OHT occurrence after Fs-PKP: prior PKP history (HR=2.39, 95% CI (1.40, 4.11), P=0.002), viral ocular disease history (HR=2.16, 95% CI (1.31, 3.58), P=0.003), ocular trauma history (HR=1.91, 95% CI (1.14, 3.18), P=0.013), and the intraoperative plan (HR=1.87, 95% CI (1.09, 3.22), P=0.023) (Figure d). Figure illustrates the cumulative probabilities of developing OHT at various time points after Fs-PKP based on the dataset: an 18.07% OHT rate at 6 months (43/238), 31.09% at 12 months (74/238), and 31.93% at 24 months (76/238). Stratified log-rank tests were conducted on independent prognostic factors from the Cox regression analysis to evaluate differences in OHT rates (Figure b-f). Significant variations in OHT rates were observed in relation to prior PKP history (P=0.004), viral ocular disease history (P=0.009), ocular trauma history (P=0.009), and the intraoperative plan (P=0.003). We constructed a predictive model (Figure a) to estimate the risk of OHT following Fs-PKP surgery, utilizing the multivariate Cox regression results. Time-dependent ROC analysis at 6, 12, and 24 months showed area under the curve (AUC) values ranging from 0.677 to 0.756 (Figure b), which were corroborated by the bootstrap ROC results of 0.648 to 0.747. The nomogram demonstrated a C-index of 0.705 (95% CI: 0.646-0.764), which remained consistent at 0.705 (95% CI: 0.648-0.747) after 500 bootstrap resamples, underscoring its reliability (Figure c). Calibration curves (Figure d) validated the nomogram's precision in predicting clinical outcomes. DCA demonstrated that the model provided optimal net benefit within risk thresholds of 20% to 70%, showing greater clinical advantage when all predictive factors were incorporated, compared to individual factors (Figure e-f).
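The validation metrics reported here can be reproduced along the following lines; again, the data frame `d` and the column `lp` (the nomogram's linear predictor or total points) are assumed placeholders, not study data.

```r
library(timeROC)
library(survival)

# Time-dependent ROC at 6, 12 and 24 months for the model's linear predictor
roc <- timeROC(T = d$months, delta = d$oht, marker = d$lp,
               cause = 1, times = c(6, 12, 24), iid = TRUE)
roc$AUC  # one AUC per requested time point

# Bootstrap resampling (500 iterations) of Harrell's C-index
set.seed(2022)
c_boot <- replicate(500, {
  i <- sample(nrow(d), replace = TRUE)
  f <- coxph(Surv(months, oht) ~ lp, data = d[i, ])
  unname(f$concordance["concordance"])
})
quantile(c_boot, c(0.025, 0.975))  # bootstrap 95% CI for the C-index
```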
In this study, we have innovatively developed and validated a nomogram to predict the risk of OHT in Chinese patients after Fs-PKP. The model integrates essential factors, including prior PKP history, viral ocular disease history, ocular trauma history, and intraoperative planning. By encompassing these pivotal variables, our nomogram offers clinicians a valuable and comprehensive tool to assess and mitigate the risk of postoperative OHT, ultimately improving patient care and surgical outcomes. OHT or glaucoma is a common complication following corneal transplantation, with reported global prevalence rates ranging from 5.5% to 68% , . This considerable variability is largely due to the lack of standardized diagnostic criteria. A major challenge in defining postoperative glaucoma lies in the difficulty of assessing visual field and optic nerve changes after surgery. To address these challenges, our study focused on examining the prevalence of OHT after Fs-PKP. One of the primary difficulties in defining OHT following Fs-PKP is the inconsistency that can arise from using different techniques to measure IOP. Although Goldmann applanation tonometry (GAT) is generally regarded as the gold standard for IOP assessment , its accuracy is reduced in eyes with altered corneal structures. Factors such as increased corneal thickness, pronounced astigmatism, irregular corneal surfaces, abnormal corneal rigidity, and modifications at the graft-host interface can all undermine the precision of GAT measurements . Given these limitations, we opted to use the iCare tonometer for IOP assessment in our study. Prior research has shown that iCare achieves acceptable agreement with GAT in both normal eyes and those with post-PKP corneal edema, while demonstrating less correlation with central corneal thickness and corneal curvature. This indicates that iCare may provide a more reliable IOP measurement in post-PKP patients, making it an appropriate tool for our study . Moreover, a meta-analysis involving 27,146 corneal transplant patients highlighted the diversity in IOP measurement methods used postoperatively. Although various instruments such as GAT, Tono-Pen, Mackay-Marg electronic applanation tonometer, and pneumotonometry were employed, most studies did not specify the measurement techniques utilized. Despite this variability, these studies consistently defined OHT as a postoperative IOP exceeding 21 mmHg or an increase of more than 10 mmHg from baseline . Considering this consensus and the limitations of other approaches, we adopted this definition in our study. By utilizing iCare for IOP assessment and adhering to this standardized definition, our findings revealed that the prevalence rate of OHT in Chinese patients undergoing Fs-PKP was 18.07% at 6 months, 31.09% at 12 months, and 31.93% at 24 months. This study provides a more accurate assessment of OHT prevalence rate in this specific patient population, considering the unique factors associated with Fs-PKP and the Asian demographic, thereby offering a valuable contribution to the field and addressing the limitations of previous research. Prior research has largely linked post-keratoplasty OHT to intraoperative alterations, such as the deformation of the anterior chamber angle and the collapse of the trabecular meshwork . These alterations frequently result from the loss of structural support from Descemet's membrane, culminating in heightened outflow resistance and increased IOP . 
Although Fs-PKP is considered to have a lower likelihood of altering the anterior chamber morphology intraoperatively - , our investigation still identified several risk factors, including a history of prior PKP and ocular trauma, and specific intraoperative strategies, which correspond to this mechanism. A history of PKP may compromise the structural integrity of the anterior chamber, leading to mechanical instability and subsequent angle changes. Ocular trauma has the potential to inflict damage on the trabecular meshwork and create peripheral anterior synechiae, thereby aggravating these structural alterations. Furthermore, combining PKP with other surgical interventions can profoundly impact the anterior chamber's configuration, elevating the likelihood of post-keratoplasty OHT. In addition to these mechanical factors, viral infections such as cytomegalovirus and herpes simplex virus have been associated with postoperative ocular hypertension through pathways involving inflammation and possible damage to the trabecular meshwork . These viruses can persist latently in the trabecular meshwork, leading to ongoing inflammation and elevated IOP. Despite administering both topical and systemic antiviral treatments to patients with a known history of these viral infections, as mandated by ethical considerations, we still found a significant association between these viral infections and OHT. This emphasizes the refractory nature of viral-induced OHT and highlights the challenges in managing such cases. Our findings reinforce the conventional understanding that the anatomy of the anterior segment and underlying primary conditions are pivotal in the etiology of post-keratoplasty OHT. Despite previous literature emphasizing preoperative glaucoma as a significant risk factor for postoperative OHT following corneal transplantation, our study presents a different perspective . In our cohort of patients undergoing Fs-PKP, the majority were preoperatively diagnosed with corneal leukoma (83 out of 238) and corneal ulcers (117 out of 238), conditions that are not typically linked to a history of elevated IOP. Previous literature has often highlighted iridocorneal endothelial (ICE) syndrome as a condition associated with preoperative glaucoma, suggesting a significant risk for postoperative OHT . However, we consider ICE syndrome to be more suitably treated with endothelial keratoplasty rather than Fs-PKP. As a result, only 10 patients in our study had a documented history of glaucoma. Moreover, in line with ethical considerations and to ensure surgical safety, we prioritized controlling IOP to within normal ranges for all patients before they underwent Fs-PKP. These factors may help explain why our study did not find a significant correlation between preoperative glaucoma and the occurrence of postoperative OHT, offering a different perspective from previous studies. In conventional PKP, anterior chamber hypopyon and corneal neovascularization are frequently reported as risk factors for postoperative OHT . Our study initially identified these factors as significant through LASSO regression analysis. However, considering the generally healthier ocular conditions of Fs-PKP patients and the administration of appropriate treatments, these factors may not independently contribute to OHT when evaluated alongside other variables. 
Additionally, younger patients have been reported to be more susceptible to developing OHT postoperatively, likely due to their heightened sensitivity to corticosteroids, a common cause of elevated IOP after corneal surgery . In our study, while age under 65 years was significant in the LASSO regression analysis, it did not emerge as an independent predictor of OHT when other variables were considered. Furthermore, large graft sizes have also been reported as a risk factor for elevated IOP following surgery . However, it is important to note that due to the relatively small number of patients with large graft sizes in our cohort, this factor was not selected in the LASSO regression analysis. These findings suggest that the common risk factors for OHT in conventional PKP, such as anterior chamber hypopyon, corneal neovascularization, younger age, and large graft sizes, may not play the same role in Fs-PKP. This underscores a critical difference between conventional PKP and Fs-PKP and offers a more refined understanding of OHT risk factors specific to Fs-PKP. Nomograms offer significant advantages over traditional epidemiological methods by integrating multiple predictors into a unified, individualized risk assessment model, enabling more precise and personalized predictions crucial for clinical decision-making - . Historically, nomograms for keratoplasty have predominantly targeted graft failure as the primary outcome - , whereas our nomogram specifically addresses the incidence of postoperative OHT. By incorporating multiple readily available clinical variables, our nomogram markedly enhances the accuracy and reliability of preoperative OHT risk assessment. One of the key advantages of our nomogram is its ability to provide preoperative risk stratification. By calculating cutoff values, we were able to categorize patients into distinct risk groups. Our findings demonstrated that patients in the high-risk group were more likely to develop OHT postoperatively, underscoring the clinical utility of our nomogram in identifying patients who may require closer monitoring and early intervention. Several methodological strengths further bolster the effectiveness of our nomogram. The use of the LASSO model for variable selection prior to performing multivariate Cox regression effectively reduces multicollinearity among variables, enhancing the interpretability and robustness of our findings. Additionally, applying bootstrap resampling techniques mitigates limitations associated with small sample sizes. Through 500 bootstrap iterations, we ensured that our model's predictive performance remained robust and consistent, as evidenced by the stable C-index. This rigorous approach improves the generalizability of our findings to a broader patient population. Calibration curves confirm the precision of our predictive model in forecasting clinical outcomes. DCA indicates that the model offers optimal net benefit within risk thresholds of 20% to 70%, demonstrating greater clinical advantage when all predictive factors are considered together rather than individually. This facilitates more accurate risk stratification and personalized treatment plans, ultimately improving patient outcomes. While our study has several strengths, certain limitations need to be acknowledged, along with corresponding recommendations for future research. First, the early phase of Fs-PKP adoption in China limited the number of eligible patients and the follow-up duration, resulting in a two-year observation period.
While this relatively short timeframe may not fully capture long-term outcomes, our findings suggest minimal variation in the incidence of OHT between one and two years postoperatively, indicating relative stability of the anterior chamber angle during this period. Thus, the two-year follow-up provides a reasonable observation window for this study. Future studies should extend follow-up periods to evaluate long-term efficacy and identify late-onset complications. Second, the single-center nature of our study could introduce biases related to specific clinical protocols and patient demographics, potentially restricting the generalizability of our findings to other geographic and clinical contexts. Multicenter studies are needed to validate these results across diverse populations and clinical settings. Finally, our study did not leverage advanced technologies such as deep learning and artificial intelligence, which could enhance decision-making in postoperative IOP management. Future work could develop web-based nomograms for real-time personalized risk assessment and treatment recommendations. Machine learning could analyze large datasets to better identify high-risk patients and optimize interventions, while deep learning could integrate imaging data (e.g., anterior chamber angle and corneal topography) with clinical metrics to improve prediction accuracy. By addressing these limitations through extended follow-ups, multicenter studies, and advanced analytical tools, future research can refine OHT management strategies for Fs-PKP patients. In conclusion, our study introduces a novel nomogram for predicting OHT risk after Fs-PKP in Chinese patients. By incorporating key factors like prior PKP, viral ocular disease, ocular trauma, and intraoperative strategies, this model improves the accuracy of OHT risk assessment and offers a valuable tool for clinical decision-making.
The relationship between personality traits and individual factors with perinatal depressive symptoms: a cross-sectional study
31a8cd9b-5693-4bcb-bbd4-16883338133a
10210385
Gynaecology[mh]
Pregnancy is a major event involving neuroendocrine, anatomical, psychological and relational changes in the life of women. This complex transition can be characterized by vulnerability to mental disorders. Many clinical conditions such as anxiety, mood and psychotic disorders can affect women in the perinatal period, and perinatal depression (PND) shows a high prevalence : across nations and cultures, PND has raised significant concerns due to prevalence rates ranging from 5 to 25% of women in the perinatal period . Unlike the common, mild, and self-limiting "baby/maternity blues", PND is an impairing depressive episode occurring during pregnancy or up to 12 months after childbirth . PND is especially alarming due to its multiple consequences affecting the mother as well as the couple and the mother-child relationship, with potentially severe effects on the child's development. Extensive studies have shown a causal relationship between PND symptoms and children's emotional development disorders and lower emotion recognition , neuropsychological and cognitive deficits , internalizing and externalizing disorders , sleep disturbances , persistent lower growth and worse long-term mental health outcomes . Ultimately, severe untreated PND can lead to self-harm, harm to the infant and, in tragic cases, to suicide and infanticide . Existing evidence highlights several factors related to PND. As is well known to researchers and clinicians, the major neuroendocrine modifications happening during and after pregnancy play a role in the onset of PND . In addition to such biological factors, a growing body of evidence highlights the role of psychological aspects in PND. Patients' remote and recent anamnesis has been related to PND in well-designed multivariate models. Specifically, a history of childhood trauma, poor marital quality, singlehood, domestic abuse, younger age, poor social support, and low socioeconomic status have been observed to be linked to PND symptoms . Such evidence is consistent with epidemiological data on adverse childhood experiences, showing that certain early experiences (including exposure to psychological, physical, and sexual abuse as well as household dysfunction such as domestic violence, substance use, mental diseases, incarceration) are cumulatively related to greater degrees of adult illness burden . Interestingly, patients' personality has also been related to the onset of PND . As frequently observed by specialists in clinical practice, certain personality traits such as dependence, obsession, neuroticism, and severe self-criticism are related to PND symptomatology . Emerging evidence has also focused on the partner's role and on how several aspects of the future father are closely intertwined with maternal mental health. The evidence shows a meaningful impact of fathers' anxiety and depression, romantic and father-infant attachment style, material support, and co-parenting skills on the risk and evolution of maternal PND . Little research has tested mediating and moderating etiopathogenetic models leading to PND, and, to the best of our knowledge, no studies to date have examined the possible mediating role of personality traits in the relation between patients' family-of-origin history and PND symptoms. More specifically, early experiences and personality features are thought to be interrelated , and it is possible that the experience of motherhood may interact with the woman's own experience of being a child taken care of .
Existing research accounting for the role of the family of origin in the prediction of PND suggests that poor family relationships during childhood and married life, as well as insufficient support from the family during pregnancy, increase the odds of PND in multivariate models . Considering the above, it is possible that certain characteristics of the family of origin can influence women's personality, and that the interaction between such early experience and the occurrence of maladaptive forms of personality can ultimately lead to more severe depressive symptoms in the perinatal period. To elucidate the relationship between personality and depression in the perinatal period and to test the above hypotheses, the first aim of the present study was to explore personality traits and individual factors independently associated with depressive symptoms in women in the perinatal period. The second aim of the present study was to test the mediating role of dysfunctional personality features in the relation between dysfunctional characteristics of the family of origin and severity of depressive symptoms; the findings of the first objective led us to consider the constructs of neuroticism and of having a parent affected by a psychiatric disorder within the analyses related to the second objective. Subjects Measures Statistical analysis Descriptive statistics are reported as numbers and proportions for categorical variables, and as means and standard deviations for continuous variables. Prior to analysis, data were screened for missingness. For the 39 cases showing some missing values (0.263% of the dataset, with the Big Five and EPDS data being 100% complete), automatic multiple imputation analysis was used to impute missing data. An exploratory bivariate Spearman correlation analysis was performed to select, among the 24 collected individual and clinical variables, those significantly (p < .05) related to EPDS. Such variables were then used to construct a multivariate linear regression model to investigate factors independently associated with EPDS total score (dependent variable, objective 1). The data used in the regression respected the basic assumptions of a linear regression model, which were tested (i.e., homoscedasticity, normal distribution of errors, absence of multicollinearity). Finally, based on the above-mentioned hypothesis and on the results of the correlation and regression analyses, we evaluated (using the original, non-imputed dataset) whether neuroticism (a dysfunctional personality feature found to be independently associated with EPDS) can operate as a mediator in the relationship between a history of parental psychiatric disorder in the family of origin (a dysfunctional characteristic of the family of origin found to be bivariately, but not independently, associated with EPDS) and depressive symptoms (objective 2). Such analysis was performed using the PROCESS macro (model 4) for the Statistical Package for Social Science, with a 10,000-sample bias-corrected bootstrapping procedure . In the model, EPDS total score was the dependent variable, neuroticism was the mediator, and parent's psychiatric disorder in the family of origin was the independent variable. Those variables which were independently associated with EPDS total score in the regression model (or which approached significance, p < .06) were included as covariates in the model.
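For readers working in R rather than SPSS, the mediation analysis just described (an analogue of PROCESS model 4 with bias-corrected bootstrapping) could be sketched as follows; the data frame `d` and all variable names are hypothetical placeholders, not the study dataset.

```r
library(mediation)

# Path a: parental psychiatric disorder -> neuroticism (with covariates)
fit_m <- lm(neuroticism ~ parent_psy + couple_conflict + partner_psy, data = d)
# Paths b and c': mediator and predictor -> EPDS total score
fit_y <- lm(epds ~ parent_psy + neuroticism + couple_conflict + partner_psy,
            data = d)

set.seed(1)
med <- mediate(fit_m, fit_y,
               treat = "parent_psy", mediator = "neuroticism",
               boot = TRUE, boot.ci.type = "bca", sims = 10000)
summary(med)  # ACME = indirect effect; ADE = direct effect; Total Effect
```

The ACME here corresponds to the indirect (a × b) effect that PROCESS reports with its bias-corrected bootstrap confidence interval.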
The current study represents part of a larger clinical collaboration between the Perinatal Psychiatry out-patients Service and the Child-Neuropsychiatry Unit of "Umberto I" hospital of Rome. The project focuses on prevention and treatment of maternal mental disorders . Recruitment started in February 2019 and ended in February 2020. It took place in the gynecology department of Policlinico Umberto I University Hospital, which is the largest public hospital of the metropolitan city of Rome. Recruitment was mainly performed by two psychiatrists from the Perinatal Psychiatry Service of the same Hospital: once a week, every woman consecutively admitted to the gynecology department for routine visits in the perinatal period was presented with the project and invited to take part in the study. A minority of cases were also invited to participate when the personnel of the gynecology unit requested a psychiatric evaluation. Inclusion criteria were: (1) being pregnant or within six months of giving birth; (2) accepting to take part in the research and give informed consent. Exclusion criteria were: (1) diagnosis of moderate/severe cognitive impairment; (2) insufficient Italian language skills; (3) presence of severe acute psychiatric conditions requiring hospitalization (including suicidality). More in detail, eligible women were indicated by the gynaecologist and, following the gynaecological assessment, approached by the psychiatrist, who explained the study aims and procedures and gathered informed consent from those who accepted to participate. Participants were administered the assessment tools as one bundle, to fill in privately as part of the gynaecological visit in one of the ward's rooms. In case of doubts on specific items or other questions, participants could refer to the psychiatrist. Sociodemographic variables and pregnancy-related variables were gathered from the gynaecological clinical chart as well as from a specifically designed sociodemographic data collection sheet. A statistical power analysis was performed to determine the sample size using G*Power 3.1 software, based on the study objectives. For the first objective of the study, considering 24 potential predictors (as described below), the calculation indicated that, given a probability level of 0.05, a sample size of 169 was needed to provide satisfactory statistical power (1 − β = 0.80) to identify a medium effect size (f² = 0.15) in a linear multiple regression model. The second objective of the study was to test the mediating role of neuroticism personality features in the relation between characteristics of the woman's family of origin and severity of depressive symptoms; according to the sample size guidelines of Fritz and MacKinnon , in the mediation model, assuming medium effect sizes for two distinct pathways of the model (i.e., the effect of the independent variable on the mediator, and the effect of the mediator on the dependent variable), the analyses required a minimum sample size of 78 with the bootstrapping procedure to provide a statistical power of 0.80. Therefore, the current sample size (n = 241) was considered to provide satisfactory statistical power in relation to the study objectives. All participants gave their written informed consent to participate. The study received the approval of the Local Ethical Committee.
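The regression power calculation described above can be re-computed in R with the pwr package in place of G*Power; the figure it returns should land near the 169 reported in the text (the function and parameters below mirror the stated assumptions, not the authors' exact G*Power settings).

```r
library(pwr)

# Objective 1: linear multiple regression with u = 24 candidate predictors,
# medium effect size f2 = 0.15, alpha = 0.05, target power = 0.80
res <- pwr.f2.test(u = 24, f2 = 0.15, sig.level = 0.05, power = 0.80)

# Required sample size: N = v + u + 1 (denominator df + predictors + intercept)
ceiling(res$v) + 24 + 1  # approximately 169
```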
At the end of the assessment, patients showing clinical signs of significant mental distress/mental disorders were offered further clinical assessment in the out-patients service for perinatal psychiatry. The following information was collected: - sociodemographic variables: age, education (elementary-, middle-, high-school, and higher education), relational status (together, separated/alone), professional status (employed/unemployed); - clinical variables (yes/no): anamnesis of chronic medical illnesses, anamnesis of mental disorders, ongoing psychotherapy, use of psychiatric pharmacological therapy, recent significant griefs, psychiatric disorders in the family of origin (parents only), history of family conflict, partner's mental disorders, and couple conflict; - pregnancy-related variables: number of previous pregnancies, previous voluntary interruption of pregnancy (VIP) (yes/no) or miscarriages (yes/no), use of alcohol (yes/no) or tobacco (yes/no) during pregnancy, pregnancy complications (yes/no). The Edinburgh Postnatal Depression Scale (EPDS) was used to assess the presence and severity of PND symptoms. The EPDS is a cross-culturally validated 10-question form that a woman can complete in 2 to 3 min . The total score can vary between 0 and 30. A score above 10 indicates "possible depression", while patients scoring 12 or more are considered to be suffering from a depressive illness of varying severity. The last item of the test enquires about suicidality and is used to identify patients with any degree of this symptom (i.e., scoring > 0). Cronbach's α of the EPDS for this sample was 0.835, indicating good reliability . The Big Five Inventory (BFI) was used to assess main personality traits. It is a self-administered questionnaire consisting of 44 items measuring the main characterizing traits according to the "Five Factor Model of Personality" . They include: extraversion (8 items, which identify traits such as sociability, assertiveness and positive emotionality) (Cronbach's α = 0.756), agreeableness (9 items, referring to characteristics such as altruism, trust and the tendency to provide support) (Cronbach's α = 0.655), conscientiousness (9 items, referring to the ability to control impulses and self-discipline) (Cronbach's α = 0.771), neuroticism (8 items, referring to traits of anxiety and negative emotionality) (Cronbach's α = 0.755), and openness (10 items, identifying characteristics such as openness to experience, intellectual curiosity, aesthetic sense) (Cronbach's α = 0.780). Each item consists of a short sentence, and patients were asked to assign a score on a scale from 1 = not at all agree to 5 = completely agree. Also, due to the different number of questions for each trait, each one has a different maximum score . The total sample consisted of n = 241 women in their perinatal period. Table shows descriptive statistics of the sample. Mean age was 33.88 years (SD = 5.41; min = 16; max = 54). Only four (1.7%) patients were fully single mothers. Roughly half of the sample had a higher education, and the vast majority (82.2%) was employed. Although almost one out of five had a history of mental disorders, only a few were under treatment with either psychotherapy or pharmacological therapy (7.9% and 6.2%, respectively). One in four patients reported a history of parental psychiatric disorder in the family of origin, and one patient out of ten described conflictual relations in the original family.
By contrast, 21 (8.7%) patients reported a partner's psychiatric disorder, and the same percentage reported a conflictual relation with their partner. Table shows descriptive results related to the outcomes of interest. Overall, EPDS total score had a mean of 5.85 (SD = 4.63; min = 0; max = 23). A total of 31 patients (12.9%) had an EPDS total score of 12 or more, suggestive of PND, and 12 (5%) reported suicidality as part of their condition. Personality traits are also presented in Table . As already mentioned, bivariate associations were tested with the Spearman correlation test. The following variables were significantly correlated with EPDS total score: previous psychiatric disorders (ρ = 0.165; p = .013); parent's psychiatric disorder in the family of origin (ρ = 0.223; p = .001); conflict in the family of origin (ρ = 0.151; p = .020); conflictuality with the partner (ρ = 0.232; p < .001); partner's psychiatric disorder (ρ = 0.137; p = .039); extraversion (ρ = -0.172; p = .007); agreeableness (ρ = -0.252; p < .001); conscientiousness (ρ = -0.243; p < .001); neuroticism (ρ = 0.463; p < .001). A table presenting all bivariate correlations is available upon request. The variables significantly associated with EPDS total score were then included in the multivariate regression model shown in Table . In the regression model with EPDS total score as dependent variable (Table ), couple conflict and neurotic personality traits were directly and independently associated with EPDS total score (respectively: B = 2.337, p = .017; B = 0.303, p < .001), i.e., they were associated with EPDS even when controlling for the other variables (previous psychiatric disorder, a history of parental psychiatric disorder in the family of origin, conflict in the family of origin, partner's psychiatric disorder, and the other analyzed personality traits). Having a partner with a psychiatric disorder approached significance (B = 0.82; p = .05). In order to test our hypothesis that personality features mediate the relation between characteristics of the family of origin and depressive symptoms, we built a mediation model using EPDS total score as the dependent variable, neuroticism (i.e., a dysfunctional form of personality which was independently associated with EPDS in the regression model) as the mediator, and parent's psychiatric disorder in the family of origin (i.e., a dysfunctional characteristic of the family of origin which was significantly associated with EPDS by Spearman correlation but was not independently associated with EPDS in the regression model) as the predictor. The model was controlled for the other variables independently associated with EPDS total score (or which approached significance) in the regression model (i.e., conflictuality with the partner, partner's psychiatric disorder). Figure graphically shows the results of the mediation model. The total effect of parent's psychiatric disorder in the family of origin on EPDS total score was significant (b = 2.3332; S.E. = 0.6487; p = .0004; 95% BCCI = 1.0545 to 3.6119), and neuroticism partially mediated the relation between parent's psychiatric disorder in the family of origin and EPDS total score, with an indirect effect of b = 0.9688 (S.E. = 0.3154; 95% BCCI = 0.3655 to 1.6074). In this research we studied individual features, personality traits and depressive symptoms in a group of women in their perinatal period.
Results showed that the personality trait neuroticism (measured by means of the BFI) and having a conflictual relation with the partner were directly and independently associated with PND symptoms (measured through the EPDS). Further, our hypothesis that personality (i.e., neuroticism) plays a mediating role in the relation between issues related to the family of origin and PND symptoms was supported; indeed, in the tested mediation model, the effect of the presence of a parent's psychiatric disorder in the family of origin on EPDS total score was significantly mediated by the personality trait neuroticism, even when the effects of partner-related factors were controlled for. In relation to the first objective of the study, the results of the regression model suggested that neuroticism is strongly related to PND symptoms. Consistently, in our sample there was no case of a participant scoring above the EPDS cut-off score for depression while showing a relatively low score on the BFI neuroticism scale. Building on this, couple conflict was also significantly related to EPDS, over and above confounding factors including a partner's psychiatric disorder, possibly suggesting that conflictuality, rather than a partner's psychiatric disorder per se, can play a role in perinatal depression. These findings raise the hypothesis that perinatal depression is a sui generis form of depression in which the symptomatology is closely related to personality and couple functioning. It is possible that, since pregnancy implies a profound redefinition of identity, dysfunctional aspects of personality in this period can facilitate the emergence of emotional distress. This is especially true when the partner is unable to provide containment and support within the couple. When these dysfunctional aspects are elicited, depressive symptoms can worsen, as maternal anxieties find adequate support neither at the intrapsychic level (due to maladaptive personality traits such as neuroticism), nor at the relational level (due to a problematic relation with the partner). Confirming the relevance of the family of origin for the structuring of personality and relational style and for PND-related symptoms, the mediation model implemented to address the second objective of the study suggested that having a parent affected by a psychiatric disorder can have a role in explaining a higher degree of neuroticism and, indirectly, increased depressive symptoms. As already mentioned in the Introduction, early experiences and personality features are interrelated, and it is possible that the psychological personality features related to the experience of pregnancy/motherhood may interact with the woman's own experience of being a child taken care of in the potential development of depressive symptoms. Of relevance, other factors involved in the complex picture of perinatal psychopathology, such as the neuroendocrinological modifications happening during each phase of pregnancy and in the post-natal period , have not been investigated in the present study, and future research is needed to link available knowledge on psychological and biological factors to understand the etiopathogenesis of PND and help clinicians in its prevention and treatment. The present results need to be interpreted in the light of some limitations.
First, the lack of longitudinal data does not allow conclusions to be drawn on risk factors leading to depressive symptoms; it only allows correlates of this condition to be elucidated. It would be of value to conduct long-term research on motherhood starting from the formation of the couple onward. Second, the anamnestic information provided by participants (e.g., presence or not of mental illnesses in the family of origin, conflicts with the partner) was not verified and was assessed without specific quantitative tools; also, the EPDS and BFI scales are self-report measures, which is a potential source of bias, such as social desirability and inaccuracy biases. Consequently, the data included in the current study reflect subjective rather than objective evaluations of participants' own individual and psychological features and should thus be interpreted with caution. Third, although the sample size is sufficient for the aims of our study, as suggested by a priori analyses, a multi-center study on a larger group could offer better representativeness of the general population. Fourth, the present study is on depressive symptoms among women in the perinatal period and is thus not specifically focused on PND (i.e., the sample also included subjects without PND). Lastly, no information was available on the patients who refused to participate in the study; therefore, no analyses of possible group differences between participants and non-participants could be performed. The present study also has some strengths: there was no a priori selection of patients, yielding a more representative pool of women turning to a gynecology clinic; the assessment tools (the EPDS and BFI scales) are widely validated and showed good Cronbach's α in the current sample; and the analyses were performed while statistically controlling for certain confounding factors. In conclusion, this study contributes to elucidating the complex relationships linking personal risk factors, personality, and depressive symptoms in women in the perinatal period. The study highlights the relevance of the couple relation and of neuroticism traits as factors related to depressive symptoms in women in the perinatal period, as well as the role of parents' psychiatric disorders in the family of origin on neuroticism and, indirectly, on PND symptoms. The present evidence highlights some relational, psychological, and familial features that could be the object of early screening to recognize cases at greater risk of PND . It is important to recognize such cases and to offer effective treatments, possibly involving the partner (such as couple therapy), in order to avoid negative outcomes for the entire family system, with potential short- and long-term effects on the development of the newborn.
Acute kidney injury and renal replacement therapy: terminology standardization
5bf55799-019a-4dab-9f77-c8ffee7ff083
9518623
Internal Medicine[mh]
Acute kidney injury (AKI) is a frequent complication in hospitalized patients, especially in ICUs, and still causes high rates of morbidity and mortality. AKI can also occur in outpatients and in the community, often related to socioeconomic and cultural conditions. Several terms with similar meanings have been used for AKI and its dialysis modalities, causing confusion and disparities among patients, nephrologists, healthcare institutions, private care companies, insurance companies, and government entities. These disparities can impact medical care, hospital organization and care, as well as the funding and reimbursement of AKI-related procedures. Thus, we worked on terminology and consensual definitions, including the definitions of AKI, acute kidney disease (AKD), and chronic kidney disease (CKD). Additionally, all dialysis modalities and extracorporeal procedures related to AKI, currently approved and available in the country, are addressed in this document by the AKI Department of the Brazilian Society of Nephrology.

Preferably, use the term Acute Kidney Injury, in order to maintain the acronym AKI, which is well established and widespread in our country. In addition, the term was approved in the Ibero-American Consensus on the Uniformity of Nomenclatures, observing the deliberations proposed by the Kidney Disease: Improving Global Outcomes (KDIGO) panel. An additional advantage of using the term "injury" is that it encompasses everything from initial cases with cellular and tissue functional alterations to cases of established anatomical lesions. It is therefore suggested to avoid the term acute kidney lesion (AKL), which would be restricted to cases of anatomical kidney lesion. The term "failure" should also be avoided, as it denotes a more advanced stage of renal dysfunction: kidney failure (KF) refers to a renal condition with a glomerular filtration rate lower than 15 mL/min/1.73 m² or in which dialysis is required. Renal replacement therapy (RRT) refers to any therapy associated with the process of replacing native kidney function, such as hemodialysis, peritoneal dialysis, and kidney transplantation. Stage 3D AKI is AKI requiring artificial renal support/dialysis; it is recommended to use the term 3D stage AKI instead of dialysis-dependent AKI. Artificial kidney support (AKS) encompasses all methods of artificial clearance.

Artificial kidney support therapies are classified by frequency and modality.

Frequency:
- Continuous: uninterrupted clearance method, using equipment with autonomy for uninterrupted operation of more than 24 hours.
- Intermittent: clearance method lasting less than 12 consecutive hours. Intermittent therapies are subdivided into:
- Conventional: lasting up to 6 hours.
- Prolonged: lasting from 6 to 12 hours.

Modalities (the terminology in this text is based on the dialysis methods approved by ANVISA up to the date of document preparation):

Extracorporeal blood purification:
- Conventional hemodialysis.
- Conventional ultrafiltration.
- Conventional hemodiafiltration.
- Prolonged hemodialysis.
- Prolonged ultrafiltration.
- Prolonged hemodiafiltration.
- Continuous hemodialysis.
- Continuous hemofiltration.
- Continuous hemodiafiltration.
- Continuous ultrafiltration.

Peritoneal dialysis:
- Intermittent manual peritoneal dialysis.
- Continuous manual peritoneal dialysis.
- Intermittent automated peritoneal dialysis.
- Continuous low-volume automated peritoneal dialysis.
- Continuous high-volume automated peritoneal dialysis.

Plasmapheresis:
- Therapeutic membrane plasmapheresis.
- Therapeutic plasmapheresis by centrifugation.

Hemoperfusion:
- Hemoperfusion to remove medium molecules, drugs, and toxins.
- Hemoperfusion to remove endotoxins.

Artificial liver support:
- Molecular adsorbent recirculating system (MARS): a recirculation system for molecular adsorption.
- Single-pass albumin dialysis (SPAD).
- Hemoperfusion to remove bilirubin and bile salts (uses the same system as hemoperfusion to remove medium molecules).
Note: therapeutic plasma exchange may also be considered a form of artificial liver support.

Extracorporeal gas exchange:
- Extracorporeal removal of carbon dioxide (CO2): extracorporeal CO2 removal (ECCO2R).

The main characteristics of each modality are summarized below.

Conventional and prolonged hemodialysis:
- Duration: up to 6 hours (conventional) or 6 to 12 hours (prolonged).
- Technique: hemodialysis.
- Removal mechanism: diffusion.
- Equipment: hemodialysis machine.
- Device: membrane filters with biocompatible polymer.
- Dialysis solution: polyelectrolytic concentrate for hemodialysis.
- Place of treatment: rooms for hemodialysis or intensive and semi-intensive care units, with ultrapure water.

Conventional and prolonged ultrafiltration:
- Duration: up to 6 hours (conventional) or 6 to 12 hours (prolonged).
- Technique: ultrafiltration.
- Removal mechanism: convection.
- Equipment: hemodialysis machine or machines capable of performing ultrafiltration alone.
- Device: membrane filters with biocompatible polymer.
- Filter type: low flow, high flow, medium partition (medium cutoff), or high partition (high cutoff).
- Dialysis solution: not applicable.

Conventional and prolonged hemodiafiltration:
- Duration: up to 6 hours (conventional) or 6 to 12 hours (prolonged).
- Technique: hemodialysis and hemofiltration.
- Removal mechanism: diffusion and convection.
- Equipment: hemodialysis machine with a module for hemodiafiltration.
- Device: membrane filters with biocompatible polymer.
- Filter type: high flow.
- Dialysis and replacement solution: polyelectrolytic concentrate for hemodialysis.
- Place of treatment: preferably in individualized rooms, hemodialysis rooms, or intensive and semi-intensive care units, with ultrapure water criteria.

Continuous hemodialysis, hemofiltration, and hemodiafiltration:
- Duration: > 12 hours.
- Technique: hemodialysis, hemofiltration, or hemodiafiltration.
- Removal mechanism: diffusion, convection, and adsorption.
- Equipment: continuous hemodiafiltration machine with autonomy for uninterrupted operation of more than 24 hours.
- Device: membrane filters with biocompatible polymer.
- Filter type: high flow and high partition (high cutoff).
- Dialysis and replacement solutions: specific electrolyte solutions for continuous therapies.
- Place of treatment: intensive and semi-intensive care units and surgical center.

Continuous ultrafiltration:
- Duration: > 12 hours.
- Technique: ultrafiltration.
- Removal mechanism: convection.
- Equipment: continuous hemodialysis machines or specific equipment for isolated ultrafiltration, with autonomy for uninterrupted operation of more than 24 hours.
- Device: membrane filters with biocompatible polymer.
- Filter type: low or high flow.
- Dialysis and replacement solution: not applicable.
- Anticoagulation: heparin (unfractionated or low molecular weight), citrate, or without anticoagulation.

Peritoneal dialysis (intermittent manual, continuous manual, intermittent automated, and continuous low- or high-volume automated):
- Duration: up to 12 hours (intermittent) or > 12 hours (continuous).
- Standard volume: up to 12 L (intermittent) or > 12 L (continuous).
- Technique: peritoneal dialysis.
- Removal mechanism: diffusion.
- Equipment: cycling machine for the automated modalities; note that the term "automated" implies the need for a specific machine for this therapy.
- Solutions: hypertonic glucose or icodextrin.
- Anticoagulation: not applicable.

Therapeutic membrane plasmapheresis:
- Technique: hemofiltration.
- Removal mechanism: convection limited by the size of the molecule.
- Equipment: hemodialysis or continuous hemodiafiltration machine with a plasmapheresis module.
- Device: membrane filters with biocompatible polymer.
- Filter type: plasma filter.

Therapeutic plasmapheresis by centrifugation:
- Duration: up to 6 hours.
- Technique: centrifugation.
- Removal mechanism: sedimentation by specific gravity.
- Equipment: centrifuge machine.
- Device and filter type: not applicable.
- Replacement solution: solution with human albumin, fresh frozen plasma, or cryoprecipitate.
- Place of treatment: intensive and semi-intensive care units.

Hemoperfusion for removal of medium molecules (0.5–58 kDa), drugs, and toxins, and hemoperfusion for endotoxin removal:
- Duration: 2–4 hours.
- Technique: hemoperfusion.
- Removal mechanism: adsorption.
- Equipment: specific machine for hemoperfusion, or hemodialysis or continuous hemodiafiltration machine connected in series with the pre- or post-filter extracorporeal circuit.
- Device: cartridge with synthetic resins with specific adsorptive capacity.
- Anticoagulation: citrate or without anticoagulation.
- Place of treatment: intensive and semi-intensive care units.

Molecular adsorbent recirculating system (MARS):
- Duration: 6–8 hours.
- Technique: hemodialysis and perfusion for regeneration of the albumin solution.
- Removal mechanism: diffusion and adsorption by albumin.
- Equipment: specific machine coupled to a continuous hemodiafiltration or hemodialysis machine.
- Device: membrane filters with biocompatible polymer, with a parallel circuit of an activated carbon cartridge and an ion-exchange cartridge for recirculation of the albumin solution.
- Filter type: low flow (for albumin regeneration) and high flow (for the passage of blood inside the capillary fibers, in countercurrent with the albumin solution).
- Solution: human albumin solution for the parallel circuit.

Single-pass albumin dialysis (SPAD):
- Duration: 6–8 hours.
- Technique: hemodialysis.
- Removal mechanism: diffusion.
- Solution: specific electrolyte solutions for continuous hemodiafiltration with the addition of human albumin.

Hemoperfusion for removal of bilirubin and bile salts:
- Duration: 2–24 hours.
- Technique: hemoperfusion.
- Removal mechanism: adsorption.
- Equipment: hemodialysis machines, continuous-procedure machines, or specific machines for hemoperfusion; the cartridges can also be connected to the extracorporeal circulation circuit or to the extracorporeal membrane oxygenation (ECMO) circuit, in which case no specific equipment is needed.
- Device: cartridges with resins or microspheres.
- Filter type: not applicable.
- Dialysis and replacement solution: not applicable when a specific hemoperfusion machine is used; specific electrolyte solutions when a hemodialysis or hemofiltration machine is used; specific electrolyte solutions for continuous hemodiafiltration when a continuous hemodiafiltration machine is used.
- Place of treatment: intensive care unit, semi-intensive care unit, and surgical center.

Extracorporeal carbon dioxide removal (ECCO2R):
- Duration: > 48 hours.
- Technique: hemodialysis.
- Removal mechanism: diffusion of gases.
- Equipment: specific machine, or continuous hemodiafiltration machine coupled to the oxygenating membrane.
- Device: oxygenating membrane.
- Filter type: not applicable.
- Sweep gas: oxygen.
- Anticoagulation: heparin (unfractionated or low molecular weight) or without anticoagulation.
- Place of treatment: intensive care unit.

The AKI Department of the Brazilian Society of Nephrology prepared this document in order to standardize the terminology and definitions related to AKI. Terminology and consensual definitions were addressed, including the definitions of AKI, acute kidney disease (AKD), and chronic kidney disease (CKD). Additionally, all dialysis modalities and extracorporeal procedures related to AKI currently approved and available in the country were described. The Brazilian Society of Nephrology hopes that this Consensus can standardize the terminology and provide technical support to all sectors involved in AKI assistance in Brazil.
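To make the duration-based frequency taxonomy above concrete, the following sketch classifies an artificial kidney support session by its length; it is purely illustrative (the function name and labels are ours, not part of the consensus text), and the boundary handling at exactly 12 hours is our assumption.

```python
def classify_aks_frequency(duration_hours: float) -> str:
    """Classify an artificial kidney support session by duration, following
    the consensus cutoffs: conventional intermittent (up to 6 h), prolonged
    intermittent (6 to 12 h), continuous (> 12 h, machines with > 24 h autonomy)."""
    if duration_hours > 12:
        return "continuous"
    if duration_hours > 6:
        return "prolonged intermittent"
    return "conventional intermittent"

# Example: a 10-hour session falls in the prolonged intermittent category.
assert classify_aks_frequency(10) == "prolonged intermittent"
```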
Long-term follow-up after ureteral reimplantation in children: a 12-year analysis
b4516045-5438-46c0-9235-e9ab1c3fafae
11910405
Surgery[mh]
Ureteral reimplantation (UR) is a well-established procedure in the management of vesicoureteral reflux (VUR) and primary obstructive megaureter (POM) in children, providing success rates of around 95–98% . However, postoperative follow-up practices vary substantially, influenced by institutional protocols and individual surgeon preferences. The American Urological Association's (AUA) recommendation of postoperative ultrasound follow-up recognizes the marginal risk associated with postoperative ureterovesical junction obstruction (pUVJO), but it does not specify the frequency and extent of ultrasound follow-up . Similarly, the European Association of Urology (EAU) recommends postoperative follow-up until after puberty, but without specifying diagnostic parameters or ultrasound intervals . Indeed, a postoperative ultrasound at the three-month mark without evidence of pUVJO has been suggested as a robust predictor of an uneventful postoperative course . As a result, some have advocated that asymptomatic children may not benefit from extended, routine follow-up [ – ]. However, these conclusions arise from relatively short-term follow-up analyses and rely primarily on imaging studies, neglecting the broader spectrum of potential long-term complications and changes after UR. For instance, recurrent febrile UTIs may serve as indicators of postoperative VUR, and the exceptionally rare late-presenting pUVJO, manifesting years after an otherwise uneventful postoperative period, remains widely unaddressed . Additionally, complications linked to underlying congenital anomalies of the kidney and urinary tract (CAKUT), such as reflux nephropathy (RN), arterial hypertension, and chronic kidney disease (CKD), should not be disregarded. The existing body of evidence, particularly with respect to thorough, longitudinal long-term follow-up after UR, remains notably limited, leaving a conspicuous void in the understanding of the evolving health of these children [ – ]. Recognizing the limitations in follow-up duration and study design of prior investigations, this study addresses the potential emergence of late-presenting complications and changes after UR. Our aim is to conduct a thorough assessment of our study population to identify vulnerable individuals who may benefit from standardized, longitudinal long-term follow-up. Study approval : The study received approval from the Cantonal Ethics Committee of Zurich (BASEC-No. 2022-02253). Population : We conducted a retrospective review of 135 medical records of children who underwent UR at our institution between January 2006 and June 2013. Inclusion required a minimum 10-year postoperative follow-up after UR for primary VUR with breakthrough febrile UTIs, secondary VUR after obstructive ureterocele incision, or POM. We enrolled cases with unilateral and bilateral UR for single-system kidneys as well as duplex kidneys. Each side was counted as a renal unit. Common sheath UR for duplex kidneys with complete ureteral duplication was counted as one renal unit. We excluded subjects with urogenital, anorectal, or bladder abnormalities and those with an underlying malignant or non-malignant systemic disease. Seventy-seven children had insufficient follow-up duration and were not included in this study. Surgical technique : Ureteral reimplantation was performed using either the Cohen or Politano–Leadbetter (P–L) technique, with the choice made intraoperatively at the surgeon's discretion. In duplex systems, ureters were reimplanted within a common sheath.
Ureteral tapering was performed as needed, typically for massively dilated, non-peristaltic ureters. Standard postoperative follow-up : We offer clinical, blood pressure, and ultrasound follow-up visits scheduled at 3, 12, and 24 months, and subsequently every four years until adolescence, typically around 16 years of age, when the need for transitioning to an adult urology practice is determined. BUN/creatinine measurement is not routinely ordered but is performed in cases of bilateral renal changes, inadequate renal growth, or a history of abnormal results in previous exams. Our institutional protocol mandates the continuation of antibiotic prophylaxis for three months following surgery. Postoperative VCUG is not performed routinely but is reserved for patients with recurrent UTIs after surgery. Bladder retraining strategies are not routinely included in follow-up but may be offered if concomitant lower urinary tract dysfunction is diagnosed. Primary outcome : Surgical complications, including pUVJO (due to a stenosed intravesical channel or retrovesical ureteral kinking), and febrile UTIs as indicators of postoperative VUR. We documented the time of diagnosis and the technique used for reoperation. Secondary outcome : Indicators of kidney impairment, such as parenchymal changes detected on ultrasound, arterial hypertension, or elevated levels of blood urea nitrogen (BUN) and creatinine. Data collection : Demographic data, including gender, age at surgery, and duration of follow-up, were collected. We recorded the presence of a single or duplicated kidney, VUR grade and laterality, SFU grade and laterality, POM, preoperative RN, the indication for surgery, previous urogenital interventions or surgeries, and intraoperative details including the surgical technique employed (Cohen, Politano–Leadbetter, common sheath UR), the need for ureteral tapering, the insertion of externalized ureteral catheters, and details of postoperative antibiotic prophylaxis. Medical histories were evaluated, categorizing symptomatic individuals as those with a confirmed febrile UTI or afebrile UTI (defined as abnormal urine analysis, a positive urine culture, and/or clinical signs suggestive of UTI). Additionally, we considered arterial hypertension, lower urinary tract dysfunction (LUTD) according to the committee of the International Children's Continence Society , increased serum creatinine/BUN, and estimated glomerular filtration rate (eGFR). Ultrasound evaluation : A senior board-certified pediatric radiologist assessed preoperative ultrasound images, one-year ultrasounds for short-term evaluation, ultrasounds at 5 ± 2 years for mid-term analysis, and those at ≥ 10 years for long-term follow-up. These assessments were performed in accordance with the Society for Fetal Urology (SFU) grading system . Additionally, parenchymal changes detected on ultrasound and indicative of RN were documented; these included renal cortical thinning, loss of cortico-medullary differentiation, focal cortical scars, and reduced renal size. For cases with follow-up exceeding ten years, the most recent ultrasound was considered for long-term sonographic follow-up. Data analysis : Due to the limited cohort size, we used descriptive statistics. Of the subjects assessed, 53 renal units in 34 children met the inclusion criteria. Our study cohort consisted of 15 boys and 19 girls, with a mean age at surgery of 2.6 years (range 0.4–7.2 years). The average duration of follow-up was 12.2 years, ranging from 10.0 to 15.2 years.
Indications for surgery included primary VUR in 34 units, secondary VUR after ureterocele incision in eight units, and POM in eleven units. Preoperative SFU grade was determined as grade I in 18, grade II in 19, grade III in eight, and grade IV in eight units. One renal unit showed no preoperative dilatation. Preoperative ultrasound revealed parenchymal changes suggestive of RN in three renal units. Procedures included 33 Cohen UR, twelve common sheath Cohen, six P–L, and two common sheath P–L. The study cohort and results are summarized in Table . Primary outcomes: Postoperative UVJ obstruction was diagnosed in three children, affecting four renal units (one bilateral procedure), within the first year following P–L reimplantation, with an average time to diagnosis of 8.3 months (range seven to nine months) after surgery. No cases of pUVJO were observed after Cohen procedures, and none occurred during mid- and long-term follow-up. Three of the four units with obstruction had preoperative grade IV hydronephrosis; one had grade II. All children with pUVJO were asymptomatic but exhibited persistent or worsening grade III or IV hydronephrosis, compared with the preoperative ultrasound, at the three-month follow-up (Table ). The diagnosis of pUVJO was confirmed by prolonged washout or drainage curves and a decrease in differential renal function on sequential renal nuclear functional studies (MAG3), guiding the indication for re-operation. A more detailed description of the findings follows: Obstruction in the P–L channel was found during the redo Cohen procedure in a child initially treated for POM with diminished split kidney function. In another child with secondary grade III VUR following ureterocele incision in the upper pole of a duplicated system, common sheath P–L reimplantation led to obstruction caused by a fibrous retrovesical upper pole ureter segment. Re-operation involved a successful laparoscopic ureteroureterostomy, anastomosing the obstructed upper pole ureter to the unobstructed lower pole ureter, leading to full resolution. This child developed constipation-associated LUTD during follow-up. The final case, a bilateral P–L with excisional ureteral tapering, was done for grade V VUR and recurrent febrile UTIs after failed endoscopic treatment with injection of dextranomer hyaluronic acid copolymer (DxHA). A bilateral redo Cohen effectively resolved the pUVJO, revealing kinking of the retrovesical ureter as the cause of obstruction on both sides. In three of the renal units with pUVJO, ureteral catheters were left in place for one week along with a suprapubic catheter. Nine children (seven girls and two boys), with twelve renal units (ten units in girls and two in boys), experienced postoperative febrile UTIs (Table ). Indications for UR were primary VUR and febrile UTIs in six units, VUR after ureterocele incision in four units, and worsening obstruction in POM in two units (Table ). Seven units had preoperative SFU grade I hydronephrosis, two had grade III, and one each had no hydronephrosis, grade II, or grade IV hydronephrosis. Preoperative interventions had been performed in five units, involving four ureterocele incisions and one endoscopic injection of DxHA. Surgical techniques included seven Cohen procedures and five common sheath Cohen procedures for duplicated systems. Ureteral tapering was done in one unit, and externalized ureteral catheters were left in place in four units for an average of seven days.
Six out of nine children became symptomatic within the first year after UR, including three UTIs despite antibiotic prophylaxis. The remaining three children presented with febrile UTIs four years post-surgery after previously uneventful postoperative courses. Five subjects with postoperative febrile UTIs had duplex kidneys. It is worth noting that all febrile UTIs were isolated events, and further diagnostic workup with VCUG was omitted. One child developed symptoms of LUTD (dysfunctional voiding) during follow-up, successfully treated with pelvic floor exercises and biofeedback. None of the cases with febrile UTIs had ultrasound evidence of RN by the conclusion of follow-up. Secondary outcomes: Three children, with four renal units, showed signs of RN on ultrasound. Among the four units, three had preoperative SFU grade II hydronephrosis, and one had grade I (Table ). One child with a duplex kidney, treated with common sheath Cohen for grade III VUR and recurrent febrile UTIs at 3.3 years, was diagnosed with lower pole RN only during late-term follow-up, twelve years after successful UR, with an uneventful postoperative recovery until then. The second child, with preoperative RN and a history of recurrent breakthrough UTIs in the setting of grade V VUR, developed mild CKD with an eGFR of 87 mL/min/1.73 m², calculated using the Cockcroft–Gault formula, recorded 12 years after a bilateral Cohen procedure performed at 2.9 years of age. Previous follow-up visits were uneventful, without febrile UTIs, and with normal blood pressure, creatinine, and BUN. The third child, with apparent preoperative RN, received Cohen reimplantation at age 4.9 years for grade III VUR and a history of multiple febrile UTIs. The postoperative follow-up was uncomplicated, with normal blood pressure measurements and creatinine/BUN levels, and no postoperative UTIs. Antibiotic prophylaxis for the individuals with RN was discontinued five months post-surgery on average. The remaining 33 renal units demonstrated uneventful postoperative courses over a mean follow-up period of twelve years (Table ). Preoperative hydronephrosis was categorized as SFU grade I in ten cases, grade II in 14 cases, grade III in six cases, and grade IV in three cases. This subgroup of uneventful courses included eight duplex kidneys. Indications for surgery were primary VUR with UTIs in 22 units, secondary VUR in three cases, and POM in eight cases. Preoperative interventions included ureterocele incision in three cases, endoscopic injection of DxHA in two cases, and previous uncomplicated upper pole heminephrectomy in two cases. Cohen was performed in 23 units, and common sheath Cohen in six units. P–L was done in three cases, and one common sheath P–L in a duplex kidney. Ureteral tapering was carried out during two procedures, and externalized ureteral catheters were left in place in four units for one week. Postoperative antibiotic prophylaxis was administered for an average of five months. Three subjects developed LUTD symptoms (dysfunctional voiding), not associated with UTIs and successfully treated with pelvic floor exercises and biofeedback. Our study found that pUVJO occurred in 7.5% of the renal units, with all diagnoses made within the first year post-surgery. Notably, all cases of pUVJO were exclusively associated with P–L reimplantation and were asymptomatic. Single episodes of postoperative febrile UTIs occurred in a quarter of our study population, predominantly during the first year and in girls.
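For reference, the Cockcroft–Gault creatinine-clearance estimate used for the eGFR value reported above takes the standard form

```latex
\mathrm{CrCl}\,[\mathrm{mL/min}] \;=\; \frac{(140-\mathrm{age}\,[\mathrm{years}])\times \mathrm{weight}\,[\mathrm{kg}]}{72\times S_{\mathrm{Cr}}\,[\mathrm{mg/dL}]}\times
\begin{cases} 0.85 & \text{female} \\ 1 & \text{male} \end{cases}
```

Note that this formula estimates creatinine clearance in mL/min and is not inherently normalized to 1.73 m² of body surface area, so expressing the result as mL/min/1.73 m² requires an additional body-surface-area correction.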
Notably, no recurrent febrile UTIs, indicative of recurrent VUR, were diagnosed. More than a decade after surgery, one child developed CKD on the basis of preoperatively known RN. During long-term follow-up, one child was diagnosed with RN, yet kidney function remained normal. No study participant developed arterial hypertension. The recently updated AUA guidelines recommend a renal ultrasound following UR; however, no specific timing or frequency was provided, owing to insufficient data for evidence-based recommendations on post-surgical follow-up . The recommendation emphasizes the potential risk of pUVJO, leading to impaired urinary drainage and subsequent renal damage . The AUA identified an overall pUVJO rate of less than 1% through an analysis of multiple articles, without distinguishing between operative techniques . If hydronephrosis resolves and RN is ruled out, optional annual follow-up is suggested until adolescence, incorporating blood pressure, height, weight, and urinalysis for proteinuria and UTI . The EAU recommends postoperative follow-up until after puberty but lacks specifics on diagnostic parameters or ultrasound intervals . In our study cohort, pUVJO was found more commonly than reported in the literature; however, it was never diagnosed during late follow-up, that is, beyond one year postoperatively. The uncertainty regarding the precise timing of pUVJO has been discussed in the literature . The onset of pUVJO remains less explored due to its rarity, necessitating multi-institutional studies for sufficient case accumulation and study power. According to the available evidence, the diagnosis of pUVJO is typically made between one week and six months after UR. For instance, Falkensammer et al. noted self-limiting pUVJO in 4/126 (3%) cases diagnosed at the 6-month follow-up, which remained clinically asymptomatic with spontaneous resolution of hydronephrosis during continued follow-up . Ellsworth et al. observed pUVJO in 3/108 (3%) cases, requiring cystoscopic JJ-catheter placement within one week post-surgery ; ultrasound findings were normal after JJ-catheter removal one month later . However, rare and historical case reports mention delayed pUVJO occurring years after surgery following uneventful postoperative recovery . Weiss et al. presented two cases of late pUVJO after P–L procedures, attributed to stenosis of the reimplanted intramural ureter; pUVJO was resolved by redo reimplantation and lysis of fibrotic tissue at the ureteral orifice, respectively . A literature review on post-UR hydronephrosis reveals significant variation in the definition of pUVJO, complicating the interpretation and comparison of the existing literature on pUVJO after UR . As also demonstrated in our cohort, pUVJO is more frequently reported following P–L than Cohen intravesical reimplantation. Known drawbacks include the risk of prevesical ureter kinking with subsequent obstruction, while this approach allows for the construction of a longer tunnel, which proves beneficial in severe dilatation . Combined evaluation of three major studies, comprising data from more than 1,200 renal units, reveals an estimated obstruction rate after P–L ranging from 1 to 4% [ – ]. Workarounds to reduce the risk of prehiatal kinking, such as the Mathisen and psoas hitch techniques, have been described [ – ]. In our cohort, one out of four units with pUVJO was associated with POM. Based on the existing literature, it appears unlikely that POM per se poses an elevated risk for pUVJO. Vereecken et al.
found no pUVJO among 52 surgically corrected POMs, including 22 with ureteral tapering . William et al. described a cohort of 31 reimplanted POMs with a 90% success rate and no reported pUVJO . Gimpel et al. investigated twelve subjects with POM undergoing surgical interventions, utilizing various techniques, and reported just one case of pUVJO requiring reoperation . Taking into account the characteristics of our cohort and the available evidence in the literature, the necessity of regular sonographic follow-up until adolescence for detecting pUVJO becomes questionable, particularly if the renal ultrasound conducted one year after UR reveals no signs of pUVJO, such as progressive or unremitting dilatation. Nine out of 34 children, equivalent to 26.5%, experienced febrile UTIs postoperatively, with seven of the nine cases occurring in girls. Six children developed symptoms within the first year after surgery. The high rate of postoperative UTIs is in line with studies providing larger case numbers that focus on pyelonephritis-free outcomes after VUR correction . In a retrospective review of over a thousand children, Nelson et al. found that 22% presented with at least one febrile UTI post-VUR correction, occurring on average 8.7 months post-surgery, with 73% of affected individuals being female . Multivariate risk analysis revealed that individuals of female gender, with preoperative VUR grade ≥ III, preoperative breakthrough UTIs, and apparent preoperative RN were more prone to developing postoperative pyelonephritis . Beetz et al. found that 18% of their 158 enrollees had at least one postoperative febrile UTI in the first ten years after VUR correction, with the highest probability during the initial postoperative year, especially in females . While RN, CKD, and end-stage renal disease can result from recurrent febrile UTIs in VUR, and thus could be preventable by timely surgical correction, they may also be part of CAKUT, predisposing children to kidney impairment . Despite four units showing RN, none of our enrollees developed arterial hypertension throughout follow-up. Still, blood pressure monitoring is of great importance, as arterial hypertension is common in pediatric and adult subjects with RN . A 15-year follow-up study revealed that 13% of pediatric subjects with RN developed arterial hypertension, with bilateral RN linked to earlier hypertension onset (age 22 vs. 30 for unilateral RN) . Due to the enduring risk of children developing RN despite corrected VUR, regular blood pressure and urine protein tests are warranted in primary care, providing benefits for affected children and optimizing healthcare resource allocation. Our study spanned a substantial duration of twelve years, allowing for longitudinal follow-up. This extensive timeframe provided us with the opportunity to conduct a comprehensive analysis of long-term surgical outcomes and to monitor the longitudinal development of kidney health within our study cohort. To the best of our knowledge, a longitudinal follow-up after UR of such duration has not been presented in the literature. This study has several limitations. The small cohort size is a primary concern: 101 of 135 patients had to be excluded, 77 of them due to the requirement of follow-up of at least 10 years. While this ensured robust longitudinal data, it limited the study's scope and generalizability.
Additionally, the heterogeneity of underlying pathologies complicates the interpretation of results, as children with different conditions may respond differently to surgery. Future studies should stratify patients based on their pathology and aim to identify subgroups at higher risk. Reflux nephropathy was diagnosed based on ultrasound imaging, as renal isotope studies were not routinely performed, potentially missing subtle scarring. Lastly, the observed high obstruction rate prompted an internal review of all patients who underwent UR in the last 20 years. This analysis revealed that all cases of pUVJO were captured within this cohort, suggesting selection bias; across the entire 20-year population, our overall pUVJO rate is less than 1%, aligning with rates reported in the literature. Children who experience uneventful recovery post-UR without signs of pUVJO within the first year demonstrate a low subsequent risk of surgical complications, suggesting that extended routine follow-up may be of limited necessity. However, our findings highlight vulnerable groups, including those undergoing the P–L procedure, those with preoperative RN, and girls, who are more prone to complications and may derive benefit from extended follow-up.
Machine learning‐based radiotherapy time prediction and treatment scheduling management
09d80c4b-d92c-498b-ba06-30d2874fd7a2
10476992
Internal Medicine[mh]
INTRODUCTION Cancer is among the most serious diseases worldwide, with high morbidity and mortality. In China, the treatment load and medical care resources are unbalanced due to the large population, and local medical units are burdened with many cancer patients. This burden of providing healthcare to cancer patients demands a professional health workforce, advanced medications, and specialized devices for cancer management. According to public data, 50%–60% of cancer patients require radiation therapy as part of their treatment strategies. In other words, as one of the three main approaches to cancer management (surgical treatment, chemotherapy, and radiotherapy), radiation therapy and the optimal use of radiotherapy resources play a significant role. Many patients who receive radiotherapy must wait a long time to start the course, and delays in radiation therapy can affect the effectiveness of the overall cancer treatment regimen. For this reason, some countries have introduced limits on waiting time for patients scheduled to receive radiotherapy. However, little progress has been made due to the low efficiency of treatment, and one of the most crucial reasons is the lack of efficient scheduling management. As shown in Figure , based on a study reported by the National Cancer Center of China, the mean duration between different phases of the radiotherapy process in the current workflow is a few days, while the mean waiting time to the first irradiation is more than 1 week. This suggests not only a shortage of radiation devices in China but also the importance of improving the utilization efficiency of these machines. Figure illustrates the efficiency of each process/phase in the radiotherapy workflow. Radiotherapy can be performed in two ways: external irradiation using machines or internal irradiation using brachytherapy devices. Most departments of radiation oncology in China are equipped with linear accelerators. Therefore, the scheduling management of linacs (i.e., how to take full advantage of linac resources) is the key problem to solve. In addition, improving the linac utilization rate has several benefits. First, the capacity of a linac in a single workday (i.e., the maximum number of patients per machine) can be increased: a significant amount of time is wasted during inter-patient switching, and this can be decreased by optimizing the treatment schedule. Furthermore, the workload (i.e., shifts or work hours) of radiotherapy specialists will be reduced if the machine is used more efficiently. More importantly, quality assurance for patients with acute cancer who undergo special radiotherapies [such as stereotactic body radiation therapy (SBRT), stereotactic radiation surgery (SRS), and adaptive radiotherapy (ART)] can be provided. However, the key requirement for appropriate treatment scheduling is an accurate estimation of the entire treatment time (TT) for each patient, including positioning. The factors that affect positioning and treatment time range from the cancer site, irradiation technique/mode, and immobilization method to the position verification approach; from the delivered dose per fraction to the dose rate the performing linac can afford; and from whether this is the first treatment fraction to whether image guidance (IGRT) is required. For this reason, multiple elements should be considered when assigning a specialized time slot to a patient during scheduling.
However, current common strategies for radiotherapy scheduling include "as soon as possible," linear programming, and online stochastic algorithms, and they do not always rely on precise time-estimation-based division. To our knowledge, no generally accepted guideline matches the predicted treatment duration for each patient to personalized treatment plans. In this study, a novel machine learning (ML)-based scheduling method for radiotherapy is proposed. The mainstay of this approach is to address the scheduling problem through accurate prediction of individual treatment times, which are subsequently applied to time slot assignment. It comprises three steps. In the first step, by splitting the entire treatment into subphases, factors related to the duration of patient positioning and treatment execution were extracted, analyzed, and classified into a standardized group. Next, we built a real-data-based prediction model for total treatment duration using the ML method. The treatment duration in this study was divided into two parts: positioning time (PT) and TT. Using the ML method, both PT and TT are predicted precisely at minute-level granularity, so that these data can be further used in the third step, which calculates all available time in the schedule at the same minute-level granularity. With precise accounting of the occupied and available time of the linac, scheduling management becomes more efficient and flexible, and tasks such as treatment, QA, and maintenance that affect radiotherapy durations can be arranged reasonably. MATERIALS AND METHODS In this study, we collaborated with the radiotherapy center of the Department of Oncology at the Second Affiliated Hospital of Kunming Medical University. We collected data on individual treatment duration for each patient on each workday, while the characteristics of positioning and the parameters of the radiotherapy plan were recorded, from November 2020 to August 2021, with 1665 data points in total. This radiotherapy center has one linac, an Infinity (Elekta Solutions AB, Stockholm, Sweden), which serves approximately seven million people in Kunming as one of the nine linacs in this area. The average treatment load of the radiotherapy center is approximately 60 patients daily, i.e., approximately 1200 patients covering all subtypes of cancer annually. Radiation therapy is a continuous course, since fractionated prescription doses are administered on sequential days during the treatment period. As a result, the time of the first treatment fraction becomes the direct factor affecting the timeliness of cancer management. Several studies have revealed negative effects induced by a delay in starting radiotherapy. However, the cases in our study waited between 1 and 3 weeks from confirmation of the radiotherapy plan by the oncologist to the irradiation of the first treatment fraction. The reasons for this are summarized as follows: (a) Overall fractions ranged from 25 to 33 per patient according to the prescription, unless a lower number of fractions was chosen, which occurs only when SBRT/SRS is needed. Meanwhile, the duration of a single treatment ranges from 5 to 25 min, depending on the immobilization and positioning approaches as well as the parameters of dosimetry delivery and irradiation technologies. In other words, the treatment schedule is generally filled with a relatively fixed number of appointment slots sustained over a period of time.
The newly confirmed radiotherapy plan can only be added to the treatment schedule once the treatment course of one of the previously listed patients is completed, or if the work hours of the therapists are extended to accommodate these additional cases. (b) Two common strategies are applied in defining the time slots for radiotherapy scheduling: block and non-block methods. In our case, the Department of Oncology at the Second Affiliated Hospital of Kunming Medical University adopted the block method, in which the therapists divided the daily work hours into several sections and divided each section into four slots of equal length. A slot is occupied when an appointment is accepted; similarly, the same slots on the successive workdays are locked for this patient. This leads to a notable waste of time, since treatment duration differs greatly from patient to patient. The linac must remain idle when adjacent patients take less treatment time than the slot setting; conversely, patients later in the queue face compounding waits as plans are placed intensively along the slot queue over time, leaving little available time for new cases. (c) For specialized radiotherapy techniques, such as SBRT/SRS, and personalized management, such as ART, strict quality assurance (QA) is enforced before treatment because the fractional dose and irradiation accuracy are increased, and this also occupies machine resources. In addition to patient QA, several activities related to linac operation, such as machine QA and device maintenance, must be considered in the time assignment of the linac. Table shows the relevant activity items, their definitions, and the time they consume on the linac. In this study, an auto-machine learning (auto-ML) method is introduced, and the model for time prediction is trained on credible data. The slots in the schedule are therefore redefined by the predicted length of time instead of a fixed setting, greatly improving the utilization rate of the linac; the time saved after optimization can then be used flexibly for additional cases, QA tasks, regular maintenance, and clinical research purposes. Auto-ML model training In this study, auto-ML was adopted to predict PT and TT in radiotherapy. Auto-ML comprises four steps: feature extraction, model building, model optimization, and evaluation. In the first step, we used a deep feature synthesis (DFS)-based method to process the source data. A flowchart of the DFS is shown in Figure . ALGORITHM 1 Generating features for the target entity In Figure , $E_B$ denotes the backward relationship tables and $E_F$ the forward relationship tables in a given dataset $E_{1},\dots,E_{K}$, where each entity table has features $1,\dots,J$. The details of generating features for the target entity are provided in Algorithm . In Algorithm , $\mathrm{dfeat}$ denotes the direct features applied over the forward relationships.
$\mathrm{efeat}$ denotes the entity features, which derive features by computing a value for each entry $x_{i,j}$:

(1) $x'_{i,j} = \mathrm{efeat}(x_{:,j}, i)$

$\mathrm{rfeat}$ denotes the relational features, which are applied over the backward relationships from $E_l$ to a new feature of $E_k$:

(2) $x'^{\,k}_{i,j} = \mathrm{rfeat}(x^{l}_{:,j} \mid e_{k} = i)$

In model building, we used the following optimization function as the model selection basis:

(3) $A^{*} \in \arg\min_{A \in \mathcal{A}} \frac{1}{K} \sum_{i=1}^{K} \mathcal{L}\big(A, D_{\mathrm{train}}^{(i)}, D_{\mathrm{valid}}^{(i)}\big)$

where $\mathcal{L}(A, D_{\mathrm{train}}^{(i)}, D_{\mathrm{valid}}^{(i)})$ denotes the loss of model $A$ on the training set $D_{\mathrm{train}}^{(i)}$ and validation set $D_{\mathrm{valid}}^{(i)}$. After selecting the ML model, we used the following function to optimize the hyperparameters of the selected model:

(4) $\lambda^{*} \in \arg\min_{\lambda \in \Lambda} \frac{1}{K} \sum_{i=1}^{K} \mathcal{L}\big(A_{\lambda}, D_{\mathrm{train}}^{(i)}, D_{\mathrm{valid}}^{(i)}\big)$

where $\lambda_{1}, \dots, \lambda_{n}$ are the hyperparameters that must be set in $A$, and $\mathcal{A} = \{A^{(1)}, \dots, A^{(k)}\}$ denotes the set of candidate ML models. To demonstrate the time prediction accuracy of the proposed method, we use the accuracy within a range $T$ (in seconds) of the true treatment time:

(5) $acc = \frac{1}{n} \sum_{p=1}^{n} x_{p}, \qquad x_{p} = \begin{cases} 1 & \text{if } |\hat{t}_{p} - t_{p}| \le T \\ 0 & \text{otherwise} \end{cases}$

where $\hat{t}_{p}$ is the predicted time and $t_{p}$ the true time of case $p$.

Feature extraction In the preparation stage, data on the overall treatment duration of each individual patient were investigated, together with the diagnosis and irradiation plan, to determine the characteristic factors that potentially affect the lengths of positioning and treatment, respectively. Specifically, the entire treatment of one fraction is divided into PT and TT. For PT, the actions are further separated into patient immobilization and positioning verification, in which the immobilization method, immobilization site, and use of specialized ancillaries are the elements that make positioning duration differ among patients. Similarly, for TT, the duration depends on the complexity of the treatment plan, which can be quantified through factors including the treatment site, irradiation dose per fraction, irradiation technology, and number of control points. Table shows the feature factors extracted from the patient positioning and treatment phases. To guarantee that all feature data for modeling are reliable, all related terms are normalized through extraction from the Radiotherapy Planning System Monaco (Elekta Solutions AB, Stockholm, Sweden), the Oncology Information System Mosaiq (Elekta Solutions AB, Stockholm, Sweden), and the Radiotherapy Workflow Management System (RWMS) Via (Leg Limit., Kunming, China), recorded, and further used in ML model training. Table shows the information systems involved as sources of feature data acquisition. The durations of PT and TT were collected using the RWMS, with the start and stop times of each step automatically recorded. When a treatment appointment began, therapists could access the patient's treatment plan to check personalized positioning requirements and performed the appropriate positioning operation accordingly. The procedure advanced to the "Irradiation" step when "Positioning" was complete. The "Irradiation" process lasted differently depending on the individual radiotherapy plan executed.
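As a concrete illustration of the model selection, hyperparameter optimization, and evaluation in Eqs. (3)–(5) above, the sketch below performs k-fold model selection, hyperparameter tuning, and tolerance-based accuracy with scikit-learn. It is a minimal sketch under our own assumptions: the candidate models, the hyperparameter grids, and the feature matrix `X`/target vector `y` are placeholders, not the authors' actual auto-ML pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, cross_val_score

def tolerance_accuracy(y_true, y_pred, T=60.0):
    # Eq. (5): fraction of cases predicted within T seconds of the true time.
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true)) <= T))

def select_and_tune(X, y, k=5):
    # Eq. (3): pick the candidate model with the lowest k-fold validation loss.
    candidates = {
        "ridge": Ridge(),
        "rf": RandomForestRegressor(random_state=0),
        "gbr": GradientBoostingRegressor(random_state=0),
    }
    losses = {name: -cross_val_score(m, X, y, cv=k,
                                     scoring="neg_mean_absolute_error").mean()
              for name, m in candidates.items()}
    best = min(losses, key=losses.get)

    # Eq. (4): tune the hyperparameters of the selected model by k-fold search.
    grids = {
        "ridge": {"alpha": [0.1, 1.0, 10.0]},
        "rf": {"n_estimators": [100, 300], "max_depth": [None, 10]},
        "gbr": {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]},
    }
    search = GridSearchCV(candidates[best], grids[best], cv=k,
                          scoring="neg_mean_absolute_error")
    search.fit(X, y)
    return search.best_estimator_
```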
Since the "start" and "complete" time points of the "Positioning" and "Irradiation" steps were automatically recorded in the RWMS, the durations of PT and TT were automatically calculated and quantitatively analyzed. All PT and TT data involved in our study were calculated automatically from the RWMS, whose timing system is a standard calendar system with year, month, day, hour, minute, and second. There were no decimal points in the PT and TT times, since all time data are calculated in seconds, in accordance with the minimum recording unit of the timing system. RESULTS Predicting PT and TT In this section, all positioning and treatment data are real radiotherapy data provided by our cooperating institution, the radiotherapy center of the Department of Oncology at the Second Affiliated Hospital of Kunming Medical University (SAHKMU). We collected 1665 radiotherapy samples in total and used 1165 of them as training data and the remaining 500 as testing data. To prove the prediction performance of the proposed model, the multiple linear regression (MLR) time prediction model used in most hospitals was taken as the baseline for experimental comparison. Most hospitals use the following model to predict PT:

(6) $Y_{p} = C_{1} + \alpha_{1} X_{1} + \alpha_{2} X_{2} + \alpha_{3} X_{3}$

where the $\alpha_{i}$ denote the weights of the different parameters, $X_{1}$ denotes the coding of the treatment site, $X_{2}$ denotes whether the current treatment is the first fraction, and $X_{3}$ is the technique used in radiotherapy. The TT can be predicted by:

(7) $Y_{t} = C_{2} + \beta_{1} X_{1} + \beta_{2} X_{2} + \beta_{3} X_{3} + \beta_{4} X_{4}$

where the $\beta_{i}$ denote the weights of the different parameters and $X_{4}$ denotes the modulation of the radiotherapy machine. Fitted to the data, the weights in Eqs. (6) and (7) are as follows: $C_{1} = 3.551$, $\alpha_{1} = -0.0154$, $\alpha_{2} = 0.18287$, $\alpha_{3} = 0.3846$; $C_{2} = 3.5518$, $\beta_{1} = -0.0569$, $\beta_{2} = 0.3962$, $\beta_{3} = 0.0014$, $\beta_{4} = 0.496$. Table shows a comparison of the MLR baseline and the proposed method with respect to positioning accuracy and treatment accuracy. The proposed method outperforms the existing MLR method in both positioning and treatment prediction accuracy. Within 60 s, the prediction accuracies of the proposed positioning and treatment times reached 81% and 87%, respectively; within 90 s, the prediction accuracy of the treatment time was 100%. Figure shows the prediction time comparison between the MLR baseline and the proposed method. The times from 30 to 105 s in the first row denote the error ranges between the predicted and true values on the test data. As seen in Figure , our prediction accuracy has a prominent advantage over the traditional MLR-based method in the small error ranges, such as 30, 45, and 60 s. Figure also shows the potential of the proposed method in real radiation therapy, which can effectively save time and improve scheduling efficiency. Scheduling management tool based on the prediction results of auto-ML Based on the prediction results of the auto-ML algorithm introduced in Section , an improved scheduling management tool was designed to organize treatment appointments accurately to the minute. Figure illustrates the principle of the scheduling procedure. As shown in Figure , the treatment appointment schedule is divided into several large time slots, "Slot L," across the operation time of the linac. The length of "Slot L" depends on the shift system of the radiotherapy technicians/therapists or the reservation time of patient groups. Each "Slot L_i" is further divided into equal unit slots, "Slot S," with an identical length of 1 min.
Therefore, the predicted entire treatment duration for each patient occupies the corresponding number of "Slot S" units when scheduled. A simple example is provided for the Radiotherapy Center of the Department of Oncology at SAHKMU. The operation time of the linac at SAHKMU is from 08:00 to 18:00, 10 h in total, divided into ten 1-h "Slot L" units. Daily QA, including dosimetry and mechanical accuracy checks of the linac before the treatment of the first scheduled patient, is mandatory at SAHKMU and takes approximately 20 min. Apart from that, altogether 520 min are available for arranging patients' appointments, with 60 "Slot S" units included in each "Slot L_i." As scheduled patients accumulate, the remaining time, namely the available "Slot S" units, becomes important, since it indicates which kinds of tasks can be assigned without affecting the next "Slot L." Therefore, at the end of each "Slot L_i," an indicator "State L_i" is placed to show the remaining capacity of "L_i." Three indicator colors, "Green," "Orange," and "Red," are adopted, meaning "available time in Slot L_i more than 20 min," "available time in Slot L_i less than 10 min," and "available time in Slot L_i less than 5 min," respectively. When an emergency case must be accommodated, this helps adjust the treatment order with a clear indication of the remaining time and task lengths. In addition, combining tasks of different magnitudes prevents frequent errors or breakdowns by avoiding intensive, long-sustained irradiation. The scheduling tool proposed in this paper was tested in the collaborating radiotherapy center, the Department of Oncology at the Second Affiliated Hospital of Kunming Medical University. The testing period lasted four months, during which about one hour per workday, on average, was saved out of 10 work hours in total when arranging the same number of treatment appointments; that is, a 10% improvement in scheduling efficiency was achieved with the proposed scheduling tool. Meanwhile, with the saved work hours, an additional six to eight new patients can be treated on a single linear accelerator. DISCUSSION Scheduling management, a newly recognized problem in precise radiotherapy, has attracted considerable interest in recent years. However, several studies have paid attention only to the time spent on pretreatment preparation, while the efficiency of scheduling management, that is, the utilization rate of the radiation device, has been ignored. In this study, the realistic clinical workflow was addressed more appropriately. The entire treatment duration of each patient was divided into two parts for analysis: PT and TT. Features characterizing the durations of positioning and treatment were extracted from a standardized RT information system, on which basis a neural network algorithm was subsequently built to predict PT and TT separately. As the estimated positioning and treatment times are calculated independently using the auto-ML model, the overall treatment duration prediction for an individual patient equals the sum of these two results; this is further used in a novel scheduling arrangement tool that manages the available working hours of the linac accurately to the minute. Our proposed time prediction method and scheduling management tool won the second prize at the 5th National Smart Health Innovation Final Competition (SHIC 2022) and the 1st Medical Information Innovation Conference of China.
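Returning to the slot mechanism described in the previous section, the sketch below illustrates its logic; the class and method names are ours, but the structure (1-min "Slot S" units inside hour-long "Slot L" blocks, with a green/orange/red remaining-capacity indicator) follows the description above. The handling of the 10–20 min band, which the text leaves unspecified, is our assumption.

```python
from dataclasses import dataclass, field

@dataclass
class SlotL:
    """An hour-long scheduling block ('Slot L') made of 1-min 'Slot S' units."""
    capacity_min: int = 60
    booked: list = field(default_factory=list)  # (task_id, predicted_minutes)

    def available(self) -> int:
        return self.capacity_min - sum(m for _, m in self.booked)

    def try_book(self, task_id: str, predicted_minutes: int) -> bool:
        # Book only if the predicted duration fits the remaining Slot S units.
        if predicted_minutes <= self.available():
            self.booked.append((task_id, predicted_minutes))
            return True
        return False

    def state(self) -> str:
        # 'State L_i' indicator per the thresholds described in the text.
        rest = self.available()
        if rest < 5:
            return "red"      # less than 5 min available
        if rest < 10:
            return "orange"   # less than 10 min available
        if rest > 20:
            return "green"    # more than 20 min available
        return "orange"       # 10-20 min band: unspecified in the text; assumed orange

# Example: a 10-h workday with mandatory ~20 min of daily QA in the first block.
day = [SlotL() for _ in range(10)]
day[0].try_book("daily-QA", 20)
```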
Our study strongly indicates that a linac can be used more fully for treatment by accommodating 20%–30% more patients within regular work hours, depending on the duration of the new cases; this indicates that the prediction results can precisely reflect the realistic length of the entire treatment time, so that the arrangement of schedule slots can be guaranteed. The robustness of the scheduling management method is significantly higher than that of previously reported scheduling management assisted by prediction of future workload, because the capability to handle dynamic cases and events is enhanced by implementing an online scheduling adjustment mechanism. This appears similar to the approaches reported by Legrain et al. and Chang et al., who used online stochastic algorithms to deal with dynamic treatment tasks, but the improvement in our study is notable because a neural network is introduced to generate a highly precise estimation of the entire treatment time for each patient. Built on the dynamic scheduling management mechanism, the combination with auto-ML strengthens the scheduling accuracy. In addition, from the perspective of the patients, improving the linac utilization rate shortens the waiting period until their first fraction of RT starts. Therefore, tumor progression can be kept at a stable level, which improves patient stratification. Meanwhile, the time saved by the improved efficiency of the linac can benefit both clinical and non-treatment aspects. According to the data analysis in our research, technologies such as complicated positioning methods and the application of IGRT, which are enforced in modern precise radiation strategies, generally take considerably longer. The high efficiency of the linac provides more time to implement these technologies. In addition, non-treatment tasks, such as machine QA, device maintenance, and clinical scientific trials that occupy machine resources, can be arranged more flexibly and reasonably. Moreover, in this study, the visualization of duration length is proposed as a novel solution for managing appointments. Without a deliberate combination of appointments with time lengths of different orders of magnitude, intensive workloads for linacs are inevitable. CONCLUSION The last link of the radiotherapy chain is dose irradiation, namely, the execution of the radiotherapy plan, which is generated after the preceding operations in the radiotherapy chain: diagnosis, CT image acquisition, contouring of the regions of interest (ROIs), and treatment planning. Therefore, as the last link of the radiotherapy chain, irradiation is of great significance, highlighting both the timeliness and the efficiency of appointment management. This is particularly true when there is an imbalance between medical resources and treatment loads. In this study, a scheduling management method based on an ML algorithm is proposed for the first time, in which the ML model is trained to produce highly precise predictions of individual treatment time per fraction, and the predicted time is further used in the arrangement of appointments. According to the results, artificial intelligence has been successfully used to solve prediction problems affected by various abstract features and, in turn, to assist clinical practice. By using the predicted times for scheduling management, the utilization rate of the linac is improved, which results in a more reasonable assignment of linac work hours for both treatment and non-treatment purposes.
The guarantee of the necessary QA and maintenance of the linac ensures that the device can deliver RT plans with satisfactory dosimetric and mechanical accuracy. Similarly, it helps keep the linac in a sustainable operating state. Another advantage revealed in this study, which benefits the life-cycle care of treatment machines, is the mixing, through appointment arrangement, of treatment plans whose time lengths differ by orders of magnitude. This reduces machine errors and breakdowns caused by long periods of intensive irradiation. Finally, this study strengthened the value of the medical data generated during the treatment process. Although many similar data types are produced discretely, separately, and independently, the development and implementation of various RT information systems provide mature platforms for collecting standardized data. Once these neglected medical data are fully utilized through information technology, they hold great potential for further scientific research. All authors contributed significantly to the performed work and approved the manuscript for publication. The authors declare that there are no conflicts of interest regarding the publication of this article.
Chain entanglement enhanced strong and tough wool keratin/albumin fibers for bioabsorbable and immunocompatible surgical sutures
02eb1b35-c27f-4ef1-aee6-88dc638b6a48
11950410
Suturing[mh]
Protein-engineered materials, renowned for their exceptional biocompatibility and biodegradability, have captured considerable attention for their potential in fabricating fibers – , adhesives , , hydrogels , films , , and scaffolds , for various biomedical applications. In high-performance protein fibers, mechanical properties are of paramount importance, particularly in biomedical applications such as artificial tendons , surgical sutures , , biological patches , and bioelectronics . However, the inherent conflict between strength and toughness in fiber mechanics often results in mechanically unbalanced materials , thereby limiting their applications. In nature, protein fibers, exemplified by spider silk, exhibit exceptional strength and toughness attributable to their unique molecular architecture, featuring both hydrophobic and hydrophilic domains . During fiber spinning, spidroins orchestrate the formation of well-organized supramolecular nanostructure networks, wherein self-assembled rigid crystalline β-sheets are embedded within soft amorphous matrices . Although this intricate molecular arrangement endows silk with both superior strength and toughness, it also results in a delayed biodegradation process for in vivo applications. For surgical sutures, in vivo biodegradability is a critical consideration. Generally, commercially available silk sutures are classified as non-degradable or as having a long degradation period, since they take ~2 years to degrade completely in vivo, far beyond the sixty-day cutoff set by regulatory agencies , . In contrast, non-silk proteins, lacking such refined structures and assembly processes, often pose significant challenges for fabricating strong and tough fibers. Recent advancements in protein fibers derived from non-silk proteins, such as recombinant supercharged elastin-like proteins (SELPs) and SELP-fused modular proteins , , , as well as various globular proteins , , have demonstrated considerable promise for biomedical applications. Nonetheless, these proteins often require glutaraldehyde to couple protein chains because of their intrinsically non-fibrous structures. Although glutaraldehyde can confer a durable mechanical support network, it raises concerns regarding biocompatibility, as residual aldehyde groups may induce cytotoxicity or inflammatory responses . Furthermore, the crosslinked network may reduce in vivo degradability . Wool-derived keratin has attracted tremendous attention as a non-silk protein material owing to its abundance and excellent biocompatibility , , . However, fibers derived from regenerated wool keratin often exhibit high strength but low toughness – . This phenomenon can be attributed to the prevalence of α-helix structures and a high cystine content (7–13%) within keratin. The α-helix structures furnish a rigid nano-skeleton, while the robust disulfide bonds between nano-fibrils establish a dense crosslinking network, imparting high strength but constraining the extension of the nano-spiral structures during tension, thus resulting in diminished ductility and toughness. Despite attempts to enhance mechanical properties through compounding with polymers , , cellulose – , viscose , and graphene , these keratin-based fibers persistently exhibit a mismatch between strength and toughness. Here, we develop a protein chain entanglement-reinforced strategy to fabricate high-performance composite fibers from keratin and albumin proteins.
Chain entanglement is a topological effect arising from the inability of polymer chains to pass through each other , which has been demonstrated to provide significant stiffening and toughening effects in polymer and protein hydrogels – . High-concentration urea/dithiothreitol (DTT) is utilized to unfold and volumetrically expand the proteins, thus promoting protein chain entanglement. In this context, through denaturation and subsequent secondary structure complementarity, low-cost regenerated keratin and bovine serum albumin (BSA) are spun into strong and tough composite fibers, which rely on the inherent cysteine oxidation crosslinking of the proteins without requiring external crosslinking agents. Blending keratin with BSA balances the contents of α-helix and β-sheet structures. The resulting drawn keratin/BSA composite fibers (DKBFs) exhibit a breaking strength of 249.9 ± 8.3 MPa and a toughness of 69.9 ± 10.0 MJ m−3, surpassing those of natural or regenerated keratin fibers and even comparable to many silk fibers. The redox-responsive and hydration-induced mechanical behaviors of DKBFs are also explored. In addition, the DKBFs exhibit favorable biocompatibility, degradability, and immunocompatibility, demonstrating good wound closure as surgical sutures. Our exploration demonstrates the feasibility of an entanglement-reinforced strategy for tuning protein fiber mechanics and opens an avenue for developing mechanically balanced and highly biocompatible regenerated protein fibers.

Wet-spinning preparation of keratin/BSA composite fibers
Regulation of molecular entanglement of spinning dope via protein unfolding
Modulation of composite fiber mechanics via balancing molecular structures
Drawing-induced rearrangement of the molecular structure of DKBFs
Mechanical stability and redox-triggered reversible mechanics of DKBFs
Hydration-induced mechanical behaviors of DKBFs
In vivo immunocompatibility and wound closure of DKBF sutures

We utilize a protein chain entanglement approach to produce keratin-based fibers through wet spinning. Initially, keratin extraction from wool was achieved using a reduction solvent containing 9.8 M lithium bromide (LiBr) and 0.1 M DTT, effectively facilitating wool dissolution . Subsequent refinement of the wool keratin involved dialysis and freeze-drying. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis of the extracted keratin shows clear bands at ~27 kDa, ~37 kDa, ~40 kDa, and >180 kDa, as observed in Supplementary Fig. . Circular dichroism (CD) spectra analysis illustrates a predominant α-helix structure with peaks at ~208 nm and 222 nm (Supplementary Fig. ). These findings validate the efficacy of the keratin extraction and the integrity of the keratin molecular backbone , . Urea, combined with DTT, serves as a critical component in preparing a mixed spinning dope comprising keratin and BSA. Here, DTT facilitates the cleavage of disulfide bonds, while urea molecules permeate between the protein molecular chains, disrupting hydrogen bonds. This process transforms both the keratin and BSA molecules into random structures, ultimately forming a protein entanglement system enriched with a large quantity of cysteine residues, as illustrated in Fig. . The abundance of reactive thiol groups underscores their pivotal role in governing fiber formation and determining fiber mechanics.
During the wet-spinning process depicted in Fig. , the entanglement-enhanced protein spinning dope is extruded from a narrow needle, where fluid-induced shear force aligns the protein chains, into a coagulation bath (pH 2.0) consisting of 2% (w/v) H 2 O 2 and 75% ethanol. Within this coagulation medium, oxidative cross-linking facilitated by H 2 O 2 and protein refolding induced by acid and dehydration proceed concurrently, facilitating the fiber's transition from solution to the solid state. Remarkably, a mere 200 mg of protein yields fibers approximately 480 m in length, as shown in Fig. . The scanning electron microscope (SEM) images in Fig. show that DKBF exhibits a smooth surface and a uniform diameter of around 12 μm. Moreover, the DKBF exhibits notable birefringence, as observed in the polarized optical microscope (POM) image (Fig. ), demonstrating a well-aligned molecular orientation. Figure indicates that the DKBF can be securely knotted without compromising its elementary filament structure, confirming its flexibility and high tensile strength. The mechanical properties of keratin-based fibers under various treatments are summarized in Fig. , revealing that DKBF exhibits balanced mechanical properties, with a breaking strength of 249.9 ± 8.3 MPa and a toughness of 69.9 ± 10.0 MJ m−3, in stark contrast to pure keratin fiber (PKF) and undrawn keratin/BSA composite fiber (UKBF). The entanglement of the spinning dope through protein unfolding was then investigated. The formation of protein chain entanglement is intricately linked to the inherent characteristics of the proteins, encompassing factors such as molecular weight and structure. Achieving entanglement typically necessitates altering the structure of the protein molecules so that they unfold to the greatest extent possible . Typically, the native structure of proteins is upheld by intramolecular disulfide bonds and various non-covalent interactions, including electrostatic, hydrophobic, and hydrogen-bonding interactions . Hence, to induce protein unfolding, it becomes imperative to cleave disulfide bonds with the assistance of denaturants such as trifluoroethanol, urea, or guanidine hydrochloride, which further stabilize the unfolded protein chains , . Herein, we employed a solvent consisting of urea and DTT to dissolve the proteins. For both BSA and keratin, the emission observed at 330 nm under aqueous conditions shifts to 348 nm upon adding 8 M urea and 0.1 M DTT (Supplementary Fig. ), indicating greater exposure of the buried tryptophan residues (Trp 214) within the helical structure to the solvent during protein unfolding . Molecular dynamics (MD) simulations of protein unfolding were conducted using urea to disrupt hydrogen bonding and DTT to cleave disulfide linkages (Fig. ). For keratin, under simulated heating conditions (90 °C) and the shear forces arising during stirring, a progressive increase in the radius of gyration ( Rg ) and solvent-accessible surface area (SASA) was observed. Simultaneously, reductions in intramolecular hydrogen bonds (H-bonds), van der Waals (VDW) interactions, and Coulombic energy (CE) were recorded, along with an increase in coil structure and a decrease in α-helix content, collectively indicating a gradual unfolding process (Supplementary Fig. ). Small-angle X-ray scattering (SAXS) analysis of keratin solutions at ambient temperature revealed an Rg of 3.8 nm (Fig. and Supplementary Table ), signifying a transition from nanostructured aggregates to a disordered single-molecule state (Supplementary Fig. ).
Conversely, BSA dissolved readily at room temperature, exhibiting a similar unfolding trajectory but with a significantly larger equilibrium Rg (Supplementary Fig. ). SAXS measurements further demonstrated an increase in the Rg from 2.8 nm in aqueous solution to 5.6 nm in the presence of urea/DTT (Supplementary Fig. ). When the two proteins were mixed under equilibrium conditions, pronounced interprotein interactions were observed. Fluctuations in the Rg indicated ongoing transitions between partial unfolding and refolding within the protein structures. These conformational dynamics facilitated protein–protein collisions and subsequent entanglement (Fig. ). Concurrently, the increases in SASA, VDW interactions, and CE provided strong evidence of partial aggregation, further supporting the presence of intermolecular entanglement between the two proteins. Furthermore, a downward trend is observed in the coil structure, while the quantity of α-helix structures increases, suggesting the formation of new α-helix secondary structures. This transformation likely contributes to the stabilization of the entangled structure (Supplementary Fig. ). Subsequently, we performed a comprehensive investigation of the rheological properties of the protein spinning solutions, demonstrating that DTT is essential for modulating both the rheological behavior and the molecular entanglement within protein systems. At 200 mg/mL, keratin exhibited limited swelling in 8 M urea and failed to dissolve completely without DTT. However, the inclusion of DTT enabled full dissolution and the formation of a stable spinning solution (Supplementary Fig. ). In contrast, BSA demonstrated distinct behavior, dissolving completely without DTT, although it formed a low-viscosity solution with minimal shear-thinning properties. The addition of DTT significantly increased the viscosity, far exceeding that of the untreated solution, underscoring the pivotal role of disulfide bond disruption in enhancing the rheological properties (Supplementary Fig. and Movie ). Fiber formation experiments further confirmed the necessity of DTT for successful fiber fabrication from BSA solutions (Supplementary Movie ). To further elucidate the molecular entanglement, we quantified the non-Newtonian index ( n -value), which serves as an indicator of the extent of molecular interactions and entanglement within the system, with lower n -values reflecting higher degrees of entanglement . The DTT-treated BSA solution exhibited a significantly lower n -value of 0.74 compared with 0.98 for the untreated solution, highlighting enhanced molecular networking (Supplementary Fig. ). These results demonstrated that DTT effectively disrupts disulfide bonds, thereby inducing the transition of BSA from a compact globular structure to an extended random-coil conformation. This structural transformation facilitates increased molecular interpenetration and overlap, ultimately maximizing the degree of entanglement. In contrast, keratin solutions, even when treated with DTT, exhibited lower shear viscosity and higher n -values, indicative of weaker molecular entanglement. This behavior is likely attributable to keratin's small Rg , which inherently limits intermolecular crosslinking and restricts the entanglement potential of the system .
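To make the n-value analysis concrete, here is a minimal Python sketch of how the consistency coefficient K and the non-Newtonian exponent n could be extracted from flow-curve data with the power-law model τ = Kγ^n given in the Methods. This is our illustration with synthetic data, not the authors' analysis script.

import numpy as np
from scipy.optimize import curve_fit

def power_law(shear_rate, K, n):
    # Ostwald-de Waele model: shear stress = K * (shear rate)**n
    return K * shear_rate**n

# Synthetic flow-curve data mimicking a shear-thinning dope (n < 1); not measured values.
shear_rate = np.logspace(-2, 3, 30)  # 0.01 to 1000 1/s, as in the rheometer sweep
true_K, true_n = 12.0, 0.74
noise = 1 + 0.03 * np.random.default_rng(1).normal(size=30)
stress = power_law(shear_rate, true_K, true_n) * noise

(K_fit, n_fit), _ = curve_fit(power_law, shear_rate, stress, p0=(1.0, 1.0))
print(f"K = {K_fit:.2f}, n = {n_fit:.2f}")  # n well below 1 indicates strong entanglement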
Therefore, we introduced highly entangled BSA solutions to modulate the molecular entanglement state, and a clear trend of increasing entanglement with higher BSA content was demonstrated, as evidenced by a consistent decrease in n -values with increasing BSA proportions (Fig. ). Achieving high-strength and high-toughness composite fibers requires not only optimizing the molecular entanglement within the spinning solution but also precisely controlling the coagulation bath conditions to balance protein refolding and molecular networking during fiber formation. We therefore systematically investigated three critical parameters of the coagulation bath: pH, ethanol content, and H 2 O 2 content. It was observed that in a coagulation bath containing only 75% ethanol at pH 2.0, fiber production was hindered, likely due to insufficient interactions among the protein chains. Conversely, in an aqueous solution containing only H 2 O 2 , the extruded protein rapidly formed a gel at the outlet, blocking the channel. This gel exhibited a high proportion of random structures (Supplementary Table ), indicating ineffective dehydration and a failure to achieve timely structural stabilization. However, when oxidation and dehydration occurred simultaneously during coagulation, stable fiber formation was achieved (Supplementary Movie ). This approach enabled the protein molecules to retain their entanglement while undergoing incomplete refolding and efficient dehydration, facilitating the formation of robust fibers. The influence of pH and H 2 O 2 on the conformational distribution of the protein molecules was further analyzed using Fourier transform infrared (FTIR) spectroscopy. The results revealed that excessive H 2 O 2 adversely impacted the formation of β-sheet structures, likely because the rapid establishment of oxidative cross-linking networks impeded the transition from helix to β-sheet conformations. Additionally, an increase in hydrogen ion concentration was found to promote efficient protein refolding while reducing the prevalence of random-coil structures (Supplementary Fig. and Table ). Consequently, a coagulation solution comprising 75% ethanol, pH 2.0, and 2% (w/v) H 2 O 2 was validated for fiber preparation. Subsequently, we investigated the mechanical tensile behavior of fibers across various protein weight ratios, as illustrated in Fig. and Supplementary Fig. and Table . The results reveal that, despite exhibiting an enhanced tensile strain of 46.3 ± 9.4%, PKFs demonstrate inferior mechanical strength compared with previously reported keratin-based fibers. This could be attributed to the weak internal entanglement within PKFs, which makes it difficult to offset the loss of the rigid helix structure upon refolding. With increasing BSA content, the tensile strain of the fibers gradually rises from 46.3 ± 9.4% to 111.2 ± 23.8%, while the mechanical strength first ascends and then descends. At a keratin/BSA ratio of 3:2, the UKBFs exhibit optimal mechanical performance, with a breaking strength of 119.2 ± 13.3 MPa and a toughness of 98.8 ± 10.8 MJ m−3 (Fig. ). We employed deconvolution analysis of the typical FTIR peaks (Supplementary Fig. and Fig. ) located at 1650 cm−1, 1620 cm−1, and 1670 cm−1, corresponding to α-helix, β-sheet, and disordered structures, respectively. As the proportion of BSA rises, the helical structures within the fibers gradually diminish, while the β-sheet content exhibits an initial rise followed by a decline.
This behavior can be attributed to BSA's role in balancing the rigid secondary structures and chain entanglement within the fibers. Both keratin and BSA are rich in thiol groups, and the transition from solution to fiber involves a reorganization of the disulfide bond networks, which govern their interactions and influence the distribution of secondary structures. Initially, the introduction of BSA promotes molecular chain entanglement and enhances the interactions between keratin and BSA. During fiber formation, oxidative cross-linking occurs, which reduces the density of disulfide bonds that would otherwise form exclusively between keratin molecules. This dilution increases chain flexibility, enabling the stretching of helical structures into β-sheets , thereby increasing the β-sheet content. However, as the BSA content increases further, excessive entanglement begins to hinder these structural transitions, favoring the formation of random coils and reducing the β-sheet content. This entanglement cannot compensate for the loss of keratin's rigid structure, leading to increased tensile strain but reduced tensile strength. Therefore, precisely controlling the BSA content to regulate chain entanglement while maintaining adequate protein refolding capacity is essential for achieving an optimal balance between strength and toughness through the coordination of secondary structures. Given that mechanical drawing is a critical procedure for improving tensile strength via molecular orientation and filament densification – , we applied continuous stress to UKBFs to produce drawn keratin/BSA composite fibers (DKBFs). This procedure efficiently enhances fibril alignment while greatly promoting the secondary-structure transition from α-helix to β-sheet , as depicted in Fig. . The morphology evolution observed in the SEM images (Fig. ) shows a significant reduction in diameter from ~29 µm to ~12 µm as the drawing ratio ( λ ) increases from 1.0 to 3.0. Two-dimensional SAXS characterizes the molecular arrangement of the protein at various drawing ratios (Fig. ). As summarized in Fig. , DKBFs exhibit an improved degree of order, with the orientation factor ( f ) increasing from 0.857 to 0.948. This is consistent with the enhanced birefringence observed upon drawing, as evidenced by the POM images in Supplementary Figs. . The calculated β-sheet proportion increases significantly from 44.2% to 64.1% as the drawing ratio rises from 1.0 to 3.0 (Fig. and Supplementary Fig. ). In addition, Fig. demonstrates a strong positive correlation between tensile strength and drawing ratio (Supplementary Fig. and Table ). As the drawing ratio increases from 1.0 to 3.0, the tensile strength of DKBFs rises to 249.9 ± 8.3 MPa, a twofold enhancement over UKBFs. While a high drawing ratio improves the tensile strength of DKBFs, the drawing process consumes part of the deformation capacity of the protein molecular chains, reducing the elongation at break to 40.8 ± 4.5%. Nonetheless, the toughness of DKBFs remains 69.9 ± 10.0 MJ m−3 after mechanical drawing, demonstrating the effectiveness of enhanced protein chain entanglement in improving the comprehensive mechanical properties. It is worth noting that the comprehensive mechanical properties of the current DKBFs are comparable to those of natural wool fibers (Supplementary Fig. )
and even outperform those of various natural and artificial silk fibers, including degummed silk , regenerated silk fibers , , recombinant spider silk fibers , , and regenerated keratin fibers , , , , , as summarized in Fig. and Supplementary Table . Building on this strategy of chain entanglement enhancement and stretch-induced reinforcement to regulate the mechanical properties of composite fibers composed of fibrous and globular proteins, we extended our investigation to other globular proteins. Considering that disulfide crosslinking of cysteine residues can significantly strengthen the interactions between keratin and globular proteins, selecting globular proteins with an adequate number of cysteine residues is essential. We therefore chose three commonly used globular proteins—soy protein isolate (SPI), β-lactoglobulin (βLB), and ovalbumin (OVA)—for complexation with keratin. The results indicate that fibers composed of the single proteins exhibit relatively low mechanical properties, with tensile strengths below 40 MPa and toughness below 25 MJ m−3, inferior to those of pure BSA fibers (Supplementary Figs. and Table ). This discrepancy may be attributed to the intrinsic characteristics of the different globular proteins, such as molecular weight and amino acid sequence. Nevertheless, after complexation with keratin and subsequent drawing, the resulting fibers demonstrated significant improvements in mechanical properties, including strength, modulus, and toughness (Supplementary Fig. and Table ). These findings highlight the broad applicability and potential versatility of the proposed strategy. Subsequently, the mechanical stability of the fibers was assessed through cyclic tensile testing. As depicted in Supplementary Fig. , when the tensile strain is set to 5%, within the elastic deformation region, the fibers exhibit robust elastic recovery, with the stress remaining constant. Upon increasing the strain to 20%, although notable irreversible deformation is observed in the tensile curve, the tensile stress is still maintained at ~160 MPa. These results indicate the excellent mechanical stability of the fibers. Furthermore, thermal stability was evaluated using thermogravimetric analysis (TGA). The resulting DKBFs demonstrate a decomposition temperature of ~250 °C in a nitrogen (N 2 ) atmosphere (Supplementary Fig. ), closely resembling that of natural wool with its protective rigid scale layer (Supplementary Fig. ), which could be attributed to the extensive disulfide crosslinking in DKBFs . Considering the high content of disulfide (S-S) bonds between fibrils, which provide mechanical strength to DKBFs, we explored the dynamic mechanical behavior of DKBFs based on the reversible cleavage of S-S bonds. As illustrated in Fig. , treatment with DTT leads to the breakage and reduction of some disulfide bonds to thiol groups. Conversely, upon re-oxidation, the disulfide bonds are restored. Raman spectroscopy (Fig. ) was employed to assess the relative S-S content (R ss ) around 510 cm−1, revealing a decrease from 24.5% to 14.8% under DTT treatment, followed by restoration to 28.9% after re-oxidation with H 2 O 2 , indicating the redox-responsive nature of DKBFs. The mechanical properties of DKBFs thus undergo dynamic changes through the reversible cleavage and reconstruction of disulfide bonds.
As the DTT concentration increases from 10 mM to 30 mM, the tensile strength and toughness gradually decrease, down to a breaking strength of 82.8 ± 5.7 MPa and a toughness of 25.5 ± 2.9 MJ m−3, indicating increasing disruption of the disulfide bonds. However, after treating these mechanically weakened fibers with a 2% (w/v) H 2 O 2 solution overnight, both the breaking strength and the toughness are restored to their original levels (Fig. , and Supplementary Figs. and Table ), demonstrating the excellent mechanical reversibility of DKBFs. This highlights their potential application in redox-stimuli-responsive smart materials. Generally, the β-sheets in DKBFs are kinetically stable owing to the H-bond network, which hinders their reconversion into the thermodynamically more stable α-helixes. Therefore, a shape-memory cycle can be elaborated in which the H-bond network functions as a locking mechanism ensuring the persistence of the deformed shape (Fig. ) . Upon hydration (Fig. and Supplementary Movie ), the DKBF bundle (state A) shrinks from approximately 15 cm to ~14 cm (state B), eventually stabilizing at about 13 cm (state B') after air drying. Subsequently, the contracted fiber bundle is re-hydrated and manually wet-stretched to a length of about 15 cm, then maintained under load at room temperature until the bundle dries (state C). Upon relaxation, no significant length change is observed between the stretched (state C) and relaxed (state A') forms. Notably, when the bundle is re-dried after being re-hydrated in state A', its length returns to that of state B', demonstrating the shape-memory cycle of DKBF bundles. Figure and Supplementary Fig. illustrate the changes in β-sheet and α-helix content in DKBF. The β-sheet content decreases from 64.1% to 34.8%, while the α-helix content increases from 24.6% to 42.5% upon hydration and dehydration. Upon subsequent re-stretching during rehydration and air drying, these structural shifts revert to their original proportions, demonstrating the reversible transformation between β-sheets and α-helixes throughout the shape-memory cycle. Additionally, notable reversible changes in the birefringence of the filament bundle are observed during the stretching-hydration-drying cycles. Initially, the filament bundle contracts, accompanied by a decrease in birefringence, which is then restored to its original high-intensity state upon subsequent re-stretching and drying (Supplementary Fig. ). This reversible transition is predominantly regulated by hydrogen bonding. Notably, the transition from β-sheet to α-helix involves the disruption of the interchain hydrogen bonds of the β-sheet structure. This process is facilitated by the incorporation of water molecules, which establish new hydrogen bonds with the protein backbone, thereby stabilizing a more flexible and hydrated α-helix configuration. Under mechanical stretching, the protein chains align, enabling the reformation of interchain hydrogen bonds and restoring the compact β-sheet structure as water molecules are expelled, completing the reversible cycle . Leveraging this reversibility, DKBF bundles could serve as artificial muscles and water-sensitive fabric actuators. In addition to their excellent mechanical properties, DKBFs exhibit good biocompatibility and biodegradability, as their primary components are natural proteins and their ultimate degradation products are amino acids.
L929 cells cultured with DKBFs show healthy growth after 24 h, and the cells are observed to adhere to the fiber surface (Supplementary Figs. ). In vitro degradation of DKBFs in elastase shows that 5 mg of DKBFs in 1 mL of elastase (0.2 mg mL−1) is gradually degraded over 30 days (Supplementary Figs. ). To explore the potential biomedical applications of DKBFs, we further twisted and wove them into bundles, as shown in Fig. . In the hydrated state, the woven DKBF bundle demonstrates a breaking strength of 78.6 ± 5.9 MPa and a modulus of 13.5 ± 2.9 MPa, with a fracture strain exceeding 200% and a J-shaped stress-strain curve (Fig. and Supplementary Fig. ). This behavior could be attributed to water's plasticization of the molecular chains, which facilitates the conversion of β-sheets into stretchable α-helixes in the hydrated state . Concurrently, the twisting and weaving processes may offer additional deformation potential , ultimately resulting in the characteristic nonlinear tensile curves. Additionally, load–unload cycles of the bundles at different set strains were performed to characterize the mechanical stability in the hydrated state (Fig. ), demonstrating excellent deformation recovery even at a set strain of 50%. The comprehensive mechanical performance, biocompatibility, and degradability make DKBFs promising load-bearing materials for biomedical applications . In addition to their excellent mechanical and biological properties, the relatively low production cost of DKBFs, at just 0.0011 USD m−1 (Supplementary Table ), makes them particularly well-suited for surgical sutures. To further assess their in vivo performance, including degradability, immunocompatibility, and wound-healing potential, DKBF sutures were implanted subcutaneously in C57BL/6 mice (Fig. ). Absorbable PLGA sutures and non-absorbable silk sutures were employed as controls. No notable degradation is observed for either the PLGA or the silk sutures during the 2-week period. In contrast, the DKBF sutures exhibit significant biodegradability, approaching complete degradation by day 14 (Fig. ). Histological (H&E) staining images reveal that the PLGA and silk sutures elicit a strong inflammatory response by day 7, whereas the DKBF sutures induce negligible inflammation, performing significantly better than the other sutures. This is evidenced by increased cell infiltration into the interfilament spacings of the PLGA and silk sutures over time, along with the presence of foreign-body giant cells and proinflammatory eosinophils near the suture filaments. For the DKBF sutures, no giant cells, neutrophils, or macrophages were observed by day 7. Masson's trichrome (M&T) sections show no significant tissue regeneration around the PLGA and silk sutures, whereas loose collagen fibers appear around the DKBF suture, indicating its ability to promote tissue repair effectively (Fig. ). To demonstrate the use of DKBF sutures for wound closure, we further evaluated the healing of dorsal full-thickness skin incisions closed with PLGA, silk, or DKBF sutures in a mouse model (Fig. ). Macroscopic assessment of the wounds on day 7 shows that all sutures provide good wound closure, with no signs of infection (Fig. ). Histological assessment confirmed complete healing of all incisional wounds by primary intention, with minimal inflammation and fibrosis (Fig. ).
In summary, we have engineered a keratin/BSA composite fiber with both high strength and high toughness through enhanced protein chain entanglement, without the addition of chemical crosslinkers. The synergistic interaction between keratin and BSA effectively tailors the mechanical properties of the composite fibers. The resulting DKBFs demonstrate well-balanced mechanical performance, featuring a breaking strength of ~250 MPa and a toughness of around 70 MJ m−3, which matches or exceeds that of natural and artificial protein fibers. DKBFs can be intricately twisted and woven into bundles that exhibit high mechanical strength and stability under hydrated conditions. In addition, owing to their degradability and immunocompatibility, sutures made of DKBFs demonstrate good wound closure capability, with significantly less inflammation than PLGA and silk sutures. We envision that the strong and tough DKBF could provide valuable insights for innovative bio-regenerated fiber manufacturing and for stimuli-responsive and biomedical applications.

Wet spinning of keratin/BSA composite fibers
Preparation of DKBF bundles
Preparation of sutures
Mechanical tests
Physicochemical characterization
Molecular dynamic (MD) simulations
Animal work
Statistics and reproducibility

First, a denaturant solvent was prepared using 8 M urea and 0.1 M DTT. Keratin powder extracted from Angora wool was dissolved in the denaturant solvent at 90 °C, while BSA was dissolved at room temperature. Both the keratin and BSA solutions were degassed by centrifugation, resulting in a final concentration of 200 mg mL−1. The protein composite spinning dope was then prepared by mixing the solutions at different weight ratios. Afterward, the spinning dope was extruded by a syringe pump at a flow rate of 10 μL min−1 through a needle with an inner diameter of 210 µm and a length of 75 mm into a coagulation bath containing 2% (w/v) H 2 O 2 and 75% (v/v) ethanol at pH 2.0. Keratin filaments were collected at a rate of 1.6 m min−1. The drawn keratin/BSA composite fibers (DKBFs) were obtained at different drawing ratios ( λ = 1.0, 1.5, 2.0, 2.5, and 3.0) relative to the initial collection rate of 1.6 m min−1. The other globular proteins, including SPI, βLB, and OVA, were processed using the same procedure as BSA for the fabrication of composite fibers with keratin. The resulting fibers were then dried under ambient conditions (room temperature: ~25 °C; relative humidity: 50–60%). Fibers were collected into a bundle with a diameter of ~100 μm and a length of 15 cm. The bundle was twisted using a motor at 200 rpm for 2 min to obtain a 1-strand twisted DKBF bundle. Two 1-strand bundles, twisted in opposite directions, were combined to form a 2-strand twisted DKBF bundle. Three 1-strand bundles were braided together to create a 3-strand woven DKBF bundle. For suture preparation, a fiber bundle with a diameter of 200 μm and a length of 15 cm was twisted and then plied in reverse to form a suture with a macroscopic diameter of ~400 μm. Tensile testing of single fibers was performed using a testing machine (Discovery DMA 850, TA-Waters) equipped with an 18 N sensor and a pneumatic grip. Before testing, the fibers were affixed to a hollow paper frame (5 × 5 mm) and conditioned at room temperature (~25 °C) and 60% relative humidity for 24 h. Testing was conducted in Rate Control-Stress Ramp mode at a ramp rate of 0.1 N min−1.
Tensile testing of the DKBF bundles was carried out using a computerized electronic universal testing machine (Shenzhen Suns Technology Stock Co., Ltd.). Viscosity measurements of the spinning dope were performed on a rotational rheometer (HAAKE MARS, Thermo Fisher) with a 25 mm rotor at 25 °C, over a shear rate range of 0.01 to 1000 s−1. The consistency coefficient ( K ) and non-Newtonian exponent ( n ) were determined from the power-law equation τ = Kγ^n, where γ denotes the shear rate and τ the shear stress . The secondary structures of the prepared fibers were analyzed with an IR spectrophotometer (Nicolet iS20, Thermo Fisher Scientific). Deconvolution, baseline correction, and smoothing of the spectra were performed using Origin 2021 software; the Fit Peaks (Pro) mode of the peak analyzer tool with the Gaussian peak function was utilized for peak deconvolution. The content of disulfide bonds within the prepared fibers was analyzed with a Raman spectrometer (LabRAM HR Evolution, HORIBA Jobin Yvon) at an excitation wavelength of 633 nm. Normalization of the Raman spectral data was based on the C-H band, owing to its large peak area and its insensitivity to chemical treatment. The disulfide bond content in the fibers was calculated as R ss = S 510 / S 1450 , where S 510 is the peak area of the S-S band (470–560 cm−1) and S 1450 is the peak area of the C-H band (1430–1500 cm−1). To investigate the structural differences of the proteins under urea aqueous solvent conditions in the various structural systems, MD simulations were performed using the GROMACS 2021.5 open-source software package. The simulation systems were set up in a closed environment, with all systems maintained at a pressure of 1 bar (atmospheric pressure). The sequences of BSA (UniProt ID: P02769, ALBU_BOVIN) and keratin (UniProt ID: P25690, K1M2_SHEEP) were obtained from https://www.uniprot.org/ (Supplementary Table ). The temperature of the BSA system was set to 300 K and that of the keratin system to 363 K, with shear force applied to both, simulating the actual dissolution temperatures and stirring shear forces. The mixed system was simulated at 300 K. Periodic boundary conditions were applied with the protein centered in the simulation box, ensuring a minimum distance of 5.0 nm between the edge of the protein and the box edge. The structure topology files were converted into GROMACS-compatible formats using the pdb2gmx tool, with the AMBER ff14SB force field used for the simulations. The TIP3P model was employed for water molecules. After constructing the initial system, energy minimization was performed on all atoms using the steepest-descent method. With the protein positions restrained, a 1000 ps constant number of particles, volume, and temperature (NVT) equilibration was first performed, followed by a 1000 ps constant number of particles, pressure, and temperature (NPT) equilibration. After NVT and NPT equilibration, each system was simulated for 100 ns of production dynamics with a time step of 2 fs. Bond lengths were constrained using the LINCS (linear constraint solver) algorithm, and long-range electrostatic interactions were calculated using the particle mesh Ewald method.
Once all simulations were completed, structural analysis, including calculation of the Rg and the root-mean-square deviation, was performed using the GROMACS gmx modules. Snapshots of the unfolded protein structures were captured at 75 ns for BSA, 40 ns for keratin, and 100 ns for the composite system, respectively; these time points were determined based on the Rg values from the actual SAXS measurements. All mouse experiments were conducted in strict accordance with the relevant regulations, and the research protocols were approved by the Lab of Animal Experimental Ethical Inspection of Dr. Can Biotechnology (Zhejiang) Co., Ltd (approval number: DRK2023150287). Female wild-type (WT) C57BL/6 mice, aged 6 to 8 weeks, were obtained from the animal facility of the Zhejiang Academy of Medical Sciences. All animals were maintained under controlled environmental conditions, with a temperature of 18–29 °C, relative humidity of 45–55%, and a ventilation rate of 6–15 air changes per hour. The animals were provided a standard laboratory diet and housed under a 12-h light/dark cycle. For the subcutaneous implantation procedures, the sutures were washed with PBS buffer, sterilized via UV irradiation for 30 min, and implanted subcutaneously in C57BL/6 female mice. The implantation process was as follows: mice were anesthetized, and the dorsal skin was shaved and then disinfected with iodine. A longitudinal incision of approximately 6 mm was made on the dorsal surface using surgical scissors. Subsequently, the incisions were closed with PLGA, silk, and DKBF sutures, respectively. The mice were monitored until they recovered from anesthesia and were maintained for 3, 7, and 14 days. For the incision wound model, the sutures were sterilized via UV irradiation for 30 min. The procedure was conducted as follows: mice were anesthetized using 3% isoflurane in oxygen, and their skin was carefully shaved and disinfected with iodine. A longitudinal incision of ~8 mm was made on the dorsal surface using surgical scissors to provide access to the subcutaneous space. Subcutaneous pockets, about 0.5 cm away from the incision, were created with blunt forceps for the implantation of the sutures. After implantation, the incisions were closed with 5-0 taper-tipped PGA absorbable sutures. The mice were monitored until recovery from anesthesia and maintained for 7 days. They exhibited normal growth with no signs of discomfort after implantation, and no body-weight loss was observed throughout the experiment. At the specified time intervals, the mice were sacrificed, and the suture samples, along with adjacent tissues, were photographed, excised, and collected. The explanted samples were fixed in 10% formaldehyde solution for histological examination. Details regarding sample sizes and the statistical tests employed are provided in the figure captions. Data are presented as means ± standard deviation. Significance testing was performed using two-sample t -tests in OriginPro 2021 software. It should be noted that because the mechanical testing procedure involves clamping the paper-framed fibers before testing by dynamic mechanical analysis, some tests may fail owing to sample-preparation issues; the sample sizes actually achieved are therefore specified in the corresponding figure captions. All in vivo experimental results are derived from three independent experiments to ensure the reproducibility of the findings.
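As an illustration of the trajectory analysis step described in the MD methods above, the following is a minimal Python sketch computing the radius of gyration over a production trajectory with the MDAnalysis library. This is our illustration rather than the authors' workflow (they used the GROMACS gmx tools), and the file names are placeholders.

import MDAnalysis as mda

# Placeholder file names; any GROMACS-compatible topology/trajectory pair works.
u = mda.Universe("system.tpr", "production.xtc")
protein = u.select_atoms("protein")

rg_series = []
for ts in u.trajectory:
    # Radius of gyration (in Angstrom) of the protein at each frame.
    rg_series.append((ts.time, protein.radius_of_gyration()))

# Print every 100th frame; a rising plateau in Rg signals progressive unfolding.
for time_ps, rg in rg_series[::100]:
    print(f"t = {time_ps:8.1f} ps, Rg = {rg:6.2f} A")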
The value of Phosphohistone H3 as a cell proliferation marker in oral squamous cell carcinoma. A comparative study with Ki-67 and the mitotic activity index
9fa1d95f-69f1-4df8-9aed-5652028cf613
9445610
Anatomy[mh]
Oral squamous cell carcinoma (OSCC) represents the most common cancer of the head and neck region . It is characterized by aggressive biological behavior and an unfavorable prognosis . In order to improve the low survival rate of patients with OSCC, an important line of research is focused on the identification of molecular biomarkers of prognostic value that contribute to the selection of the most appropriate therapeutic plan . Cell proliferation is an essential biological process, key to growth and the maintenance of tissue homeostasis, whose loss of control plays a fundamental role in the development of malignant neoplasms . In fact, in several types of human cancer, the evaluation of cell proliferation is considered an important histological parameter . For its assessment, different molecular antigens have been identified which, when expressed by actively proliferating cells, can be used as predictive biomarkers of sustained proliferation . The mitotic activity index (MAI) and Ki-67 immunoexpression are the most widely used methods to determine tumor proliferative capacity; however, both methods are criticized for significant intra- and interobserver variability . The MAI represents the oldest method for determining the proliferative capacity of malignant neoplasms . It is obtained by counting normal and atypical mitotic figures (MF) in a cell population of known number . The Ki-67 antigen is a non-histone nuclear protein present in all active phases of the cell cycle (G1, S, G2, and M) and absent only in resting cells (G0) . Immunohistochemical techniques (IHC) with the Ki-67 antibody allow the identification of the Ki-67 protein in tissue samples . Actively proliferating cells immunoexpress this antibody with varying intensity and location, depending on the phase of the cell cycle and the history of each individual cell . The quantification of Ki-67 immunoexpression is performed through the labeling index, defined as the percentage of positive cells in a cell population of known number . The PHH3 antibody is recognized as a biomarker of cell proliferation specific for cells in mitosis, identifying phosphorylated histone H3 by IHC . Histone H3 is one of the five types of histone proteins that form part of nucleosomes; its phosphorylation at serine 10 and serine 28 determines the compaction of chromatin during cell division, and from this event the cell is enabled to enter the M phase of the cell cycle . Histone H3 phosphorylation is initiated non-randomly in pericentromeric heterochromatin in late G2 phase and, as mitosis progresses, spreads throughout the chromosome, completing in late prophase and continuing through metaphase . In anaphase, the dephosphorylation of histone H3 begins, ending in early telophase . The PHH3 antibody is characterized by clear, well-contrasted immunostaining limited to cells in the M phase of the cell cycle, while interphase cells do not express it or do so minimally . Although it has been recognized as a prognostic marker in multiple types of cancer, it has scarcely been analyzed in OSCC and, in particular, its relationship with the survival rate has not been the subject of research until now . The main objective of this study was to evaluate the immunoexpression of PHH3 in OSCC through its correlation with the immunoexpression of Ki-67, the MAI, histological grading, clinical-morphological parameters, and the survival rate.
The study sample consisted of 62 cases of OSCC, corresponding mostly to incisional biopsies diagnosed in the Pathological Anatomy Laboratory of the University of the Republic (UdelaR) School of Dentistry in the period 2007-2015. The clinical-pathological records of each of the cases were reviewed, recording the variables corresponding to gender, age, topography, date of pathological diagnosis, and histopathological diagnosis. Survival was defined as the time elapsed between the initial histopathological diagnosis and death from cancer. The histopathological diagnosis of each of the cases was made according to the WHO classification (2017) for OSCC, as well, moderately, or poorly differentiated . Hematoxylin-eosin (H-E) and IHC histological slides were digitized with the Motic VM 3.0 Digital Slide Scanning System for image acquisition and processing. Motic VM 3.0 Motic Digital Slide Assistant software (version 1.0.7.46, Copyright Motic China Group Co., Ltd. 2017) was used for the analysis. Cell quantification was performed with the manual counting tool of the ImageJ software (1.52v, Wayne Rasband, National Institutes of Health, USA). For the calculation of the MAI, normal and atypical MF present in 1000 tumor cells were counted in the H-E slides. The identification of the MF was carried out according to the morphological characteristics described by Van Diest et al ., recognizing as atypical mitoses those that were multipolar, annular, or asymmetric, or that showed anaphase bridges . Immunohistochemical processing: from the blocks of tissue fixed in formalin and embedded in paraffin, two sections of 4 μm thickness were obtained. To unmask the antigenic epitopes, recovery was performed with sodium citrate solution (pH 6.2) (Borg Decloaker, RTU; Biocare Medical) in a microwave pressure cooker at maximum power for about 5 minutes. Endogenous peroxidases were blocked with 0.9 % hydrogen peroxide for 5 minutes. Tissue sections were incubated with the primary antibodies Ki-67 (monoclonal; 1:100 dilution, clone MIB-1, Dako, Glostrup, Denmark) and PHH3 (serine 10) (monoclonal; 1:100 dilution, Novus Biologicals, Bio-Techne) for about 45 minutes, and then with a biotinylated anti-mouse/anti-rabbit secondary antibody for 30 minutes. To visualize the products of the antigen-antibody reaction, 3,3'-diaminobenzidine/H2O2 (Dako Corporation, Carpinteria, CA, USA) was used, followed by counterstaining with Harris hematoxylin (Biopack, cod. 0832.08, Argentina). Breast carcinoma samples were used as a positive control for the IHC processing. Likewise, as a negative control, IHC processing was performed omitting the incubation step with the primary antibodies. IHC evaluation: cells positive for the Ki-67 antibody were considered to be those neoplastic epithelial cells that presented a brown nucleus, regardless of the intensity and pattern of staining . For each of the cases, the degree of positivity for Ki-67 was expressed through the labeling index, the percentage of positive cells among 1000 tumor cells. PHH3-positive cells were considered those whose nuclei had the following characteristics: a) intense and dense brown staining, b) absence of an intact nuclear membrane, and c) condensed chromatin with normal or atypical MF morphology (Fig. ). For the PHH3 antibody, nuclei stained brown, with a smooth, intact nuclear membrane and without conspicuous chromosomal condensation, were considered nonspecific and were not included in the immunoquantification (Fig. ).
As with Ki-67, the degree of positivity for PHH3 was expressed as a percentage of positive MF in 1000 tumor cells. Prior to the IHC evaluation, interobserver calibration was performed between two pathologists, who previously agreed on the morphological criteria necessary for the identification of MF and of cells positive for the biomarkers. The observers carried out their quantifications independently, without knowing each other's counts. The intraclass correlation coefficient (ICC) was used to calculate the degree of interobserver agreement .

Statistical analysis: when comparing the markers considered, descriptive statistics (mean and standard deviation) were used for continuous variables, and frequency distributions for categorical variables. To compare the expression of the markers according to sex, age, location, and histological grade, when two groups were involved, means were compared using Student's t-test for independent samples. Likewise, in cases where three or more groups were compared, the analysis of variance (ANOVA) model was used. In cases where a significant association was identified, multiple comparisons were carried out using the Tukey test. Additionally, scatter diagrams were made to investigate the degree of linear correlation between the three markers, calculating in each case the Pearson linear correlation coefficient. Survival analysis was carried out using Kaplan-Meier survival curves. The association between patient survival and the three markers considered was analyzed in a multivariate analysis through the Cox proportional hazards model, adjusting the results for sex, age, grouped location, and histological grade; the corresponding comparisons were made using the Cox model. All tests were carried out at a significance level of 5%.

- Descriptive analysis of the clinical-pathological variables of OSCC: of the 62 cases, 64.6% corresponded to men and 59.7% were older than 65 years; 43.5% were located in "other" sites (a term grouping cases located in the gingiva and alveolar ridge, with extensions to the retromolar trigone and cheek), 25.8% in the tongue, 21.0% in the palate, and 9.7% in the floor of the mouth. Regarding histological grading, 61.3% corresponded to moderately differentiated OSCC (Grade 2), 27.4% to well-differentiated OSCC (Grade 1), and only 1.3% were classified as poorly differentiated OSCC (Grade 3) .

- Association between clinical-pathological characteristics and the biomarkers Ki-67 and PHH3: a statistically significant relationship was found between the histological grade of OSCC and the immunoexpression of Ki-67 (p = 0.004) . Thus, in well-differentiated OSCC, lower expression of Ki-67 was observed (mean 28.97), rising in moderately differentiated OSCC (mean 41.27) and reaching the highest expression in poorly differentiated OSCC (mean 43.37). However, a similar relationship was not found for the PHH3 biomarker (p = 0.564). Neither could a statistically significant correlation be demonstrated between the biomarkers studied and the independent variables sex, age, and location .

- Expression patterns of Ki-67 and PHH3: when correlating the Ki-67 and PHH3 variables, a weak but significant relationship was found (p = 0.041) (Fig. ). In general, the degree of expression of the PHH3 antibody was markedly lower than that of Ki-67, with means of 1.34 (SD 0.62) and 38.14 (SD 14.44), respectively.
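As a brief aside, the survival analysis pipeline described in the statistical methods above (a Kaplan-Meier curve plus a covariate-adjusted Cox proportional hazards model) can be sketched in a few lines of Python with the lifelines library. This is our illustration with hypothetical, toy data and column names, not the authors' code, which used different software.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical per-patient table: follow-up time (years), death event (1/0), and markers.
df = pd.DataFrame({
    "time_years": [1.2, 0.8, 3.5, 2.1, 5.0, 0.5, 1.9, 4.2, 0.9, 2.7],
    "death":      [1,   1,   0,   1,   0,   1,   1,   0,   1,   1],
    "phh3":       [1.1, 2.3, 0.4, 1.8, 0.2, 3.0, 1.5, 0.3, 2.8, 1.0],
    "mai":        [8,   15,  3,   12,  2,   20,  10,  4,   18,  7],
    "ki67":       [35,  50,  20,  45,  15,  60,  40,  25,  55,  30],
})

# Overall Kaplan-Meier survival curve and its median survival time.
km = KaplanMeierFitter().fit(df["time_years"], event_observed=df["death"])
print(km.median_survival_time_)

# Multivariate Cox model for the three markers; a small penalizer stabilizes the
# fit on this tiny toy dataset.
cox = CoxPHFitter(penalizer=0.1).fit(df, duration_col="time_years", event_col="death")
cox.print_summary()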
Additionally, the PHH3 antibody not only presented a narrow range of immunostaining (range = 3.1; lower limit = 0.2; upper limit = 3.3), but its staining intensity was also strong and positively correlated with the morphology of the nuclei across the different stages of mitosis. In contrast, the Ki-67 antibody demonstrated a wide range of expression (range = 54.2; lower limit = 8.5; upper limit = 62.7) with variable staining patterns and intensity (Fig. ).

- Survival analysis: the event of death was observed in 48 patients, while 14 patients reached the end of the follow-up period. The survival curve was constructed using the Kaplan-Meier procedure, from which it was estimated that the median overall survival time was 1.51 years (0.92; 2.69) and the five-year survival rate was 27% (0.18; 0.41) (Fig. ). Likewise, the possible association between each variable (Ki-67, PHH3, and MAI) and patient survival was studied in both a bivariate and a multivariate manner. From this analysis, in its two forms, a significant association was observed between survival time and PHH3 expression (p = 0.016) (Fig. ). A significant and similar relationship was also observed with the MAI (p = 0.031) (Fig. ), confirming that the higher the MAI, as well as the PHH3 immunoexpression, the shorter the survival time. In contrast, for Ki-67, no statistically significant association was found (p = 0.295) (Fig. ). Table shows the results obtained. When analyzing the relationship between the biomarkers and the MAI, not enough evidence was found to indicate a significant correlation with Ki-67 (p = 0.194) (Fig. ). In contrast, based on the calculated correlation coefficient (r = 0.450) with a p-value < 0.001, a statistically significant association was observed between the MAI and PHH3 (Fig. ). Likewise, the PHH3 antibody allowed us to identify the positive MF and the fields with the highest mitotic density more easily, thus confirming its role as a specific marker of mitosis. In fact, as shown in Table , in 44 cases the number of mitoses identified by IHC for PHH3 was higher than that obtained based on the recognition of their morphological characteristics in H-E stained slides. The evaluation of cell proliferation in cancer is considered an important histological parameter for defining the biological behavior of the tumor and for determining an individualized prognosis for each patient . However, in OSCC, the evaluation of tumor proliferation is not part of the current staging system . Nevertheless, it constitutes a line of research, like so many others, justified in part by the poor survival rate of diagnosed patients, even in early stages of the disease . In fact, in this study, the median overall survival time was only 1.51 years (0.92; 2.69), and the five-year survival rate, without considering the stage of the disease, was 27% (0.18; 0.41). The low survival rate in our study could be partly explained by two factors. First, in Uruguay, OSCC is diagnosed at advanced stages, as established in the few publications available . Second, the sample selection was non-probabilistic, by convenience. The PHH3 antibody, recognized as a biomarker of cell proliferation specific for cells in mitosis, has been little studied in comparison with other biomarkers of cell proliferation . In contrast, one of the cell proliferation biomarkers that has been widely studied in OSCC, unlike PHH3, is Ki-67 .
However, studies on the value of the expression of this biomarker in determining the survival of patients with OSCC have shown contradictory results . In fact, in our study there was no statistically significant association between Ki-67 expression and patient survival (p = 0.295). Our findings are in line with the results obtained by Brockton et al . and Gonzales Moles et al . . However, it is important to note that recently published research, such as that by Gadbail et al . and Jing et al ., recognizes Ki-67 immunoquantification as a reliable prognostic factor in OSCC . These authors found a significant relationship between the degree of Ki-67 immunostaining and survival after studying Ki-67 expression in large cohorts of 217 and 298 OSCC cases, respectively . Unlike for Ki-67, in this study a significant relationship was demonstrated between the expression of PHH3 and the survival time of patients with OSCC (p = 0.016). Although this was previously established in invasive breast and urogenital cancer, we found no work in the literature that had previously studied this relationship in OSCC . A significant relationship was also observed between the expression of Ki-67 and PHH3 (p = 0.041); this was expected, since both are biomarkers of cell proliferation expressed by the fraction of cells actively passing through the cell cycle. This association was described in breast cancer by Kim et al . and in follicular lymphoma by Bedekovics et al .. In addition, as previously described, we were able to verify a significant relationship between the degree of histological differentiation of OSCC and Ki-67 immunoexpression (p = 0.004), which supports the potential usefulness of this biomarker in the histopathological classification of OSCC . Thus, the least differentiated OSCCs at the histological level are those with the highest number of Ki-67-positive cells . However, there was no similar relationship with the biomarker PHH3 (p = 0.564). When the immunostaining patterns of the studied biomarkers were compared, the Ki-67 slides showed nuclear immunostaining with a wide range of intensities, a factor that contributes to low reproducibility . On the other hand, PHH3 showed small variations in the intensity of expression, and since only those cells with a strongly brown-stained nucleus and normal or atypical MF morphology should be recognized as positive, it was in general terms easier to identify and quantify . It is important to specify that, as observed by other authors, in some cases the PHH3 biomarker was expressed by cells whose nucleus preserved an intact nuclear membrane and did not present MF morphology; the latter should not be considered in the quantification, since these are cells in G2 phase, where phosphorylation of histone H3 begins, and not cells in mitosis, which present complete phosphorylation of histone H3 (Fig. ). In addition, as previously described in follicular lymphoma and breast cancer, there was a statistically significant association between the MAI and PHH3 (p < 0.001), which was expected because the PHH3 antibody is a specific IHC biomarker of MF and because the same quantification criteria were used both in the H-E slides for the MAI and in those for IHC . The main limitations of the study are the sample size, the exclusion of the tumor invasion front as an evaluation parameter (because most of the biopsies were incisional), and the lack of TNM staging for the cases analyzed.
PHH3 is a biomarker of cell proliferation specific to cells in mitosis that, relative to Ki-67, has been underinvestigated in the literature. In this work, a significant relationship was demonstrated between the immunoexpression of PHH3 and the survival of patients with OSCC. A significant association of MAI with survival time was also observed. Regarding Ki-67, as previously described in the literature, a positive association was confirmed between the degree of histological differentiation of OSCC and the marker's positive immunoexpression. Based on the results obtained, it is important to continue investigating the PHH3 proliferation biomarker in OSCC, with a uniformly standardized IHC protocol and in a large cohort of cases.
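For readers who wish to reproduce this style of analysis, the following Python sketch illustrates the survival and correlation statistics reported above. It is not the authors' code: the file name, column names, and the median cut-point used to dichotomize PHH3 are assumptions, and the log-rank test stands in for the bivariate survival analysis described in the text.

```python
# A sketch (not the authors' code) of the survival and correlation
# statistics reported above; all inputs are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from scipy.stats import spearmanr

df = pd.read_csv("oscc_cases.csv")  # columns: time_years, death, phh3, ki67, mai

# Kaplan-Meier estimate of overall survival
km = KaplanMeierFitter()
km.fit(df["time_years"], event_observed=df["death"])
print("median OS (years):", km.median_survival_time_)
print("5-year survival:", km.survival_function_at_times(5.0).iloc[0])

# Log-rank comparison of survival between high and low PHH3 expression,
# dichotomized here at the median (the cut-point is an assumption)
high = df["phh3"] > df["phh3"].median()
result = logrank_test(df.loc[high, "time_years"], df.loc[~high, "time_years"],
                      event_observed_A=df.loc[high, "death"],
                      event_observed_B=df.loc[~high, "death"])
print("log-rank p (PHH3 high vs low):", result.p_value)

# Spearman correlation between the mitotic activity index (MAI) and PHH3
rho, p = spearmanr(df["mai"], df["phh3"])
print(f"MAI vs PHH3: r = {rho:.3f}, p = {p:.4g}")
```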
Coordination of prophage and global regulator leads to high enterotoxin production in staphylococcal food poisoning-associated lineage
e10b81ac-4d1b-4b19-b92a-d198c81deac1
10913437
Microbiology[mh]
Staphylococcal food poisoning (SFP) is a common foodborne disease worldwide. Basically, food poisoning can be classified under two types, namely, food intoxication and food infection, and SFP is classified under the former, similar to foodborne botulism and cereulide-type Bacillus cereus food poisoning ( ). The timeline of SFP occurrence is as follows ( , , ): first, food is contaminated by Staphylococcal enterotoxin (SE) gene-positive staphylococci. Thereafter, as the bacteria grow, they produce SEs in the food. Next, humans consume the SE-contaminated food. This leads to the development of symptoms, including emesis and nausea, within a short period, usually 0.5–6 h. The sources of such bacterial contamination include humans, animals, and the environment. Furthermore, the frequently encountered causative foods include meat products, dairy products, and cereals. The main etiological pathogen of SFP is Staphylococcus aureus. However, cases of SFP caused by another emerging Staphylococcus species, namely, S. argenteus, have also been reported ( , ). In 1914, Baber reported that S. aureus can cause food poisoning, and in 1930, Dack et al. showed that food poisoning is caused by enterotoxin ( ). Ever since Staphylococcal enterotoxin A (SEA) was first reported, more than 20 other types of SEs or SE-like toxins (SEls) have been reported ( , ). The SE/SEl family represents a group of toxins that share a similar molecular structure ( ). Among them, SEA is still given particular importance owing to its unique characteristics, the first of which is its toxicity. SEA shows stronger emetic activity in the Suncus murinus model than other classical SEs ( ). It also shows stronger emetic activity in the primate model compared to recently characterized SEs ( , , ). Second, the sea gene shows sustained expression in food ( , ); our previous study also revealed that the toxin is sufficiently produced in meat models ( ). Third, it is associated with epidemiological data. Some studies involving food handlers and nasal cavities, which are common contamination sources, have shown relatively low SEA positivity rates ( ). However, other studies have shown high positivity rates (>40%) for the sea gene in isolates from SFP outbreaks ( , ). These positivity rates were often found to be higher than those of other SEs/SEls. Thus, SEA is not only frequently detected but also plays an important role in SFP cases. Particularly, it has been detected in three large SFP outbreaks (over 1,000 cases in the United States in 1985, approximately 4,000 cases in Brazil in 1998, and over 13,000 cases in Japan in 2000) ( ). These epidemiological studies provide evidence that this toxin is the primary cause of Staphylococcal foodborne illnesses as well as of occasional large-scale outbreaks. For these three reasons, SEA, which was first identified more than 90 years ago, is still recognized as the most important toxin associated with SFP. SE/SEl genes are predominantly located on mobile genetic elements (MGEs) and immobilized former MGEs, such as prophages, S. aureus pathogenicity islands, enterotoxin gene clusters, and plasmids ( ). Given that MGEs can move from one cell to another, horizontal MGE transfer is recognized as an important event in the acquisition of toxin-producing ability and in bacterial pathogenicity. MGEs can also be transmitted across species borders, and this may contribute to SE production by Staphylococcus species other than S. aureus.
Although most SE/SEl genes are present in MGEs, the expression of several SE genes is mainly controlled by gene regulators encoded in the core genome of Staphylococcus ( , ). These global regulators, including Staphylococcal accessory regulator (Sar) family members, two-component systems, and sigma factors, affect SE/SEl gene expression. As described above, the regulatory mechanism is considered a key factor shaping the characteristics of SEA production; however, it currently remains unclear. Previous studies have shown that there are differences between the regulation of SEA and that of other SEs, indicating that SEA production is less endogenously regulated and more exogenously regulated by the bacteriophage (phage) effect ( ). Specifically, SEA is located on a lysogenized phage (prophage), and the amount produced differs according to the prophage type ( , ). In other words, some phages enhance SEA production during their life cycle ( , ). Additionally, higher SEA-producing prophages carry sea genes with two promoters: promoter 2, located within the phage late operon, and promoter 1, the original sea promoter, located immediately upstream of the sea gene. Even though promoter 1 is located in the phage genome, it is not related to the phage life cycle. In contrast, promoter 2 is associated with the phage life cycle and high SEA production. When DNA is damaged, the SOS response is initiated, resulting in a change in the life cycle of the phage from lysogenic to lytic. After entering the lytic cycle, the phage transcribes SEA mRNA together with its structural mRNA(s) in the late operon. This leads to enhanced SEA production in S. aureus lysogenized with a phage carrying promoter 2, and reportedly, this phenomenon involves Recombinase A (RecA) and the SOS response ( ). Previous studies have also suggested that the phage type has a major impact on SEA production ( ). However, our previous studies have pointed to alternative possibilities as well as the existence of other unknown factors. In recent SFP outbreaks in Japan, CC81 subtype-1, which produces a large amount of SEA, was identified as the dominant lineage ( ). Almost all isolates classified under this lineage carry the φSa3mw2 type prophage, which harbors sea promoter 2 ( ). This suggests that SEA production is highly dependent on the presence of the prophage. Conversely, we have also observed that CC81 subtype-1 isolates (No. 10, Nagasaki, and 01240) harboring the φSa3mw2 type prophage produce larger amounts of SEA than MW2, a reference strain harboring the same φSa3mw2 type prophage ( ). Therefore, based on these findings, in this study we hypothesized that an additional hidden mechanism enhances SE production and attempted to elucidate it. RecA affects SEA production, but not entirely Staphylococcal accessory regulator S (SarS) plays an important role in high SEA production SarS can bind the original promoter of the SEA gene Finally, we verified whether the SarS protein could bind the original sea promoter, promoter 1, and directly regulate SEA expression using a gel shift assay (electrophoretic mobility shift assay). Thus, we purified MW2-type recombinant SarS (rSarS) as shown in . Subsequently, we reacted this protein with the Cy3-labeled/non-labeled promoter 1 DNA fragment. The result obtained is shown in . The binding of rSarS to the promoter sequence resulted in an upward band shift. However, this shift was negated in the presence of abundant non-labeled DNA.
These findings suggested that rSarS (MW2) could directly bind to promoter 1 and repress SEA production. Our previous study showed that CC81 subtype-1 harbors a φSa3mw2 type prophage, which carries two sea promoters, namely, promoter 1 and promoter 2, and is integrated into the same φSa3 locus ( ). Thus, we first examined the effect of the phage on SEA production and the role of sea promoter 2 in this regard, using NaCl addition and RecA mutants. Reportedly, NaCl induces the phage and increases sea expression ( ). Accordingly, SEA production by CC81 subtype-1 with/without NaCl was assayed using enzyme-linked immunosorbent assay (ELISA). Fig. S1 shows that SEA production increased in the presence of NaCl in all four strains harboring the φSa3mw2 type phage. However, even in the presence of NaCl, three strains belonging to CC81 subtype-1 (No. 10, Nagasaki, and 01240) showed more SEA production than MW2, a reference strain harboring φSa3mw2 that is classified under another CC, i.e., CC1. Next, we verified whether promoter 2 of sea and RecA, which is involved in the SOS response, also influence SEA production in CC81 subtype-1, similar to that observed in the previous study ( ). Thus, we constructed recA knockout, vector control, and recA-complemented mutants from strain No. 10 and examined their growth curves. We observed that these mutants grew similarly to their parental strain (Fig. S2). We also assayed SEA productivity via ELISA. The results obtained are shown in , from which it is evident that recA deletion (No. 10ΔrecA) significantly decreased SEA production (p = 0.003) in the medium, while its complementation (No. 10ΔrecA/pKAT::recA) recovered the decrease in SEA production, but not significantly (p = 0.09; ). This phenomenon was more pronounced in the meat model ( ). Furthermore, in this model, significant differences were observed for both the deficient (No. 10ΔrecA) and complemented (No. 10ΔrecA/pKAT::recA) strains. However, SEA production by the recA deletion mutant in both the medium and meat models was still higher than that observed for wild-type MW2 ( , right lane). These findings suggested that the phage and sea promoter 2 may partially influence SEA production, as previously described ( ). However, the possibility that other factor(s) and promoter 1 may also influence SEA production, especially in this lineage, could not be ruled out. Next, we compared the genome of No. 10 with that of MW2 to identify differences in the genetic background responsible for high SEA productivity. Several sequence differences were observed. Specifically, the observed polymorphisms included the coding sequences of Protein A, SDR cell wall protein, alcohol dehydrogenases, and the toxin/anti-toxin system (yoeB/yefM), as briefly mentioned in our previous study ( ). Of these, we focused on the sarS (also known as sarH1) gene, given that it is a global regulator and is reportedly associated with toxin production in S. aureus ( ). Its nucleic acid and amino acid alignments are shown in Fig. S3 and S4, respectively. From these figures, it was evident that the C-terminal side of the SarS protein of No. 10, which is part of the helix-turn-helix DNA-binding domain, carries a 27-amino-acid deletion. A single nonsense nucleotide polymorphism (c.670C>T, p.Q224*) was also identified in the second domain of SarS.
Subsequently, we constructed an allelic replacement strain with MW2-type SarS and compared its SEA productivity with those of the wild-type, SarS deletion, SarS (MW2)-replaced, and complemented strains. While the vector control and complemented strains showed relatively slow growth, the other strains showed growth patterns similar to that of the parental strain (Fig. S2). Furthermore, our results indicated that the toxin productivity of the sarS deletion mutant of No. 10 (No. 10ΔsarS) was not significantly different from that of the parental strain in the medium ( , p = 0.14). However, the SEA productivities of the vector-based SarS (MW2)-complemented mutant (No. 10ΔsarS/pWH1520::sarS) and of the mutant with chromosomally replaced MW2-type SarS (No. 10 SarS(MW2)) were significantly decreased in the medium compared with those of their counterparts (p = 0.04 and p = 0.006, respectively). Moreover, the SEA productivity of the SarS allele-replaced mutant was still higher than that of MW2 (p = 0.0003). A similar trend was observed in the meat model ( ). Furthermore, complementation with SarS (MW2) and SarS (MW2) allelic replacement significantly reduced SEA production compared with the SEA productivities of their counterparts (p = 0.01 and p = 0.002). However, the SarS deletion mutant showed significantly reduced SEA production in the meat model compared to the wild-type strain (p = 0.01), while no significant differences were observed in the medium. Additionally, the allelic replacement mutant showed higher SEA productivity than MW2 in this food model (p = 0.0005). These findings suggested that the SarS regulator is predominantly, but not entirely, responsible for the high toxin production observed in strain No. 10. Next, we examined other strains to verify the universality of this phenomenon. Specifically, the sarS sequences of the other 22 CC81 subtype-1 strains used in a previous study ( ) were determined. No sarS gene truncation was observed except in No. 10, and the others had the same amino acid sequence as MW2. Even though our results showed that SarS was intact in these strains, previous results showing high toxin production ( ) led us to consider the existence of another mechanism by which SarS influences SEA production. It is well known that SarS acts as an important activator of Staphylococcal protein A (Spa). Therefore, we performed western blot analysis for Spa and qPCR for sarS. Our growth curve analysis results indicated similar growth rates for all the examined strains (Fig. S2). Furthermore, as shown in Fig. S5, Spa production in the wild-type strain (No. 10) was lower than that in the chromosomal allelic replacement mutant, No. 10 SarS (MW2), used in the above experiments (Lanes 1 and 2). The increase in Spa production following sarS allelic replacement indicated that the truncated SarS in strain No. 10 had become partly dysfunctional. Furthermore, all the wild-type CC81 subtype-1 strains (lanes 1, 3, and 4) showed lower Spa expression than the non-CC81 strains (lanes 5 and 6). The results of qPCR ( ) also showed good agreement with the western blot data. SarS expression in the wild-type strain (No. 10) was lower than that in the allelic replacement strain [No. 10 SarS (MW2)]. Additionally, SarS mRNA was less abundant in CC81 subtype-1 strains than in MW2 in the early phase. Strain No. 10 harbors an abnormally low-activity protein, whereas strains 01240 and Nagasaki possess normal proteins. In both cases, SarS expression itself was reduced in the early log phase.
This low expression level lasted for at least 12 h in broth culture. In the case of MW2, SarS expression was significantly reduced after 12 h, suggesting that low SarS activity in the early growth phase of CC81 subtype-1 may be important for high SEA production in this group. Research progress in the field of SEs varies depending on the subfield. In the field of medicine, there has been significant advancement in the study of their superantigenic activity and molecular mechanisms ( ). However, in the field of food hygiene, research progress has been limited. Several studies have demonstrated efficient SEA production, and it has been suggested that this phenomenon can be attributed to phages ( , , , ). However, it is still unclear whether this is universal for all SEA-producing strains or whether other factors may be involved in the process. In this study, we attempted to elucidate the factors that induce high SEA production in the dominant SFP clone in Japan, CC81 subtype-1. Based on our observations, the hypothetical schema shown in was proposed. Specifically, we hypothesized that the low activity of SarS, accompanied by the involvement of the phage life cycle, is responsible for high SEA production. In other words, the coordination of a prophage and an endogenous regulator resulted in unusually high toxin production, at least in this SFP-associated clone. Furthermore, the regulation of virulence factors plays an important role in the pathogenicity of S. aureus, and it is well known that some SEs/SEls are controlled in a manner similar to that of other toxin types. This toxin family, which includes TSST-1, is mainly regulated by Sar family proteins, two-component systems, and sigma factors ( , ). However, unlike other SEs/SEls, SEA is exceptionally free from the control of most regulatory factors. A typical example is its independence from the Agr-Rot system ( ). Agr is a quorum-sensing two-component system in S. aureus whose locus gives rise to the RNAII and RNAIII transcripts. Once bacterial cell density reaches a threshold value, Agr alters gene expression. Furthermore, Rot is a Sar family protein that binds to the promoter region of toxin genes and inhibits their transcription. At high cell densities, transcription of RNAIII, which inhibits Rot translation, is initiated, resulting in the loss of Rot from the promoter region and the initiation of toxin gene transcription ( , ). In contrast, surface proteins such as Spa show high expression during the early stages of growth, but their expression is subsequently suppressed with the upregulation of Agr ( ). Among SEs/SEls, SEB, SEC, and SED are Agr-dependent toxins, whereas SEA, SEH, and SElJ are Agr-independent ( , , , ). The results of our previous study also showed that Rot has a reverse function and enhances SEH production at the transcriptional level ( ); however, it has no effect on SEA production. Additionally, SEA is controlled by neither SaeR nor SigB ( ). However, utilizing SrrAB, S. aureus is known to sense oxygen levels in its vicinity and regulate the expression of virulence genes ( ). An influence of oxygen has been reported even for SEs ( ), and toxin production is observed under both aerobic and anaerobic conditions, although the optimal condition is aerobic. Therefore, the influence of SrrAB on SEA production is deemed considerable and cannot be overlooked.
SEA production is instead affected by phage type and life cycle, and the principle is as follows ( , , ): the SOS response mediated by RecA, which is triggered by DNA damage, activates dormant prophages on the host chromosome, leading to daughter phage production accompanied by toxin protein synthesis, similar to Shiga toxin production in enterohemorrhagic Escherichia coli ( ). This phage-associated phenomenon has been observed, to a limited extent, in some sea-carrying prophages ( , ). Previous studies have also shown that the addition of NaCl, the lowering of pH, and the presence of mitomycin C induce sea gene expression ( , ). In the present study, a similar trend of increased SEA production was observed following NaCl addition (Fig. S1; ). However, a previous study showed that the addition of NaCl enhances mRNA expression but attenuates the amount of toxin protein ( ). Reportedly, variations among SEA-producing strains affect the amount of SEA produced under stress conditions ( ), and we possibly observed a similar effect of strain differences in this study. This suggests that CC81 subtype-1 may possess the ability to produce more SEA under stress conditions. Additionally, the results of this study revealed that recA loss resulted in decreased SEA production ( ), suggesting that a similar mechanism associated with the phage cycle exists in this lineage and prophage. However, strain No. 10 with the recA knockout still showed higher SEA production than MW2, which carries the same type of phage, indicating that the high production of strain No. 10 is not simply explained by RecA overexpression leading to a stronger SOS response. Induction by NaCl addition was also not considered to be more intense than that in MW2 (Fig. S1): MW2 showed an approximately twofold increase upon induction, whereas the remaining three CC81 subtype-1 strains showed a lesser increase. In the experiments with the RecA mutant strains, there were two inconsistencies between the medium and food models ( ). First, the recA complement did not fully restore SEA production in the medium. Reportedly, in Bacteroides, plasmid complementation of a recA-deficient strain does not completely restore the phenotype ( ). A similar phenomenon possibly occurred in our experiments. Second, the effect of RecA was more significant in the food model. Stressors such as low pH and high osmotic pressure, which are characteristic of foods, induce an SOS response. Furthermore, the absence of RecA, which plays a major role in the stress response ( ), may adversely affect growth or survival in food, resulting in a significant decrease in SEA production by the RecA-deficient strain. As mentioned above, there are strong indications of the involvement of SrrAB based on existing research reports, yet the involvement of other global regulators in the mechanism of SEA production has not been reported to date. However, this study demonstrated that SarS inhibits SEA production. Therefore, this is the first report on the effect of the coordination of a global regulator and a phage on SEA, an Agr-independent and quorum-sensing-independent toxin whose production begins early during bacterial growth ( ). Previous studies have shown that SarS expression in MW2 cells is high during the early culturing period and low during the late culturing period; this is consistent with the results of the present study ( ) ( ). Furthermore, consistent with this result, Spa expression, which is observed during the early phase as mentioned above, was also high ( ; Fig.
S5). In the case of MW2, a high level of SarS expression during the early culturing stage was expected to suppress SEA transcription and enhance Spa transcription, resulting in a lower level of SEA production and a higher level of Spa production during periods of low bacterial numbers. However, CC81 subtype-1 showed low SarS expression during the early stages, suggesting that this lineage possibly produces more SEA and less Spa during the early stage of bacterial proliferation. This may be advantageous for the early accumulation of toxins in food. Similar to recA, the sarS mutant strain also showed differences in SEA production between the food model and the culture medium. As shown in , the loss of the truncated SarS did not affect SEA production in the medium but affected its production in food. This may also be attributed to stress response failure, as with the RecA deficiency mentioned above. Our previous study revealed a slight increase in SEH production with SarS deletion, suggesting that this mutant SarS is somewhat active ( ). Moreover, one of the Sar family proteins, MgrA, is known to be involved in stress tolerance and directly or indirectly interacts with other regulatory factors, including SarS, in the stress response ( , , ). Therefore, the SFP clone may have retained reduced, rather than abolished, SarS activity owing to its role in stress tolerance in food. However, this subject is beyond the scope of this study and requires further investigation in future studies. In addition, for strains within this lineage other than No. 10, we only examined the correlation of SarS with SEA and Spa production. Therefore, further research is needed to establish a causal relationship between SarS and SEA production, including stress responses. However, this study at least demonstrates an inverse relationship between SarS and SEA production. It is estimated that the amount of SEA required to cause vomiting in humans is 100–200 ng ( ), and the rapid production of SEA to amounts within this range is the most important issue in food poisoning outbreaks. This suggests that, specifically in the context of food poisoning, the CC81 subtype-1 clone has evolved by suppressing SarS expression, although not completely, and the associated increase in SEA production results in rapid SEA accumulation in foods. Both changes in the core genome and MGE acquisition are involved in S. aureus evolution ( ). Regarding S. aureus-related infectious diseases, it is well known that some lineages with specific genetic backgrounds can spread across countries ( , ). This study primarily showed that the same may be true for the genesis of SFP clones. In CC81 subtype-1, not only the acquisition of MGEs, including those encoding SEA and SEH, but also alterations in endogenous regulators, such as Rot and SarS, possibly led to its evolution into a group more strongly associated with foodborne disease. Moreover, strains positive for both SEA and SEH have been isolated from food poisoning cases in South Korea ( ), highlighting the possibility that closely related lineages may spread in East Asia. Emerging S. aureus infection clones have also been reported in recent years, probably owing to genomic changes ( , ). Similarly, in SFP, there exists the possibility that new SFP clones originate from S. aureus and other staphylococci in the same manner. Ongoing molecular epidemiological surveillance will continue to be important to capture such genetic changes. Additionally, a deeper understanding of toxin production regulation and the survival strategies of toxigenic S. aureus in foods is also necessary.
Bacterial strains and culture conditions Growth curve Confirmation of mutation and domain analysis Enzyme-linked immunosorbent assay Genetic manipulation RT-qPCR Protein A detection Electrophoretic mobility shift assay Statistical analysis A paired t-test was performed to determine statistically significant differences between groups; p < 0.05 was considered statistically significant. All the bacterial strains used in this study are listed in . In brief, S. aureus strains were cultured in brain heart infusion (BHI) broth, BHI agar, tryptic soy broth, and tryptic soy agar. E. coli strains were cultured in Luria-Bertani broth (1 L containing 10, 10, and 5 g of NaCl, trypticase peptone, and yeast extract, respectively). When necessary, ampicillin (final concentration, 100 µg/mL), chloramphenicol (final concentration, 10 µg/mL), tetracycline (final concentration, 5 µg/mL), D-xylose (final concentration, 1%, wt/vol), yeast extract (final concentration, 1%, wt/vol), and NaCl (final concentration, 3%) were added. All the materials were purchased from Becton Dickinson (Franklin Lakes, NJ, USA) or Wako Pure Chemical Industries (Osaka, Japan). The growth curve method, which has been previously described ( ), was used with some modifications. Specifically, the optical density of the culture medium (OD660) was measured using a spectrophotometer (SPECTRONIC 200; Thermo Scientific, Wilmington, DE, USA). BHI medium containing 1% yeast extract was used as the culture medium. All test tubes were incubated at 37°C with constant agitation. Thereafter, temporal sampling (3 h, 6 h, and 12 h) was performed. The cultured media were diluted 5- to 10-fold with fresh media, and measurements were performed using the SPECTRONIC 200. A genomic comparison between No. 10 and MW2 was conducted in our previous study ( ), and among the identified differences, the sarS mutation was the focus of this study. We performed PCR and Sanger sequencing analyses using the primers listed in Table S1 to confirm the presence of the sarS nonsense mutation. The QuickTaq HS kit (Toyobo, Osaka, Japan) and the BigDye Terminator version 3.1 cycle sequencing kit (Applied Biosystems, Foster City, CA, USA) were used for PCR and sequencing, respectively. The CC81 subtype-1 strains used in our previous study were used for sarS sequencing ( ). Furthermore, we performed a domain search using InterPro software ( ). ELISA samples from the medium (BHI with 1% yeast extract) and salted ham (a meat model using a commercial meat product; Nippon Meat Packers, Inc., Osaka, Japan) were prepared as previously described ( ), with some slight modifications to the meat model. Approximately 4.5 g of salted ham was used as the meat model. After bacterial culturing, SEA on the meat samples was collected by washing with 1 mL of Dulbecco's phosphate-buffered saline (pH 7.4) containing 0.1% (wt/vol) bovine serum albumin (Sigma, St. Louis, MO, USA). For pWH1520 and pWH1520::sarS, 1% xylose (wt/wt) was absorbed into the meat prior to bacterial culturing. The incubation lasted for 24 h, after which SEA ELISA was performed as previously described ( ). We prepared several mutants of strain No. 10, as shown in . Notably, allelic replacement and gene complementation were performed as previously described ( ). The plasmids and primers used in this study are listed in ; Table S1.
Furthermore, vector DNA for transformation was extracted from RN4220 using the manual alkaline SDS method or from BL21 using the FastGene Plasmid Mini Kit (Nippon Genetics Co., Ltd., Tokyo, Japan). Thereafter, electroporation was performed using the ELEPO21 system (NEPA GENE, Chiba, Japan) or the BTX ECM830 system (BTX Harvard Apparatus, Inc., Holliston, MA, USA). BHI containing 1% yeast extract was used as the bacterial culture medium. After the inoculation of 1/100 vol of the overnight pre-culture media (100 µL) into 10 mL of fresh media in a test tube, bacterial culture was performed at 37°C with agitation. Temporal sampling (at 3 h and 12 h) was performed. Thereafter, RNA extraction, reverse transcription, qPCR, and RT-qPCR quality checks were performed as previously described ( ). The primers used are listed in Table S1. The annealing temperatures for gyrB and sarS were 62 and 60°C, respectively. Bacterial cultures and western blot analysis were performed as previously described ( ). In brief, after the inoculation of 1/100 vol of the overnight pre-culture media (40 µL) into 4 mL of fresh media in a test tube, bacterial culture was performed for 24 h at 37°C with constant agitation. Thereafter, centrifugation (10,000 × g, 4°C, 5 min) was performed, and the supernatants were collected. SDS-PAGE was then performed using a 10% gel. To block membranes, 5% skim milk (wt/vol, Morinaga Milk Industry Co., Ltd., Tokyo, Japan) in Dulbecco's phosphate-buffered saline containing 0.05% (vol/vol) Tween 20 (PBST; Sigma-Aldrich, St. Louis, MO, USA) was used. Furthermore, normal human IgG (final concentration, 2.5 µg/mL; LLC-Cappel Products, Irvine, CA, USA) and goat anti-human IgG (1/2,000 dilution, LLC-Cappel Products) were used as the primary and secondary antibodies, respectively. All washing and antibody reaction steps were performed using PBST as mentioned above. Recombinant SarS protein (rSarS) was prepared as previously described, with slight modifications ( ). The materials used are listed in Tables 1 and 2. In brief, we used pET24b instead of pET14b and cloned sarS from JP018 (SRA accession number, DRR257760), which carries the same SarS as MW2. This vector was then transformed into BL21(DE3)/pLysS cells, and the recombinant protein was expressed using IPTG [isopropyl-β-D-thiogalactopyranoside (Wako, Osaka, Japan)]. Thereafter, the protein was purified via His-tag purification using TALON Metal Affinity Resin (Takara, Otsu, Japan). We constructed a Cy3-conjugated sea promoter DNA probe and performed an electrophoretic mobility shift assay (EMSA). The DNA probe was constructed via PCR and TA-cloning as described previously ( ). The primers used are listed in Table S1. EMSA was also performed as previously described, with some slight modifications ( ). In brief, rSarS was incubated in reaction buffer (pH 7.5) containing (final concentrations) 20 mM Tris, 50 mM NaCl, 5% glycerol, 1 mM EDTA, 1 mM dithiothreitol, and 1 µg poly(dI-dC). After incubation for 15 min at room temperature without DNA, labeled and unlabeled DNA were added to the reaction buffer, and the mixture was then incubated for 20 min at room temperature. Subsequently, the samples were subjected to electrophoresis performed on ice using 0.5× TBE and a 5% native acrylamide gel.
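As an illustration of the quantitative analyses described in this Methods section, the sketch below shows a paired t-test on SEA ELISA readings and a relative sarS expression calculation normalized to gyrB. All numbers are hypothetical, and the 2^-ΔΔCt model is a common convention for RT-qPCR quantification rather than something the methods state explicitly.

```python
# A sketch (not the authors' code) of the statistics described above;
# ELISA values and Ct values are hypothetical.
import numpy as np
from scipy.stats import ttest_rel

# Paired t-test on hypothetical SEA ELISA readings (ng/mL) from paired runs
sea_parent = np.array([105.0, 98.5, 112.3, 101.7])
sea_recA_ko = np.array([61.2, 55.8, 70.4, 58.9])
t_stat, p_value = ttest_rel(sea_parent, sea_recA_ko)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene (e.g., sarS) normalized to a reference
    gene (e.g., gyrB), relative to a calibrator sample (e.g., MW2),
    using the 2^-ddCt model (an assumed quantification scheme)."""
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values for strain No. 10 vs the MW2 calibrator at 3 h
fold = relative_expression(ct_target=26.1, ct_ref=18.0,
                           ct_target_cal=23.4, ct_ref_cal=18.2)
print(f"relative sarS expression (No. 10 / MW2): {fold:.2f}")
```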
Molecular correlates for HPV-negative head and neck cancer engraftment prognosticate patient outcomes
6ac34511-6fcc-4ad7-80a8-72e07a581f10
11686353
Biochemistry[mh]
Outcomes for patients with HPV-negative head and neck squamous cell carcinoma (HNSCC) remain relatively poor, with 5-year overall survival (OS) rates of ~50% , . Treatment for HPV-negative HNSCC patients depends on clinical presentation, with the principal options including surgical resection and radiotherapy with or without chemotherapy. Nodal involvement is currently one of the most important prognostic features for treatment selection. For patients with small tumors and involvement of ≤ 1 node, treatment with a single modality has been effective – however, there is still a subset of patients who experience tumor recurrence and eventually succumb to the disease . In patients undergoing surgery for more advanced tumors, postoperative radiotherapy, with or without concurrent chemotherapy, can improve outcomes. While combination therapies can improve the survival of patients, HNSCC survivors often suffer from major complications and long-term reduction in quality of life due to the pervasive impact treatment can have on daily life, leading to a high suicide rate (63.4 cases per 100,000 individuals) among survivors . Given these considerable consequences of over- or under-treatment, HNSCC care could benefit greatly from additional strategies to assist risk-tailored clinical decision-making. Patient-derived xenograft (PDX) models are well suited to tumor biology and biomarker studies, as they recapitulate the originating tumor in its molecular profile, histopathological features, and therapeutic sensitivity – . Poor patient outcomes are associated with successful engraftment of surgically resected HNSCC tumors into mice, akin to other cancers , . Determining engraftment capacity requires extensive expertise, facilities, and time, which could limit its utility for clinical decision-making. A molecular biomarker for engraftment could eliminate the need for xenografts. Here, we integrate mass spectrometry (MS)-based proteomics with transcriptomics to investigate whether the molecular correlates of engraftment recapitulate the relationship to clinical outcomes in 88 HPV-negative HNSCC tumors. From our molecular profiling, we identified LAMC2 and TGM3 as candidate prognostic biomarkers. We validated that LAMC2 and TGM3 protein expression correlates with clinical outcomes in an independent cohort of 404 HNSCC patients using immunohistochemistry. Critically, our findings show that LAMC2 and TGM3 can significantly stratify clinical outcomes beyond nodal status alone. These findings further demonstrate engraftment to be of clinical significance for HNSCC patients and show that the molecular correlates of engraftment represent a resource from which biomarkers of clinical outcomes can be identified. Clinical outcomes of HPV-negative HNSCC patients are associated with engraftment Engraftment is associated with diminished immune signaling and partial EMT Engraftment is associated with a fibroblast and epithelial cell signaling network Engraftment-associated proteins LAMC2 and TGM3 stratify clinical outcomes Combining LAMC2 and TGM3 stratifies patients without nodal involvement LAMC2 and TGM3 pathways correlate with clinical outcomes and engraftment We subcutaneously implanted 273 surgically resected HPV-negative HNSCC specimens into NOD/SCID/IL2Rg-/- (NSG) mice, which resulted in 180 successful engraftments (Fig. ).
Compared with our prior study , this expanded cohort contains 30 additional patient samples and updated records for 44 patients; disease-specific survival (DSS) for the entire cohort was 71% at 3 years, and the median follow-up was 27.4 months. Consistent with our prior findings, we observed that engraftment is associated with significantly worse clinical outcomes (Fig. , Supplementary Data Table ). Of the recorded clinical covariates, engraftment was significantly correlated with N category, surgical margin, and nodal extracapsular extension (ECE) (chi-square p-values of 0.022, 0.037, and 0.038, respectively) (Table ). As expected, higher N category was significantly associated with worse clinical outcomes (Supplementary Fig. ). To interrogate the relationship between engraftment and N category with respect to clinical outcomes, we first binned N category into two groups representative of their clinical outcomes and management: lower risk (Nlow: N0, N1) and higher risk (Nhigh: N2, N3) (Fig. and Supplementary Fig. ). Patients who had high-risk tumors that engrafted (E-Nhigh) had worse clinical outcomes than those with high-risk tumors that did not engraft (N-Nhigh), suggesting that engraftment and nodal status independently predict clinical outcomes (Fig. and Supplementary Data Table ). Nhigh patients whose tumors engrafted (E-Nhigh) had significantly worse clinical outcomes compared to every other subset (i.e., N-Nhigh, E-Nlow, and N-Nlow) for each clinical outcome assessed, suggesting that engraftment provides additional risk stratification beyond assessing N category alone (Fig. and Supplementary Fig. ). This observation is striking given the strong association between N category and HNSCC survival and reinforces the utility of considering engraftment for clinical risk stratification. To explore the relationship between engraftment and clinical outcomes, we profiled a subset of our surgically resected, primary HPV-negative HNSCC tumor samples (n = 88) using mass spectrometry-based proteomics and RNA sequencing (Fig. and Supplementary Data Table ). There were no significant differences in the clinical covariates between the subset of the cohort for which we acquired molecular data and the complete cohort (Table and Fig. ). Proteomic analysis detected a total of 9721 proteins. Median transcript and protein abundances were moderately positively correlated (Spearman's correlation of 0.31, p-value < 2.2 × 10−16) (Fig. ). The RNA and protein fold changes between engrafters and non-engrafters were better correlated than the abundance measurements (Spearman's correlation of 0.42, p-value < 2.2 × 10−16) (Fig. ). Samples did not cluster notably by any clinical covariate (Supplementary Fig. ). Pathway analysis of the proteomic data revealed that engraftment is positively associated with various developmental and morphogenic programs and with terms related to extracellular matrix reorganization and adhesion (Fig. and Supplementary Data Table ). Cell type deconvolution of bulk transcriptomic data revealed increased immune cell content in non-engrafting tumors and increased fibroblast content in engrafting tumors, although the differences were not statistically significant (Supplementary Data Table and Supplementary Fig. ). Most pathways enriched in non-engrafters were immune cell processes; correspondingly, most (75%) proteins associated with immune system processes were higher in non-engrafters (Fig. and Supplementary Fig. ).
Markers of natural killer cells were the most strongly associated with non-engrafters (Supplementary Fig. ), consistent with reports that infiltration of NK cells is associated with favorable prognosis , . Engraftment-related fold changes were very weakly correlated with malignant transformation-associated fold changes (i.e., tumor/normal adjacent tissue) from Huang et al. (Spearman's ρ of 0.06 and 0.18 for proteomic and transcriptomic data, respectively) (Supplementary Fig. ) . Overall, these analyses suggest that the capacity of a tumor to form a xenograft involves diverse pathways that are distinct from malignant transformation and are partially associated with the modulation of multiple cell type-specific programs. Next, we investigated the relationship between engraftment and established HPV-negative HNSCC subtypes to determine whether the features of the engraftment phenotype were distinct from previously reported molecular profiles , . While our cohort contains samples of each HPV-negative bulk molecular HNSCC subtype (Supplementary Fig. ), we detected no correlation between engraftment and subtype (Supplementary Fig. ). Alternatively, Puram et al. recently identified six distinct expression profiles for malignant HNSCC cells by applying single-cell RNA sequencing (scRNA-Seq) to dissect intra-tumoral heterogeneity . Of these six expression profiles, only partial epithelial-mesenchymal transition (p-EMT) had significantly different scores between engrafters and non-engrafters (two-sided Wilcoxon rank-sum unadjusted p-value of 2.2 × 10−5) (Fig. ). Among the most differentially expressed p-EMT proteins were secreted structural and enzymatic components of the ECM, consistent with the pathway analysis in Fig. , . To corroborate that the p-EMT signature is indeed reflective of an epithelial cancer cell state, we performed transcriptomic analysis on paired HNSCC PDX-patient tumors (n = 41). Even with the stromal depletion characteristic of PDX models, cancer cells robustly expressed p-EMT signatures, supporting that expression is driven by differences in epithelial cells and is not an artifact of stromal contamination (Supplementary Fig. ). Altogether, these results indicate that engraftment is independent of HNSCC subtype, but is instead associated with an intratumoral pattern of p-EMT, characterized by co-expression of epithelial and mesenchymal genes. As fibroblasts are the most abundant stromal cell type in our HNSCC samples (Supplementary Fig. and Supplementary Data Table ) and have an established role in the proliferation and remodeling of HNSCC – , we sought to investigate engraftment through the framework of fibroblast-to-epithelial cell signaling networks. To assign the expression of the proteins in our bulk data to a cell type, we integrated scRNA-Seq of primary tumors with proteomic analysis of seven oral squamous epithelial cell lines and eight patient-derived fibroblast cultures (Fig. , Supplementary Fig. , and Supplementary Data Table ). Cell-type assignments were mostly (86%) concordant for genes detected in both datasets (Supplementary Fig. ). Compared to epithelial-assigned proteins, fibroblast-assigned proteins had significantly higher engraftment fold changes (Fig. ). While cell-cell communication with fibroblasts could be important for priming epithelial cells for engraftment, it may not be as consequential for nodal involvement (Supplementary Fig. ).
These findings suggest that, although human stroma is eventually replaced by mouse stroma during xenograft growth , consistent with our analysis of PDX tumors (Supplementary Fig. ), the extent of interactions between fibroblasts and epithelial cells at the time of implantation may support the capacity of the epithelial cells to engraft. We investigated putative interactions for the subset of proteins that both had higher expression in engrafters and could be assigned a cell-type-specific expression (Fig. ). By integrating these different analyses, we uncovered a network of potential engraftment-associated epithelial and fibroblast interactions (Fig. ). The majority (62%) of the interactions in this network are ligand-ligand interactions of ECM-associated proteins (e.g., laminins, collagens), consistent with the finding that ECM reorganization is enriched in engrafters (Fig. and Supplementary Fig. ). Most (53%) of the interactions are between, not within, cell types (Fig. and Supplementary Fig. ). Taken together, these observations suggest that epithelial cells and fibroblasts both contribute to the engraftment-enriched ECM. Interestingly, six of the seven (85.7%) epithelial-assigned proteins in this network are p-EMT proteins (Supplementary Fig. ), revealing a possible relationship between fibroblasts and the epithelial p-EMT state, consistent with the reported localization of p-EMT expression at the leading edges of human tumors (i.e., proximal to stroma) , . Indeed, a reanalysis of the spatial transcriptomic data from Arora et al. reveals that, with few exceptions, p-EMT genes are enriched in leading edge (LE) compared to tumor core (TC) epithelial cells (Supplementary Fig. ). In fact, p-EMT increases stepwise from the TC, through the transitory region, to the LE (Supplementary Fig. ). We profiled how gene expression changes in HNSCC cell lines during co-culture with patient-derived cancer-associated fibroblasts (CAFs) and found that expression of p-EMT genes increases (Supplementary Fig. ). This provides experimental evidence that expression of p-EMT at the leading edge could be due to enhanced crosstalk between CAFs and epithelial cells. These findings suggest that engraftment capacity may be driven by interactions with the tumor microenvironment, and they provide a putative network of engraftment-associated interactions between cancer cells and CAFs. We next investigated whether there were molecular features associated with both engraftment and clinical outcome, in order to identify biomarkers that could provide prognostic value. The hazard ratios for each clinical outcome were positively correlated with engraftment fold changes (Spearman's ρ ranging from 0.26–0.37 and 0.35–0.49 for protein and RNA, respectively) (Supplementary Fig. ). We filtered our dataset based on engraftment fold changes and hazard ratios to identify candidate biomarkers associated with clinical outcomes (Fig. ). We then ranked candidates according to median protein and mRNA expression, DSS hazard ratio from TCGA, and tumor/normal fold changes, evaluating markers for better and worse clinical outcomes separately (Fig. and Supplementary Fig. ). The top-ranking candidate biomarkers were LAMC2 and TGM3, for which increased expression was associated with worse and better clinical outcomes, respectively. We investigated whether expression of LAMC2 and TGM3 was associated with survival in an independent cohort of HNSCC patients (n = 404; 3-year DSS of 76.7%, with a median follow-up of 58.3 months) using immunohistochemistry (IHC) (Fig. ).
Each tumor core was scored for the frequency of positive staining among cancer cells and for the intensity of staining, and a third score was generated by multiplying these two together. For TGM3, we noted two distinct staining patterns: one in which the staining was very similar to that of the suprabasal layers of the normal adjacent squamous epithelium (intense in both the nucleus and cytoplasm), and one in which staining was only cytoplasmic, less intense, and in some cases speckled. We therefore added an additional scoring category for TGM3, in which we scored the frequency of tumor cells with a staining pattern similar to that seen in normal epithelia, which we refer to as a “normal-like” pattern. Of the scored IHC features, LAMC2 intensity and TGM3 normal-like staining had significant associations with clinical outcomes (Supplementary Fig. , example images in Fig. ). Staining patterns in matched PDX tumors resembled those of tumors directly from patients (Supplementary Fig. ). Higher intensity of LAMC2 staining was significantly associated with worse clinical outcomes for each measure assessed, in accordance with our expectations from the candidate selection criteria (Supplementary Fig. , Kaplan-Meier survival curve for DSS shown in Fig. ). Strikingly, the significant associations with worse outcomes were preserved for each clinical variable in multivariate survival analyses, including when controlling for N category (Supplementary Data Table ). LAMC2 intensity was significantly associated with DSS and recurrence-free interval (RFI) for Nlow patients and with overall survival (OS) and recurrence-free survival (RFS) for Nhigh patients (Supplementary Fig. , Kaplan-Meier survival curve for DSS shown in Fig. ). Expression of TGM3 in a normal-like pattern (Fig. ) was significantly associated with improved clinical outcomes (Supplementary Fig. , Kaplan-Meier survival curve for DSS shown in Fig. ). Multivariate survival analyses revealed a significant interaction between N category risk group and TGM3 normal-like expression (Supplementary Data Table ). Pairwise survival analyses revealed that high normal-like TGM3 expression was significantly correlated with improved clinical outcomes in Nlow but not Nhigh risk groups (Supplementary Fig. , Kaplan-Meier survival curve for DSS shown in Fig. ). These findings are consistent with analyses of the effect of each marker within each N category (i.e., N0, N1, N2), instead of within nodal status groups (i.e., Nlow, Nhigh) (Supplementary Data Table ). To investigate whether there was a benefit to considering the combination of LAMC2 intensity, TGM3 staining, and N category risk groups, we combined all three variables in a multivariate Cox model. Based on similarities in clinical outcomes, we subset the cohort into four Survival Groups comprising different combinations of these three variables (SG1-4, where SG1 has the best outcomes and SG4 has the worst outcomes) (Fig. and Supplementary Fig. , Kaplan-Meier survival curve for DSS shown in Supplementary Fig. ). There are significant differences between Survival Groups for each clinical outcome (Fig. and Supplementary Fig. ). For markers to be of clinical benefit for prognosis, they must provide value beyond N category assignment, as that is part of standard-of-care.
We observed different clinical outcomes within N category 0 (N0) patients, suggesting that combining LAMC2 intensity and TGM3 staining met this criterion (i.e., the assigned Survival Groups were significantly associated with different clinical outcomes within N0 patients) (Fig. and Supplementary Fig. ). An important decision in the clinical management of HNSCC patients without nodal involvement (N0) is elective neck dissection. A commonly cited threshold is that dissection is warranted for patients with a > 20% probability of occult metastases . To investigate whether LAMC2 and TGM3 could be of clinical utility in this context, we assessed the relationship between assigned Survival Groups and the rates of local failure (i.e., disease persistence or reappearance at the primary tumor site) and regional/distant failure (i.e., metastases in lymph nodes or in an organ outside the head and neck) (Supplementary Fig. ). Compared to N category alone, the incorporation of the IHC data on LAMC2 and TGM3 (i.e., Survival Groups) marks a patient subset with 42% of the failure rate of N0 category patients overall (Supplementary Fig. ). If patients assigned SG3 (which spans patients of all N categories (Fig. )) are assessed, there is no statistical difference in the failure rates of N0 and N2 category patients (Supplementary Fig. ). Strikingly, the increased failure rate of N0 patients assigned SG3 is associated with an overall survival probability that is statistically indistinguishable from that of N2 category patients assigned SG3 (Supplementary Fig. ). The differences between Survival Groups were most pronounced for N0 category patients, for whom total failure rates increased from 9.7% to 36.8% from SG1 to SG3 (Supplementary Fig. ). In summary, the use of LAMC2 and TGM3 as markers allows us to pinpoint N0 patients who perform statistically similarly to N2 patients in terms of failure rate and overall survival (Supplementary Fig. ). There are important clinical covariates for HNSCC, such as extracapsular extension, surgical margins, perineural invasion, and lymphovascular invasion, that can stratify outcomes and failure rate (Supplementary Fig. ). However, these effects appear to be driven by their relationship with N category, as the statistical significance mostly disappears for comparisons within N category groupings (Supplementary Fig. ). This highlights how exceptional the relationship between Survival Group and outcomes within N0 category patients is, as this stratification splits N0 into three distinct prognostic groups. Recognizing that within N0 there remain some clinical covariates that can impact prognosis (e.g., T category, disease subsite, adjuvant treatment), many of which are associated with SG (Supplementary Fig. and Supplementary Data Table ), we accounted for these using a multivariate Cox model. SG2 and SG3 had worse clinical outcomes than SG1 for each outcome assessed; hazard ratios ranged from 1.6–6.5 and 2.4–10.3 for SG2 and SG3, respectively (Fig. and Supplementary Data Table ). Differences between SG1 and SG2-3 were similarly pronounced for N0 patients treated only with surgery; hazard ratios ranged from 2.3 to 11.4 for SG3 (Supplementary Data Table ). In summary, we validated the relationship of LAMC2 and TGM3 to survival in an independent cohort and demonstrated that the markers could benefit prognosis by stratifying outcomes within patients with the same N category.
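A minimal sketch of the multivariate Cox modeling described above is given below, assuming a hypothetical per-patient table from the TMA cohort. The file name, column names, and the quantile-based grouping at the end are illustrative only; the paper's Survival Groups were defined by similarity of clinical outcomes across variable combinations, not by risk quantiles.

```python
# A sketch (not the authors' pipeline) of a multivariate Cox model
# combining LAMC2 intensity, TGM3 normal-like staining, and N category
# risk group. All file and column names are hypothetical:
# time_months, event (1 = death), lamc2_intensity (ordinal 0-3),
# tgm3_normal_like (1 = normal-like pattern), n_high (1 = N2/N3).
import pandas as pd
from lifelines import CoxPHFitter

cohort = pd.read_csv("tma_cohort.csv")

cph = CoxPHFitter()
cph.fit(cohort[["time_months", "event", "lamc2_intensity",
                "tgm3_normal_like", "n_high"]],
        duration_col="time_months", event_col="event")
cph.print_summary()  # per-variable hazard ratios (exp(coef)) and p-values

# One possible way to bin patients by modeled risk (illustrative only)
cohort["risk"] = cph.predict_partial_hazard(cohort)
cohort["group"] = pd.qcut(cohort["risk"], 4,
                          labels=["SG1", "SG2", "SG3", "SG4"])
```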
To explore why LAMC2 and TGM3 expression are associated with clinical outcomes, we examined the pathways that involve these proteins. Most LAMC2 pathways are associated with worse clinical outcomes, especially “epithelial-mesenchymal transition” and “ECM receptor interactions” (Supplementary Fig. ). Pathways involving TGM3, particularly “keratinization” and “keratinocyte differentiation”, are associated with favorable outcomes (Supplementary Fig. ). Enrichment analysis performed by splitting samples using either LAMC2 or TGM3 intensity (Supplementary Fig. ) reveals many of the same pathways associated with engraftment (Fig. ). LAMC2 and TGM3 are expressed in epithelial cells of different spatial compartments in the tumor (Supplementary Fig. ), consistent with participation in distinct cellular programs. These analyses suggest that LAMC2 and TGM3 are proteins whose expression in epithelial cells is correlated, in opposing directions, with engraftment, epithelial-mesenchymal transition, and clinical outcomes. Finally, we integrated two scRNA-Seq datasets , to explore the molecular characteristics and stromal interactions of malignant epithelial cells with distinct LAMC2 and TGM3 expression profiles. While LAMC2 was robustly expressed, detection of TGM3 was too scarce (56/5614 malignant epithelial cells) for meaningful analysis. This was concordant with our analysis of co-cultures, which showed an increase in LAMC2 with co-culture but did not detect TGM3 (Supplementary Fig. ). Of the proteins associated with high LAMC2 expression, LAMA3 and LAMB3 were the most significant hits (Supplementary Fig. ), consistent with the importance of the laminin 332 complex in EMT of squamous cell carcinomas and other cancers – . Among the other differentially expressed proteins were molecules involved in our putative engraftment-associated fibroblast-epithelial signaling network, including ITGB4, ITGA6, and COL17A1 (Fig. and Supplementary Fig. ). Enrichment analysis further validated that the differences we observed in EMT signaling could be explained by differences in the expression profiles of malignant epithelial cells (Supplementary Fig. ). LAMC2-high cells interact via extracellular matrix remodeling with all cell types associated with the HNSCC tumor microenvironment (Supplementary Fig. ). These findings are in agreement with our previous observations that reorganization of the extracellular matrix is associated with engraftment (Fig. ), includes p-EMT signature genes (Fig. ), and is a primary mechanism by which epithelial cells and CAFs communicate (Fig. and Supplementary Fig. ). Overall, these findings elucidate how cellular crosstalk between CAFs and epithelial cells, driven through mutual extracellular remodeling, contributes to the expression of the p-EMT program, the capacity to engraft, and worse clinical outcomes for patients. HPV-negative HNSCC treatment plans, guided by factors such as anatomical site and staging, are formulated to optimize curative potential while preserving form and function. As the treatment modalities can have profoundly impactful sequelae (e.g., speech impairment and dysphagia), there is an acute interest in improving risk-tailored clinical decision-making. We previously reported that successful engraftment of surgically resected HPV-negative HNSCC tumor tissue into immune-compromised mice was significantly associated with poor patient outcomes . While engraftment could, in theory, be of benefit to clinical decision-making, there remain practical challenges to implementation .
Molecular biomarkers have been used for myriad diseases, for applications ranging from risk stratification to predicting response to therapeutic agents. In particular, protein biomarkers that can be detected by IHC are attractive because of the familiarity of this type of information to a pathologist and the importance of considering spatial information. Here, we interrogated the transcriptomic and proteomic characteristics of a cohort of 88 HNSCC tumors to investigate the molecular characteristics of engraftment and to determine whether the relationship between engraftment and clinical outcomes could be recapitulated with biomarkers. From our molecular profiling data, we identified LAMC2 and TGM3 as top-ranking candidate markers for stratifying clinical outcomes. We validated the relationship between LAMC2 and TGM3 and clinical outcomes by IHC in an independent cohort containing samples from 404 HNSCC patients in a tissue microarray. Analogous to engraftment itself, LAMC2 and TGM3 proved to further stratify patient prognostication beyond N category, the only currently accepted reliable predictor of patient outcome. We tested whether LAMC2 and TGM3 could be of value in patients without nodal involvement (N category of 0; N0). By combining LAMC2 and TGM3, we identified a subset of N0 patients (29 patients) with poor clinical outcomes: 58.6% and 73.6% 5-year overall and disease-specific survival, respectively. Correspondingly, we identified a substantial number of N0 patients (51 patients) with exceptional clinical outcomes: 87.2% and 100% 5-year overall and disease-specific survival, respectively. The hazard ratios associated with the combination of LAMC2 and TGM3 within N0 patients treated with surgery alone (2.3–11.4) are larger than the hazard ratios associated with the difference between N0 and N2 patients (2.7–5.2). Furthermore, we demonstrate that by considering LAMC2 and TGM3, we can identify subsets of patients with significantly different failure rates, an important consideration for elective neck dissection. These outcome differences are of sufficient magnitude to indicate escalation or de-escalation of treatment plans for patients within these subsets and warrant further investigation and confirmation as a source of information to support clinical decision-making. While this is not the first report of the utility of LAMC2 or TGM3 as biomarkers for outcomes of HNSCC patients, to our knowledge it is the first to show that LAMC2 and TGM3 can significantly improve patient stratification beyond N category and to demonstrate the benefit of combining the markers. In addition, this represents the largest independent cohort reporting differences for LAMC2 and TGM3 (404 patients compared to 175 or 87 patients, respectively). Finally, we report that LAMC2 and TGM3 allow for patient stratification beyond that afforded by clinical covariates that prognosticate outcomes, such as N category, extracapsular extension, and surgical margins. In summary, these analyses highlight LAMC2 and TGM3 as having potential utility to assist risk-tailored clinical decision-making for HNSCC treatment. Beyond the identification of biomarkers, the relationship between engraftment and poor clinical outcomes suggests there may be therapeutic potential in targeting engraftment-associated pathways and signaling. Our study identified a potential relationship between engraftment and communication between cancer cells and fibroblasts.
These findings are in line with previous reports of the significance of fibroblasts in HNSCC , , . There is an increasing recognition of differences between signaling at the leading edge of tumors compared to the core , . Arora et al. hypothesized that directing the signaling of leading-edge cells towards a tumor core-like state could be the basis for effective anticancer therapeutics . Our analyses suggest that fibroblast-tumor crosstalk may contribute to the aggressiveness of the leading edge and that modulation of this signaling axis represents an alluring avenue of therapeutic intervention in HNSCC. In summary, we have established that the relationship between HNSCC engraftment and clinical outcomes is reflected by differences in the molecular profile of patient tumors. From the engraftment molecular profile, we identified and validated two protein biomarkers, LAMC2 and TGM3, associated with clinical outcomes and capable of adding to the standard-of-care information used for risk-tailored clinical decision-making. Expanding the cohort size will be imperative for translating the relationship between the markers and outcomes into the clinic, particularly as indications for adjuvant treatment or neck dissection. Although previous studies have demonstrated similarities between different disease sites , further investigation of new and independent HNSCC cohorts is warranted, as most of the patient tumor disease sites in this study were lip and oral cavity cancers. As there are other cancers for which a relationship between engraftment and clinical outcomes has been established, analogous and comparative analyses may reveal generalizable findings about engraftment and tumor aggressiveness. The Methods below are organized into the following subsections: biological materials; tissue sample preparation; patient-derived xenografts; transcriptomics sample processing; proteomics sample preparation; mass spectrometry data acquisition; mass spectrometry raw data analysis; tissue microarray immunostaining and image acquisition; TMA scoring; mass spectrometry statistical analysis; RNA sequencing; scRNA-Seq for cell type assignment of engraftment-associated proteins; scRNA-Seq data processing, normalization, and annotation; scRNA-Seq pseudobulk differential gene expression analysis; cell-cell communication inference from scRNA-seq data; scoring the p-EMT signature in spatial transcriptomic data; marker prioritization; and data visualization. Tumor samples: Fresh HNSCC tumor samples were collected from patients undergoing surgery at Toronto General Hospital, University Health Network. Informed consent was obtained from all patients in this study, and participants did not receive any form of compensation. All experiments were approved by the Research Ethics Board at University Health Network. Sex and/or gender were not considered for the study design, but self-reported gender was investigated as a covariate. Patient clinical data were extracted from the Anthology of Outcomes . A fragment of each patient sample was snap-frozen. Mouse experiments: All animal experiments were performed with the approval of the University Health Network Animal Care Committee and adhered to the Canadian Council on Animal Care guidelines (protocol #1542). NOD.Cg-Prkdc scid Il2rg tm1Wjl/SzJ (NSG) mice were bred in-house at the University Health Network Animal Resources Center. Housing is on a 12 h light cycle, with the temperature set to 21–22 °C and 45–60% relative humidity.
Cell culture: FaDu (pharynx SCC, HTB-43), Detroit-562 (pharynx SCC, CCL-138), SCC-4 (tongue SCC, CRL-1624), SCC-25 (tongue SCC, CRL-1628), and Cal27 (tongue SCC, CRL-2095) cells were from ATCC. Cal33 (tongue SCC, ACC-447) were from DSMZ. Normal Oral Epithelium (NOE) cells were from Celprogen (San Pedro, CA; 36063-01). SCC-8 (RRID:CVCL_7781) and SCC-42a (RRID:CVCL_7847), human laryngeal squamous cell cancer cell lines, were kind gifts from R. Grénman (Turku University Hospital, Turku, Finland). The cells were authenticated at the Center for Applied Genomics (Hospital for Sick Children, Toronto, Canada) using the AmpF/STR Identifier PCR Amplification Kit (Applied Biosystems) and routinely tested for mycoplasma contamination using the MycoAlert detection kit (Lonza Group Ltd). Cell lines were grown in IMDM supplemented with 10% FBS and PSG (penicillin (20 U/ml)/streptomycin (20 U/ml)/glutamine (60 µg/ml)). NOE cells were cultured in NOE-specific media according to the supplier’s indications (flasks pre-coated with Human Oral Epithelial Primary Cell Culture Complete Extra-cellular Matrix, Cat# E36063-01, and Media with Serum, Cat# M36063-01S). All cell lines were maintained in a 5% CO2 environment at 37 °C. Co-culture: Large numbers of cancer cell-only and cancer + CAF co-culture spheroids were generated using AggreWell™ Microwell 24-well plates (StemCell Technologies) according to the manufacturer’s guidelines. Briefly, co-cultures were seeded using 600,000 CAFs and 300,000 GFP-labeled Cal33 or Cal27 cells (Creative Bioarray) per well, while monocultures were seeded with 900,000 cancer cells alone. Cells were seeded in IMDM with 10% fetal bovine serum and 1% Penicillin/Streptomycin (Wisent) per well. The plate was spun down at 100 x g for 3–5 minutes and placed in a humidified 37 °C (5% CO2) incubator for 72 h to allow spheroids to fully form and CAFs and cancer cells to interact. Spheroids were collected into 15 mL Falcon tubes using 37 µm reversible strainers (StemCell Technologies), washed twice with PBS, and enzymatically dissociated with 600 µL Accutase™ supplemented with 20 µL collagenase/hyaluronidase (StemCell Technologies). Tubes were warmed to 37 °C for a maximum of 1 h, flicking the side of the tube occasionally to agitate the mixture. 2 mL of complete media was added to stop the reaction. The dissociated spheroid samples were washed with PBS and filtered with a 70 µm mesh strainer to remove small aggregates. GFP+ cancer cells were isolated using an Aria Fusion fluorescence-activated cell sorter (Becton Dickinson). Snap-frozen patient OCT-embedded tissues were first sectioned (8 microns) onto a slide, and H&E staining was performed to assess the proportion of tumor cells present. Samples that contained < 25% tumor cells were excluded. The cohort of patient tumor samples consisted of 51 Engrafters and 37 Non-Engrafters. Serial shavings (20–40 sections at 10 µm) were cut from each snap-frozen OCT sample and collected into alternating tubes for RNA and protein for downstream extractions. Patient-derived xenograft tumors were generated as previously described . Briefly, under sterile conditions, tumor samples were cut into small pieces (~1 mm³), and individual pieces were implanted subcutaneously into the flank of NSG mice. Mice were between six and twelve weeks old when implanted with patient tumors. Two tumor fragments were implanted per mouse (one on each flank) in up to five mice. Mice were then monitored weekly for tumor growth, and the time of initial palpation was recorded.
The maximal tumor burden permitted by the ethics committee was 15 mm in diameter; this limit was not exceeded. Mice were euthanized when tumors reached 15 mm in diameter or after 6 months. If no tumor formed by 6 months, the patient tumor was defined as a Non-engrafter (N). RNA was extracted from tumor, PDX tissue, or cells using the RNeasy Minikit (Qiagen). After passing QC, the sample library was prepared using the Illumina TruSeq stranded total RNA sample preparation kit at the Princess Margaret Genomics Center. Sequencing used a 100-cycle paired read protocol and multiplexing to obtain ~75 million reads/sample on a NovaSeq S4 flow cell using XP mode. OCT removal: All steps were performed at 4 °C using pre-chilled solutions unless otherwise noted. Shavings of OCT-embedded surgically resected tumor samples were depleted of OCT using a protocol adapted from published literature , . Briefly, samples were washed two times with 1 mL of 70% ethanol, two times with 1 mL of deionized water, and one time with 1 mL of 50 mM Ammonium Bicarbonate. Samples were air-dried for 5 minutes at room temperature. SP3 digest and peptide cleanup: 100 µL of SP3 lysis buffer (100 mM Ammonium Bicarbonate with 1% SDS, 1% TritonX100, 1% NP-40, 1% Tween, 1% Sodium Deoxycholate, 1% glycerol, 50 mM NaCl, and 1x Protease Inhibitor Cocktail) was added to each tube of tissue pieces. Samples were sonicated (VialTweeter; Hielscher Ultrasonics, Teltow, Germany) by three ten-second pulses, set on ice for one minute, and then stored at − 80 °C until all samples were ready for proteomic digestion. All samples were retrieved from the freezer and thawed on ice. Samples were sonicated by three ten-second pulses, set on ice for one minute, and then again sonicated by three ten-second pulses. Samples were heated at 95 °C for 5 min using a heat block and chilled on ice. Protein concentration was estimated using the Pierce™ BCA Protein Assay Kit according to the manufacturer’s protocol. Samples were brought to 5 mM dithiothreitol (DTT) and reduced for 30 min at 60 °C in a water bath. Samples were brought to 14 mM iodoacetamide for alkylation in the dark for 30 min at room temperature. 50 µg of protein from each sample was transferred to a low attachment round-bottom plate. Volumes were normalized using deionized water, and 10 µL of SP3 magnetic particles were added to each well and mixed by trituration. Samples were brought to 70% ethanol. The plate was transferred to a rocker and incubated for 18 min at room temperature. Magnetic particles were collected with a 96-well plate magnetic rack for 2 min, and supernatants were removed. Magnetic particles were washed 2x with 200 μL of 80% ethanol and 1x with 200 μL of acetonitrile. Cleaned particles were resuspended by adding 100 µL of digest buffer (100 mM ammonium bicarbonate with 2 mM CaCl2 and 20% (v/v) Invitrosol) and incubating for 10 min at room temperature in an ultrasonicator water bath to assist disaggregation. 2 µg of Trypsin/LysC was added to each well, and samples were digested for 16 h at 37 °C in a humidity chamber. After the digestion, the plate was centrifuged at 1000 × g at 24 °C for 2 min, and the supernatants were transferred to low-binding tubes. 40 µL of digested peptides were cleaned using SP2 with 6 µL of prewashed magnetic particles (100 µg/µL of 1:1 SeraMag Hydrophilic:SeraMag Hydrophobic) and eluted in 60 µL of MS grade water. Peptide concentrations were estimated using the Pierce™ Quantitative Fluorometric Peptide Assay. Cell line digestion: Each cell line was processed in triplicate.
Cells were grown in 10 cm dishes to 80% confluency and washed three times with cold phosphate-buffered saline (PBS, pH 7.4) before cells were pelleted. The cell pellets were lysed in 500 μL of 50% (v/v) 2,2,2-Trifluoroethanol with 100 mM ammonium bicarbonate (pH 8) with repeated freeze-thaw cycles followed by five cycles of pulse sonication (10 s each). The disulphide bonds were reduced using 5 mM dithiothreitol for 30 min at 60 °C, and the reduced disulphide bridges were alkylated with 25 mM iodoacetamide for 30 min at room temperature in the dark. The samples were diluted 1:5 with 100 mM ammonium bicarbonate (pH 8.0) and supplemented with 2 mM CaCl2. The proteins were digested overnight with 2 µg of trypsin/Lys-C enzyme mix (Promega, Cat# V5072) at 37 °C. The reaction was quenched with the addition of 1% formic acid, and the peptides were desalted by C18-based solid phase extraction, then lyophilized in a SpeedVac vacuum concentrator. The peptides were solubilized in mass spectrometry-grade 0.1% formic acid in water. LC-MS/MS analysis was performed on a Q Exactive HF (ThermoFisher) coupled to an EASY-nLC™ 1000 System (ThermoFisher). Peptides were loaded on a pre-column (Acclaim™ PepMap™ 100 C18, ThermoFisher) at 740 bar maximum pressure and separated using a 50 cm EASY-Spray column (ES903, ThermoFisher), ramping mobile phase B (0.1% FA in HPLC-grade acetonitrile) from 0% to 6% in 5 min, 5% to 24% in 200 min, and 24% to 48% in 40 min, interfaced online using an EASY-Spray™ source (ThermoFisher). The Q Exactive HF MS was operated in data-dependent acquisition mode using a loop count of 25 at a full MS resolution of 120,000 with a full scan range of 350–1800 m/z, full MS AGC at 3 × 10^6, and maximum injection time of 240 ms. Ions for MS/MS were selected using a 1.4 Th isolation window with 0.2 Th offset, AGC at 2 × 10^5, 55 ms maximum injection time, minimum AGC target of 100, positive charge states of 2–5, and 60 s dynamic exclusion, and then fragmented using HCD with 27 NCE. MS/MS scans were collected at a resolution of 30,000 in profile mode. Raw files were analyzed using FragPipe (v.20.0) with MSFragger , (v.3.8) to search against a human proteome (UniProt, 43,392 sequences, accessed 2023-02-08; canonical plus isoforms). Default settings for the LFQ workflow , were applied using IonQuant (v.1.9.8) and Philosopher (v.5.0.0) with the following modifications: precursor and fragment mass tolerances were specified at − 50 to 50 ppm and 7 ppm, respectively; parameter optimization was disabled; Pyro-Glu or loss of ammonia at the peptide N-terminus was included as a variable modification; MaxLFQ min ions was set to 1; MBR RT tolerance was set to 1 min; and MBR top runs was set to 10. A clinically annotated TMA consisting of over 600 oral cavity cancer tumor tissues (most patients from 1994–2012) was used to assess the expression of LAMC2 and TGM3. TMAs were sectioned at 4 microns at the UHN-LMP-Pathology Research Program. After deparaffinization and antigen retrieval with Tris-EDTA buffer (pH 9.0), the TMA slides were stained with LAMC2 monoclonal antibody at 1:100 dilution (MA5-24646, ThermoFisher) or TGM3 polyclonal antibody at 1:100 (HPA004728, Sigma Aldrich). The stained slides were scanned at 20X magnification using an Aperio ScanScope AT2. The digital images were viewed using Aperio ImageScope, and each tumor core (duplicated per patient) was scored manually by three scientists.
For both LAMC2 and TGM3, two parameters were scored: (1) the percentage of tumor cells positively stained by the biomarker, in five bins (0, no staining; 1, 1–25%; 2, 25–50%; 3, 50–75%; 4, 75–100%); and (2) the intensity of biomarker expression, in four bins (0, no staining; 1, low; 2, medium; 3, high). An overall score (frequency × intensity) was determined by multiplying the values for the two parameters. For TGM3, an additional score for the percentage of cells with a “normal-like” staining pattern (as described in Results) was determined in four bins (1, 0–25%; 2, 25–50%; 3, 50–75%; 4, 75–100%). All analyses were performed using the R programming language (v.4.2.2) with the Tidyverse package (tidyverse_1.3.2) unless otherwise specified. All correlation estimates and p-values were calculated using the “cor.test” function. For all experiments, the “MaxLFQ Intensity” columns were extracted from the “combined_protein.tsv” output file from FragPipe (Supplementary Data Table and ). Patient tissue cohort: First, the log2-transformed protein intensities were adjusted to the median cellularity (estimated from H&E staining) using a linear model for proteins quantified in ≥ 3 samples (8384 proteins). Next, proteins were filtered for presence in > 75% of patient samples, resulting in 4382 proteins, and subsequently imputed with a random forest algorithm using the MissForest package (missForest_1.5). Differential expression was estimated using the difference in group means (Engrafters vs Non-engrafters), and significance was calculated using a two-sided Wilcoxon rank-sum test. GSEA was applied using a pre-ranked list of Gene Symbols sorted based on estimated fold changes against the Gene Ontology Biological Processes, KEGG Pathways, and Cancer Hallmark gene sets with minimum and maximum sizes of 25 and 200, respectively. Outputs from GSEA were used as inputs to Cytoscape (v.3.9.1). GSEA gene sets were visualized in EnrichmentMap (v.3.3.4) and AutoAnnotate (v.1.3.5). Cell line analysis: First, proteins were filtered for presence in > 50% of lines for at least one cell type and subsequently imputed with lower-tail imputation (downshift of 1.8 s.d. and width of 0.3 s.d.) . Differential expression and significance were estimated using a linear model. Tissue and co-culture: Transcript-level abundance was quantified using Kallisto (v.0.46.1) with GENCODE v33 transcript annotations. Transcript abundances were then imported into R using tximport , and filtered for protein-coding genes with annotations from org.Hs.eg.db (v.3.16.0). Samples with total counts lower than the mean minus 2.5 s.d. were excluded. Log2-transformed CPM values were adjusted to the median cellularity (estimated from H&E staining) using a linear model. Paired PDX: Reads from PDX were classified to either mouse or human genomes using Xenome . Reads classified as human were aligned to the GENCODE v38 human genome using STAR (v. 2.7.9a) and counted to exonic features using featureCounts (Subread v. 2.0.1). Reads from primary tissue were aligned to the human genome (gencode.v30) using STAR (v. 2.6.1c) and filtered to remove reads that aligned to more than one locus. Reads were counted to exonic features using HTSeq-count (v. 0.11.0). Samples with total counts lower than the mean minus 3 s.d. were excluded, and then genes with low transcript counts were excluded using the filterByExpr function from edgeR. Using edgeR (v. 3.40.0), normalization factors were calculated using the calcNormFactors function, and counts were converted to counts-per-million (CPM).
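The count filtering and normalization steps just described can be illustrated with a minimal R sketch; the matrix name and thresholds mirror the description above, but the snippet is an illustration under stated assumptions, not the study code.

```r
# Minimal sketch of the count filtering/normalization described above.
# Assumes `counts` is a gene-by-sample matrix of raw read counts.
library(edgeR)

# Exclude outlier samples (total counts below the mean minus 3 s.d.,
# as applied to the paired PDX data).
lib_sizes    <- colSums(counts)
keep_samples <- lib_sizes >= mean(lib_sizes) - 3 * sd(lib_sizes)
counts       <- counts[, keep_samples]

# Remove lowly expressed genes, compute TMM normalization factors,
# and convert counts to counts-per-million (CPM).
dge        <- DGEList(counts = counts)
keep_genes <- filterByExpr(dge)
dge        <- dge[keep_genes, , keep.lib.sizes = FALSE]
dge        <- calcNormFactors(dge)
log_cpm    <- log2(cpm(dge) + 1)  # log2(CPM + 1) for the linear models
```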
The ESTIMATE algorithm was used to estimate tumor cellular composition. A linear model using log2(CPM + 1) was used for gene fold change estimation. Processed expression data from Puram et al. were downloaded from the Gene Expression Omnibus ( https://www.ncbi.nlm.nih.gov/geo/ ) with accession number GSE103322. Data were imported into Seurat (v.4.3.0). Cells annotated as “lymph” or without a specific cell type annotation were excluded. Data were scaled and normalized using the ScaleData and NormalizeData functions, respectively. Differential expression testing was performed using the FindMarkers function. When available, we re-analyzed processed and annotated data provided by the authors of the respective studies. For the Kürten et al. dataset, processed gene-barcode matrices from CellRanger were provided. We processed this dataset following the methods described in their manuscript. Briefly, using Seurat (v4.3.0), cells with fewer than 200 genes expressed, more than 5000 genes expressed, or more than 10% mitochondrial genes were filtered out. Data were normalized by scaling expression to 10,000 counts per cell followed by log transformation. The top 2000 highly variable genes were selected, and normalized expression values for these genes were scaled, regressing out mitochondrial gene percentage and the number of molecules detected per cell. PCA was performed on the scaled data, and a shared nearest neighbors (SNN) graph was computed using the top 10 PCs. Clusters were computed on the SNN graph with resolution = 0.3. Clusters were annotated using reported cell type markers. Malignant cells were identified by performing CNV inference on epithelial cells using inferCNV (v1.17.0) with cells from the peripheral blood leukocyte samples as reference cells. To identify differentially expressed genes (DEGs), we first created pseudobulks by aggregating LAMC2-positive/-negative cells from each patient. Pseudobulk analysis mitigates type I error compared to traditional approaches at the single-cell level, which is important for deriving robust DEGs across patients . DEGs between LAMC2-positive and LAMC2-negative cells were identified using DESeq2 (v1.32.0), considering patient identity as a fixed effect with the design ‘~ Patient + CellType’. Statistical significance was computed using a Wald test. Log fold changes were shrunk using the ashr method in the lfcShrink function to control for noise in lowly detected genes. DEGs were considered significant if they met the criteria FDR < 0.05 and |log2(FoldChange)| > 1 (a minimal sketch of this pseudobulk test follows below). The fgsea package (v1.18.0) was used to perform gene set enrichment analysis with the fgseaMultiLevel function. A ranked list was computed from the results of the differential gene expression test described above. Genes were ranked by log2(FoldChange). The C2 CP and C5 GO pathway collections were downloaded from MSigDB , and only pathways containing between 10 and 2000 genes were used. To achieve robust prediction of ligand-receptor pairs, we used a pipeline that integrates several published tools for scRNA-seq: LIANA , CellPhoneDB v5 , Cell2Cell , and CellChat v2 . In addition, the interaction databases from LIANA, CellPhoneDB, and CellChat were combined into a single database capturing 6938 interactions. Interactions were inferred on a sample basis, followed by consensus identification where an interaction had to be detected ( p < 0.05) in at least three tools, of which one had to be LIANA.
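A minimal sketch of the pseudobulk aggregation and DESeq2 test described above, assuming a hypothetical gene-by-cell count matrix `sc_counts` and per-cell metadata `meta`; the object and column names are illustrative, not the study code.

```r
# Sketch of the pseudobulk DEG test described above. Assumes `sc_counts`
# is a gene-by-cell matrix of raw counts and `meta` has one row per cell
# with columns Patient and CellType ("LAMC2pos"/"LAMC2neg"); these names
# are hypothetical.
library(DESeq2)

# Aggregate cells into one pseudobulk profile per patient x cell group.
group <- paste(meta$Patient, meta$CellType, sep = "_")
pb    <- t(rowsum(t(sc_counts), group = group))  # genes x pseudobulks

col_data <- data.frame(
  Patient  = sub("_[^_]*$", "", colnames(pb)),  # strip trailing group tag
  CellType = sub("^.*_",    "", colnames(pb))
)

dds <- DESeqDataSetFromMatrix(pb, colData = col_data,
                              design = ~ Patient + CellType)
dds <- DESeq(dds)  # Wald test by default

# Shrink log fold changes with ashr, then apply the significance cutoffs.
res <- lfcShrink(dds, contrast = c("CellType", "LAMC2pos", "LAMC2neg"),
                 type = "ashr")
deg <- subset(as.data.frame(res), padj < 0.05 & abs(log2FoldChange) > 1)
```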
We employed two filtering steps to obtain a final list of interactions: (1) interactions detected in at least two patients were retained, and (2) on a sample level, a consensus rank ( p -value) across tools was computed using Robust Rank Aggregation . Sample-level results were aggregated by combining p-values using Fisher’s method. The combined p -values were adjusted via the BH procedure. The final list of interactions was obtained by filtering for interactions that fulfilled criterion (1) and had an FDR < 0.05 from (2). Processed and annotated Seurat objects for 10x Visium data from the Arora 2023 study were used to analyze the expression of the pEMT signature reported by Puram et al. For each sample, the relative expression of the pEMT signature was scored using the AddModuleScore function in Seurat (v4.3.0) in spots annotated as SCC by the authors. The relative expression of the pEMT signature was compared across regions (edge, transitory, core), with statistical significance computed using a likelihood ratio test of linear mixed models (LMM) with/without sample ID as a random effect to account for the nested structure in the data. LMMs were implemented using the lme4 R package (v1.1-34), and p -values were calculated using analysis of variance (ANOVA) with a likelihood ratio test between the LMMs with/without sample ID using the stats R package (v4.2.1). The cumulative log2(FoldChange) expression of p-EMT genes between edge and core spots was visualized by computing the average expression of the p-EMT genes per sample, log-transforming the values, and subtracting the core value from the edge value to obtain the log2(FoldChange) for each sample. To meet the “Lower in E” or “Higher in E” criteria, the mean log2(E/N) fold change (considering RNA and protein) was < − 0.5 or > 0.5, respectively. Hazard ratios were calculated using Cox proportional hazards models on median-dichotomized RNA and protein data for four clinical outcomes: disease-specific survival, overall survival, recurrence-free interval, and recurrence-free survival. The “Better clinical outcomes” and “Worse clinical outcomes” criteria were met if > 4 of the eight possible hazard ratios were < 0.67 or > 1.5, respectively. Five ranking criteria were assessed for each candidate that met both filtering criteria: DSS hazard ratio in the TCGA cohort (accessed using Xena ), median protein intensity (current study), median RNA intensity (current study), and Huang et al. HNSCC log2(tumor/normal adjacent tissue) at the RNA and protein levels. A z-score was calculated for each ranking criterion and then summed for candidate selection. Markers for “Worse survival” and “Better survival” prioritized different directionality for each ranking criterion apart from the median intensities of RNA and protein. Unless otherwise specified, plots were generated using the R programming language (v.4.2.2) with the Tidyverse (tidyverse_1.3.2), ggthemes_4.2.4, ggpubr_0.5.0, ggplot2_3.4.0, and ggbeeswarm_0.6.0 packages.
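The median-dichotomized Cox models used for the hazard-ratio criteria above can be sketched in a few lines of R; the data frame and column names are hypothetical, and the snippet shows the form of the calculation rather than the study code.

```r
# Sketch of the hazard-ratio calculation described above: expression is
# dichotomized at the median and a Cox proportional hazards model is fit
# for one clinical outcome (e.g., disease-specific survival). `df` and
# its columns (expr, time, status) are hypothetical names.
library(survival)

df$group <- factor(ifelse(df$expr > median(df$expr, na.rm = TRUE),
                          "high", "low"),
                   levels = c("low", "high"))

fit <- coxph(Surv(time, status) ~ group, data = df)
hr  <- exp(coef(fit))  # hazard ratio, high vs low expression

# Repeating this over four outcomes for both RNA and protein gives the
# eight hazard ratios used for the "Better"/"Worse clinical outcomes"
# criteria (>4 of 8 ratios < 0.67 or > 1.5, respectively).
```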
Evidence of activation of vagal afferents by non-invasive vagus nerve stimulation: An electrophysiological study in healthy volunteers
Invasive vagus nerve stimulation (iVNS) is a well-known therapeutic alternative for drug-resistant epilepsy ( ). Case reports have suggested that iVNS could also have beneficial effects in other disorders such as depression, Alzheimer’s disease, and primary headache ( – ). The results of these case reports were encouraging, but no large randomised controlled studies of iVNS in primary headache have been conducted to date. The use of iVNS requires surgical intervention for electrode implantation and battery placements and replacements, thereby carrying a risk of potential surgical complications (e.g. haemorrhage, lead migration, infection). Non-invasive alternatives to iVNS have been developed to stimulate the vagus nerve transcutaneously at the cervical or auricular region using external devices for non-invasive vagus nerve stimulation (nVNS). These devices avoid the risk of surgical complications associated with iVNS. The minimal risks of nVNS offer potential benefits in the treatment of more prevalent medical diseases beyond refractory epilepsy. An nVNS device that stimulates the cervical vagus nerve in the neck is approved by the US Food and Drug Administration for the acute treatment of pain associated with episodic cluster headache ( ) and has demonstrated significant preventive therapeutic effects in chronic cluster headache when used with standard of care (versus standard of care alone) ( ). Preliminary evidence has also suggested potential efficacy in the treatment of migraine ( , ). The favourable safety and tolerability profile of nVNS mitigates concerns associated with the use of triptans and other pharmacologic agents, especially in certain subsets of patients (e.g. those with a medical history of cardiovascular/cerebrovascular diseases or medication overuse headache) ( – ). The mechanism of action of vagus nerve stimulation (VNS) in the treatment of headache is probably multifactorial. The majority of afferents in the cervical portion of the vagus nerve are of visceral origin and project to the nucleus tractus solitarius (NTS), while the small population of somatic afferents project to the trigeminal nucleus caudalis (TNC) ( ). The anti-nociceptive effects of iVNS have been established in rodents ( ). This therapy has also demonstrated the ability to modulate both firing of spinal trigeminal nucleus neurons in response to dura mater stimulation ( ) and cortical synchrony and rhythmicity through the activation of muscarinic receptors in rodents ( ). Findings from another study showed that stimulating the afferent fibres of the vagus nerve in the neck (primarily Aβ and Aδ fibres) suppressed dural stimulation-induced facial allodynia in rats for more than three hours, with the nVNS-associated decrease in trigeminal pain potentially mediated by a glutamate reduction in the TNC ( ). The efficacy of cervical nVNS shown in cluster headache and suggested in migraine ( – ) has not been directly proven to be mediated through the activation of vagus nerve afferents. A functional magnetic resonance imaging (fMRI) study in healthy subjects showed that nVNS activated the NTS and several brain areas that receive vagal input and deactivated the TNC ( ). In humans, iVNS is able to evoke a short-latency somatosensory nerve potential that can be recorded ipsilaterally over the scalp and has been attributed to the activation of vagus nerve sensory afferents ( – ). 
This far-field vagal somatosensory evoked potential (vSEP) can also be elicited after transcutaneous stimulation of the auricular branch of the vagus nerve, with three reproducible peaks (P1, N1, and P2) being identified from C4-F4 recordings and higher-intensity stimulations leading to increasing vSEP amplitudes ( , ). The vSEP latencies obtained using auricular vagal stimulation are consistent across several studies ( , – ). In the surgical setting of direct VNS, a response consisting of four peaks (P1, N1, P2, and N2) was clearly identifiable at the scalp level ( ). The late P2-N2 component of the vSEP, but not the early P1-N1 peak, disappeared after neuromuscular blockade, suggesting that the late components have a muscular origin ( , ). Based on the above-mentioned electrophysiological studies, we aimed to determine whether cervical nVNS could elicit an evoked response similar to the vSEPs previously described in the literature and to further characterise this response to better understand the mechanism of action of nVNS in the treatment of primary headaches. The methods below describe the research participants, the protocol and devices, and the vSEP recordings and analyses. This investigator-initiated, single-centre study included 12 healthy volunteers (HVs; mean ± SE age, 26.9 ± 5.2 years; five females) who had no history of cervical surgery or other relevant medical procedures, no personal or familial history of headache, and no daily medication intake other than oral contraceptives. The HVs were university students or members of the hospital staff who were recruited between February and April 2015 at the Headache Research Unit, University Department of Neurology, CHR Citadelle, Liège, Belgium. The study was reviewed and approved by the local ethics committee of the Centre Hospitalier Régional de la Citadelle and was conducted in accordance with the Declaration of Helsinki. Prior to testing, all participants were provided with detailed information about the study procedures and gave their written informed consent. In each HV, vSEPs were recorded in three different conditions using bilateral stimulations on the same day in a pseudo-randomised order ( ): 1) the nVNS device was placed over the cervical portion of the vagus nerve (first active condition); 2) the vagus nerve afferents in the inner tragus were stimulated (second active condition); and 3) the nVNS device was placed over the sternocleidomastoid (SCM) muscle (control condition). Recordings were performed according to the side of stimulation. In the first active condition, cervical nVNS was delivered using a portable CE-marked nVNS device (gammaCore®; electroCore, LLC, Basking Ridge, NJ, USA) placed over the expected location of the vagus nerve in the anterolateral cervical region ( ). This portable nVNS device produces a low-voltage electrical signal consisting of a 5-kHz sine wave burst lasting for 1 ms (five sine waves, each lasting 200 μs), with such bursts repeated once every 40 ms (25 Hz) for two minutes per stimulation. The vSEP registration period started at the beginning of a burst and lasted 35 ms. The stimulations were applied over the skin of the neck using two stainless steel contact surfaces covered with a small amount of highly conductive, multipurpose electrolyte gel. The electrical signal used for the nVNS stimulations ranged from 6 V to a peak of 18 V, with a maximum output current of 60 mA. The vSEPs were recorded at five different voltages (6, 9, 12, 15, and 18 V), which corresponded to pre-programmed intensities delivered by the cervical nVNS device.
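To make the burst structure of the stimulus concrete, the following R sketch synthesizes an idealized version of the signal described above (1-ms bursts of a 5-kHz sine wave repeated every 40 ms); the sampling rate and amplitude scaling are illustrative assumptions, not device specifications.

```r
# Idealized reconstruction of the nVNS stimulus described above:
# 1-ms bursts of a 5-kHz sine wave (five 200-us cycles) repeated
# every 40 ms (25 Hz). Amplitude is in arbitrary units.
fs <- 1e6                        # illustrative 1-MHz sampling grid
t  <- seq(0, 0.2, by = 1 / fs)   # 200 ms of signal (five bursts)
burst_period <- 0.040            # 40 ms between burst onsets (25 Hz)
burst_len    <- 0.001            # each burst lasts 1 ms

in_burst <- (t %% burst_period) < burst_len
stim     <- ifelse(in_burst, sin(2 * pi * 5000 * t), 0)

plot(t * 1000, stim, type = "l", xlab = "Time (ms)",
     ylab = "Amplitude (a.u.)", main = "Idealized nVNS burst train")
```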
In the second active condition, vSEPs were obtained with stimulation of vagus nerve afferents in the inner tragus ( , ). A custom-made bipolar electrode connected to an electrical stimulator (DS7A stimulator; Digitimer Ltd., Welwyn Garden City, Hertfordshire, UK) was used to stimulate the inner tragus of the ear ( ) because the size of this region was incompatible with the effective use of the nVNS device. The stimulation intensity was adjusted according to each participant’s individual sensitivity (mean stimulation intensity, ≈8 mA). Fifty stimulations were delivered at a frequency of 2 Hz and a pulse duration of 500 μs over the medial region of the tragus close to the entry of the acoustic meatus. A control condition was used to distinguish vagal nerve potentials from muscular artefacts by positioning the cervical nVNS device over the SCM muscle in the posterolateral part of the neck using a stimulation intensity of 9 V ( ). Needle electrodes were placed on the scalp at M1/2, Cz, C3/4, and F3/4 according to the International EEG 10-20 system. The vSEPs were recorded ipsilateral to the side of stimulation, and the M1/2-Cz and C3/4-F3/4 electrode configurations were evaluated. A ground electrode was placed over the wrist. The vSEPs were averaged offline using CED 1401 and 1902 devices (Signal 4.11 Software; Cambridge Electronic Design, Cambridge, UK). The vSEP peaks appearing up to 10 ms (P1, N1) and their latencies were identified as described previously ( , ). Peak-to-peak amplitudes (P1-N1) and wave latencies were measured for each cervical nVNS stimulation intensity and for auricular vagal stimulation. The P2 and N2 peaks appearing after 10 ms, together with their latencies and amplitudes, were also measured to characterise muscular components. For the dose-response analysis, after an initial DC subtraction, the evoked responses were imported into EEGLAB (MATLAB R2016a; MathWorks, Inc., Natick, MA, USA) for processing ( ). An automatic artefact epoch rejection function from EEGLAB was used to remove epochs exceeding two standard deviations (SDs) from the mean channel limit. The N1 area under the curve (AUC) in M1/2-Cz was extracted from each recording where a response was clearly identifiable. The extracted values were used to construct a dose-response curve, with the dose corresponding to the logarithm of the stimulation intensity and the response to the AUC value for the evoked responses. Curve-fitting analyses were performed afterwards using the mean AUC value for each stimulation intensity. For latency and amplitude analyses, the distribution of variables was first analysed using a Shapiro-Wilk normality test. Non-parametric tests (i.e. the Mann-Whitney test) were performed in the case of non-normal distributions. Latencies and amplitudes were averaged across right and left stimulations. Percentages of participants who had a clearly identifiable response to stimulation were compared among all stimulation intensities using a chi-squared goodness-of-fit test. A p value of < 0.05 was considered significant for all statistical evaluations. Statistical analyses were performed and graphs were developed using GraphPad Prism Windows version 6.00 (GraphPad Software, Inc., La Jolla, CA, USA). Results are summarised in , covering vSEP peaks and latencies (P1, N1/P2, N2), P1-N1 amplitudes, and the dose (intensity)-response curve. The logarithmic dose-response curve ( ) showed a good fit over the mean AUC values for N1 responses elicited with each stimulation intensity (R² = 0.97).
Based on this fit, the E50 (the intensity at which 50% of the maximum response occurs) was 10.55 V (95% CI: 9.02 to 12.35). Cervical nVNS (first active condition) elicited two reproducible vSEP peaks (P1, N1) ( ) in 11 of 12 HVs on one or both sides of neck stimulation. One of the HVs had no response to cervical nVNS throughout the session, while two participants had modest low-amplitude responses only at high stimulation intensities. The number of participants in whom a response could be evoked increased with increasing stimulation voltage, reaching a maximum of 11 participants at 15 V ( ). At all voltages evaluated, the tolerability of cervical nVNS was acceptable to participants. A late vSEP component (P2, N2) was also identified with cervical nVNS for nine participants in M1/2-Cz at 9, 12, 15, and 18 V and in C3/4-F3/4 at 15 and 18 V ( , ). Likewise, bipolar stimulation of the inner tragus (second active condition) evoked two reproducible peaks (P1, N1) ( ) in nine of 12 HVs. These responses were similar to those previously described in the literature ( , ). The SCM muscle stimulation (control condition) elicited identifiable late responses (P2, N2) ( ), at least unilaterally, in all 12 participants, but these responses had a much longer latency than the vSEPs elicited by cervical or auricular stimulation. A mean N1 latency with cervical nVNS was calculated for each participant because the N1 latencies of vSEPs for this active condition did not vary among different stimulation intensities (repeated measures analysis of variance (ANOVA) with Geisser-Greenhouse correction, F(2.634, 18.44) = 1.26, p = 0.32). The mean ± SD N1 latencies were shorter for vSEPs elicited by auricular stimulation (second active condition) (3.79 ± 0.78 ms) than for vSEPs elicited by cervical nVNS (first active condition) (5.25 ± 0.62 ms) ( p < 0.001). The N1 latencies for both cervical nVNS and auricular stimulation were four to five times shorter than the N2 latency for the first negative peak observed after SCM muscle stimulation (20.36 ± 3.37 ms) (both p < 0.001) (overall: repeated measures ANOVA with Geisser-Greenhouse correction, F(1.123, 11.23) = 235.0; p < 0.010). The difference in mean P1-N1 vSEP amplitudes between the first and second active conditions (1.06 ± 0.78 µV and 4.31 ± 4.79 µV, respectively) was not statistically significant ( p = 0.077).
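A dose-response fit of the kind reported above can be sketched with the drc package; the AUC values below are hypothetical placeholders, and the four-parameter log-logistic model is an assumption about the form of the fit rather than a statement of the software the authors used (they report GraphPad Prism).

```r
# Sketch of a logarithmic dose-response fit with an E50 estimate.
# `dose` holds the five pre-programmed intensities; `auc` values are
# illustrative placeholders, not the study data.
library(drc)

d <- data.frame(
  dose = c(6, 9, 12, 15, 18),             # stimulation intensity (V)
  auc  = c(0.10, 0.35, 0.70, 0.90, 0.95)  # hypothetical mean N1 AUCs
)

fit <- drm(auc ~ dose, data = d, fct = LL.4())  # 4-param log-logistic
ED(fit, 50, interval = "delta")                 # E50 with a 95% CI
plot(fit, xlab = "Stimulation intensity (V)", ylab = "N1 AUC")
```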
Alternatively, these participants may have had a higher threshold for vagal fibre activation. Stimulations are required to pass through skin and muscular structures to reach the deeply located cervical afferents of the vagus nerve, which may explain the contraction of the platysma muscle commonly seen with cervical nVNS. Our findings have revealed two differences between cervical and auricular vSEPs. First, the vSEP latencies were significantly shorter with auricular stimulation (3.79 ± 0.78 ms) than with cervical nVNS (5.25 ± 0.62 ms; p < 0.001). This difference could be related to the distance between the stimulators and the entry point of the vagus nerve into the skull. The vSEPs are thought to arise from the impedance change of the volume conductor around the vagus nerve as it enters the cranium through the jugular foramen. By changing the position of the electrode and monitoring the change in latency, Usami et al. (2013) calculated an N1 signal conduction velocity of 27 m/s at a latency of 3.3 ms ( ). This velocity and latency correspond to a distance of 89.1 mm, representing the approximate distance from the electrode position on the vagus nerve to the skull base. Compared with the cervical region, the ear is probably closer to the vSEP generator thought to be localised close to the brainstem or at the skull base ( ). The difference between cervical and auricular vSEP latencies could also be related to a difference in fibre populations, giving rise to different conduction velocities between the predominantly somatic auricular afferents and the predominantly visceral afferents in the cervical vagus nerve. Another difference between the vSEPs was that the P1-N1 amplitudes were slightly higher with inner tragus stimulation than with cervical nVNS ( ), which could also be partly related to the difference in fibre populations. One must also keep in mind that different stimulators, frequencies, and intensities were used to stimulate the vagal afferents at the auricular and cervical regions. Finally, the muscle contractions observed with cervical nVNS may partly contaminate the vSEP response and result in slight differences in wave form ( ). Vagus nerve stimulation has been considered to be a valuable therapeutic option for neurologic diseases, but its use has been limited by the need for invasive surgical procedures ( ). The viability of non-invasive methods for stimulating the vagus nerve using portable devices that are more practical, convenient, and cost effective (versus iVNS) has expanded the therapeutic potential of VNS for a larger patient population and improved its accessibility for use in further studies ( , ). In our study, we observed that further increases in stimulation intensity beyond 15 V only slightly increased the responder rate and produced only a slight increase in the size of the response ( ). This finding could provide clinicians with guidance suggesting that a cervical nVNS intensity of more than 15 V may not be required for the majority of patients. The efficacy of cervical nVNS has only recently been confirmed in the treatment of cluster headache and suggested in the treatment of migraine ( – ), but its precise mode of action in these primary headaches has not been determined. The use of iVNS is known to modulate the firing of trigeminal neurons within the brainstem ( ) and also to inhibit cortical synchrony and rhythmicity ( , ), while nVNS has demonstrated the ability to reduce facial allodynia and glutamate release in the TNC ( ). 
Another study used fMRI to evaluate stimulation of the vagal afferent pathways by the nVNS device in 13 HVs ( ). Activation of several vagal projections, including the NTS, was significantly greater with cervical vagal afferent stimulation by nVNS than with the control condition ( ). The control stimulations involved placement of the nVNS device over the SCM muscle, consistent with the present study. The fMRI study of cervical nVNS did not include an auricular stimulation condition, but noted that the regional activity generated by the stimulation was comparable to activity reported in separate studies of non-invasive auricular vagal stimulation and iVNS ( ). Like cervical nVNS, auricular stimulation increased fMRI BOLD signals in the NTS ( ). Auricular stimulation also activated the locus coeruleus, with stronger nuclear activation being elicited by cymba conchae stimulation than by inner tragus stimulation ( ). Our vSEP findings further support the previous fMRI evidence of vagal activation as a mechanistic explanation for the beneficial clinical effects of nVNS. Limitations of our study include the use of surrogate markers in the absence of direct measures of vagus nerve activation. Further studies to evaluate associations between vSEPs evoked by nVNS and the clinical efficacy of this therapy are warranted. Other limitations include challenges inherent in cross-trial comparisons and controlled device studies. Cross-trial methodological differences may affect the results being compared, and effects of control conditions involving a medical device may be overstated owing to corresponding participant behaviours and perceptions ( , ). The origin of the vSEP itself has been questioned in former studies ( ). The vSEPs have been suggested to correspond with an electromyographic response arising from laryngopharyngeal muscles. However, only the late P2-N2 component of the vSEP disappeared when a neuromuscular block was performed using a relaxant ( ). The early P1-N1 component that we evaluated in the current study persists unchanged despite neuromuscular blockade ( ). We identified a late vSEP component, as observed in the iVNS studies, similar to a late response that we reproducibly observed when applying the nVNS device over the SCM muscle ( ). The P1-N1 wave was absent after stimulation over the SCM muscle, while it was identified using cervical nVNS and inner tragus stimulation, thereby favouring the involvement of nerve fibres. Finally, vSEPs elicited at auricular and neck regions differ slightly in wave form ( ), possibly due to muscular contractions that cannot be avoided when using cervical nVNS. We show here for the first time that a cervical nVNS device used to treat primary headaches is able to elicit reproducible short-latency far-field sensory vSEPs similar to those elicited by iVNS and stimulation of vagus nerve branches in the ear. The amplitude of these vSEPs increased with increasing stimulation intensity and disappeared in the control condition, in which the nVNS device was positioned over the SCM muscle. Our findings suggest that vSEPs could be considered for use in therapeutic studies of nVNS and could pave the way for further trials, especially those comparing vSEP characteristics with the clinical outcomes of patients, to find accessible predictive biomarkers of nVNS efficacy. 
Cervical non-invasive vagus nerve stimulation (nVNS) appears to elicit vagal somatosensory evoked potentials (vSEPs), as previously observed with invasive vagus nerve stimulation and transcutaneous auricular vagal stimulation. Control nVNS stimulations of the sternocleidomastoid muscle produced only longer-latency muscle artefacts. The vSEPs observed suggest that cervical nVNS stimulates afferent fibres of the vagus nerve. A dose-response analysis for cervical nVNS showed that a clear vSEP response could be elicited in more than 80% of the participants using an intensity of 15 V; cervical nVNS was well tolerated, consistent with previous studies. The assessment of vSEPs could lead to the development of a biomarker that is predictive of clinical responses.
Exploring attributes of high-quality clinical supervision in general practice through interviews with peer-recognised GP supervisors
Clinical supervision of doctors learning to be general practitioners (called GP registrars) is critical for enabling safe medical care and producing skilled doctors who are invested in primary care careers. However, to some extent this relies on the quality of the supervision provided to GP registrars. Quality variation is a major issue within general practice training, as registrars (GP trainees) learn in distributed regions and GP practices that may have widely differing perspectives of optimal supervision. Understanding the dimensions of quality supervision may provide insights for planning, benchmarking, and supporting optimal supervision practice. This study is based in Australia, as a case study of a country undertaking medical workforce policy reforms aiming to produce more general practitioners . Policy reforms include a strong focus on improving the quality of general practice training, to attract people to GP training and produce skilled doctors. Like many countries, Australia requires a higher proportion of GPs for the increasing demands for chronic and complex care in the community, including as the population ages . In Australia, there are approximately 5200 GP registrars, each in one of 3–4 years of vocational training and aiming to pass national College assessments to achieve fellowship . GP registrars are employed by the practices that they train in and work under the guidance of a clinical supervisor as they learn to deliver a wide scope of primary care services, including for vulnerable and high-risk groups in the community . GP supervisors are fellowed GPs who have completed, or are completing, a supervisor orientation course with a Regional Training Organisation (RTO; one of nine in Australia). They must also agree to deliver on a range of broad functions as part of training practice accreditation (Table ). Despite the minimum accreditation requirements, there is still no formalised national curriculum or quality framework for GP supervision work in Australia. Informing quality within GP supervision is important to assist with standardising GP training, where multiple GPs in the one practice may supervise registrars in different ways and registrars also rotate to at least two practices during their GP training. Where GPs have a different understanding of supervision quality, there is a high potential for registrars to have a negative training experience at some point in their training cycle. Quality standards may assist registrars to receive more continuity of optimal learning, aiding their enjoyment of general practice at the formative stages of their career and helping them to pass assessments and achieve fellowship. The community may also gain from consistent best practice supervision, as this enables access to the full suite of safe primary health services from any registrars with whom they may consult. Multiple groups have a stake in engaging and training GP supervisors in the Australian context. General Practice Supervisors Australia (GPSA) is a peak body that leads the development of up-skilling resources for GP supervisors, under the Australian General Practice Training Program (AGPT). GPSA delivers supervisor-led webinars, guides, and teaching plans year-round. Additionally, the RTOs and GP Colleges engage in supervisors’ professional development courses and networking events. However, across the nation, supervision resources have emerged informally and iteratively, without a strong holistic understanding of quality within GP supervision.
The existing literature mainly describes the GP supervisor’s role, rather than the attributes of quality supervision. An integrative literature review suggested that the GP supervisor role is multi-faceted . They establish learning environments, assess learning needs, facilitate learning, and monitor the content and process of learning and registrar well-being as they guide registrars from ‘know that’ to ‘know how’ . Another narrative overview suggested that supervision roles may vary by the registrar and practice context, such as rurality and part-time work . GP supervisors also undertake regular communication, conflict resolution, clinical reasoning, critical thinking and support for registrars who are struggling . Another literature review identified that the relationship between GP supervisors and registrars is important, along with giving clear feedback and allowing registrars to have input into supervision processes . However, there is a need for more research to understand how quality is represented by the interplay of various functions, so that supervisor professional development can be anchored to a best practice framework . There is also room to link the concept of quality supervision to vocational learning theories. The aim of this study was to explore the attributes of high-quality clinical supervision for registrars in general practice. This study applied a qualitative descriptive approach involving in-depth, semi-structured interviews. The study recruited GP supervisors Australia-wide who had been peer-nominated as best practice supervisors. This group was chosen as they were industry-recognised for their supervision and therefore considered suitable for informing quality within supervision practice. Ethics approval for the study was obtained from the Monash University Human Research Ethics Committee (Project ID 21655), ratified at the University of Queensland (Project ID 2012001171). The methods below describe the procedure and semi-structured interviews, followed by the data analyses. The study was led by GPSA, at the interest of the GPSA board, to inform the directions for quality improvement of supervision. In March 2019, participants were recruited via the nine RTOs, the Royal Australian College of General Practitioners (RACGP) and the Australian College of Rural and Remote Medicine (ACRRM), to ensure a breadth of national representation. These organisations were sent information about the study and invited to contact GPs in their networks who they considered best practice supervisors, including recipients of their own ‘Supervisor of the Year’ awards over the last 10 years. GP supervisors with a mix of characteristics such as gender and career stage, practice size, and location in different states/territories and ruralities were sought, to include a range of perspectives from different contexts. Only one reminder was issued, as the study was well-subscribed. The sampling frame included 60 GP supervisors. After contacting this group, 22 supervisors with a broad range of characteristics completed the enrolment and participated in the study. Participants received an $AUD150 gift card in recognition of their time. Recruitment ceased on a practical basis in April 2020, as the COVID-19 pandemic had caused a diversion of time and resources in general practices, which made it too busy for GP supervisors to respond . After completing written informed consent, 22 recorded video-conference semi-structured interviews of 50–70 min duration were conducted between September 2019 and June 2020.
The interviewer was an experienced PhD-trained qualitative researcher, not known to the participants, who was a trained clinician in a non-medical field. The interviewer had experience of working with general practices in selected regions related to coordinating primary care projects. The interview duration was participant-driven and guided by a schedule (Table ), which had been informed by the research team and piloted with two experienced GP supervisors. During interviews, participants were asked to describe their supervision experience, practice context and aspects of their supervision that they considered reflected high quality. Each interview was transcribed verbatim, linked by a unique identifier and then de-identified before being circulated to the research team for analysis. Secondary data collection included interview notes/reflections and minutes of research meetings for the purpose of self-reflexivity and transparency about any of the research challenges in seeking to achieve the research aim . The analyses were informed by theories of work-based learning suggesting that the conceptualisation of quality supervision might occur naturally in the workplace, through a multiplicity of opportunities and interactions . Additionally, theory about the quality of vocational education was used, where quality teaching/supervision evolves from the intersections of knowledge and professional culture, specific to the institutional environment in which they practice . Whilst considering these theories, the research team sought to interpret emergent findings in the data. As such, analyses commenced with three members of the research team reading the full transcripts and independently coding the data. This happened with no pre-set coding frame, in line with inductive analysis processes . Additions and alterations to the codes were made as blocks of five transcripts were completed. Authors then double-coded another transcript, identifying reasonable concurrence with the codes and adding extra codes if these were relevant. The material was discussed, annotated, and then organised into emerging themes, layering and reorganising these to make sense of the data . This first stage of analyses occurred with the research team working in distributed sites and meeting online, fortnightly. The team regularly challenged each other’s ideas, to reduce subjective biases and test any assumptions . The research team then attended a half-day face-to-face analysis session to allow whiteboarding of concepts around the emerging themes and to dig deeper into the themes. The broad general practice supervision requirements (Table ), and other national clinical supervision competencies, were used to explore concurrence. Based on this, it was found that the inductive themes provided a richer perspective of quality GP supervision, and used language and concepts specific to the general practice vocational learning context. The research team pursued further rounds of inductive analyses by re-reading the original transcripts for meaning about quality and used face-to-face and online discussions to allow for internal confirmation or disconfirmation. This process enabled thick description and triangulation of a final set of themes that the team agreed upon . To aid analyses, participant characteristics were appended to unique identifiers and accompanied all transcript codes and thematic text.
These depicted gender and career stage, practice size, state/territory, and rurality, coded according to the Modified Monash Model (MM1 depicting metropolitan areas and MM2–MM7 increasing degrees of rurality) . Participants of the semi-structured interviews were from a wide range of career stages, practice sizes and locations (Table ). Despite sampling participants of broad characteristics and exploring variation by career experience, practice or registrar factors, the perspectives of quality supervision were relatively consistent. The data produced 7 main themes (Table ): (1) understand the meaning of quality supervision and seek to continually refine practice; (2) structure placements with a focus on optimising learning; (3) build secure and caring relationships with registrars; (4) sustain and enhance learning opportunities drawing from the whole practice team; (5) use learner-centred supervision, adjusting the supervision model as required; (6) build professional identity and foster safe, independent decision-making; and (7) encourage registrar reflection and give quality feedback to drive learning. Participant quotes were labelled with participant number (P1–P22), gender, and years’ post-fellowship. Further participant-characteristic labels for quotes were avoided to maintain anonymity. Respondents noted that a “ key and critical thing is being able to give... quality feedback... about something done well or feedback about something that needs improvement ” [P14: female, < 20 yrs], whilst acknowledging that this could be “ challenging ” and “ one of the hardest things [to do] ” [P4: female, < 20 yrs]. Giving constructive feedback also involved seeking registrar insights, “ inviting a conversation into, ‘ How do you think it went in there? What went well, what went badly?’ … giving them a chance to give an account and then give your account ” [P6: male, < 20 yrs]. They spoke about taking an “ encouraging ” stance, giving space for registrars “ to come reflect on their own consultation style, their own management process, or their thought process, and see where that may be falling down, and come up with an idea of how they would maybe do it better ” [P20: female, < 20 yrs]. The secure and caring relationship was noted to intersect with this theme, where it enabled the process of giving open and honest feedback, “ a much easier conversation ” [P5: male, < 20 yrs]. Quality supervisors also gathered feedback about the registrar’s performance from other staff in the practice and applied this to support registrar learning: When it comes to the actual performance, it's feedback from the staff. The reception at the front will know that the patient is unhappy about doctor X, and they'll tell me. I can feed that back to the registrars [P3: male, <20yrs]. Participants unanimously reported that teaching is a good thing to do, both personally and professionally. However, they found it challenging to talk about the quality of their supervision in a way that could differentiate their approaches from those of colleagues. Through prompting, these supervisors related that they had mostly benchmarked the quality of their supervision in relation to other GP supervisors that they had encountered along their own learning pathway, “ I watched how good supervisors taught me. And I saw how bad supervisors taught me ” [P1: male, < 20 yrs] . They noted, “ easily remember[ing] the good things my own supervisors taught me and told me.
So, I know in my teaching sessions with my registrars, I often reference what my own supervisors told me” [P17: male, <20 yrs]. Another participant reinforced that, having experienced good and bad supervision as a registrar, “I know which supervision worked... So I try and provide that” [P4: female, <20 yrs]. They also observed other GP supervisors, which helped them to reflect on issues related to supervision quality: “…we often try to take as much of the things that you go, ‘Well, that looks good. That’s a good way of doing something.’ And removing the ways that you go, ‘Oh, that didn’t work so well.’ You try to remove as much as you can of the things that don’t work well” [P18: male, <20 yrs]. They also revisited their teaching tools and resources over time “from our experiences of what’s helped” [P4: female, <20 yrs] and trialled different processes, then reflected on whether they “worked a little bit better” [P20: female, <20 yrs]. When supervising registrars who brought strong contemporary knowledge in many areas, quality supervisors showed a capacity to learn, to “rethink and reflect whether I’m doing things right, or whether things could be done better and regularly I find something that I change” [P12: male, 20+ yrs]. Respondents also noted that they valued “be[ing] able to take feedback from the registrar. To listen” [P18: male, <20 yrs], as part of improving their supervision. Quality was reflected by actively structuring and preparing placements to optimise learning. Emphasis was placed firstly on selecting registrars suited to the practice, by checking on their motivations to be a GP: “I try and get an idea of why they want to go into general practice. I’m looking for people who actually do want to... truly help their patients” [P6: male, <20 yrs]. Another noted the importance of matching clinical skills: “So some of them might say, ‘I’m really interested in women’s health.’ And I say, ‘Unfortunately, I’m not the person for you’” [P1: male, <20 yrs]. Respondents also undertook holistic orientation processes using a suite of resources such as checklists and guides. They introduced registrars to infrastructure and processes: “…for the first two days... there’s a very formal welcome and tour… they sit in with me, watch me see a few patients... more about learning the computer than anything else” [P11: male, 20+ yrs]. Registrars also met with and observed other practice staff: “meet the reception... see how the bookings are made, how the billings are done… time with the nurses” [P3: male, <20 yrs]. Participants also planned to safeguard quality clinical teaching time around seeing their own patients. One noted, “thinking about how you set up your teaching time… You actually need to timetable that in so that it does happen… and then the sort of curriculum that should be progressed through” [P19: male, 20+ yrs]. Quality supervision was also depicted by clarifying learning boundaries within the GP context, where there were business responsibilities and there might be few other supervisors onsite: “‘This is what your background is, this is what your skills are. This is what my background is, this is what my skills are. This is also how we’re going to do it, this is the rostering, this is how you contact me through the day’... it’s all laid out” [P9: male, <20 yrs]. Participants strongly emphasised that quality supervision involved building relationships with registrars.
Quality supervisors took leadership of this, aiming to build a secure relationship with registrars at their commencement of a training term. They noted that this involved “front load[ing] the support in the first couple of weeks” [P11: male, 20+ yrs] and consolidating the relationship as registrars went through “difficulties... and... good times” [P3: male, <20 yrs]. Supervisors described that the focus of the relationship was on genuine care for the person and their learning, one commenting “you actually have to care about the learner and want them to be the best clinician or person that they are” [P5: male, <20 yrs]. Understanding the registrar at a holistic level was considered to enable supervisors to contextualise and guide registrar performance: ...it works much better if it's a personal relationship. It's especially understanding where they come from? What's going on in their lives? It does impact on how they perform [P12: male, 20+ yrs]. A secure and caring relationship was the enabler of “talk[ing] about the difficult things… uncertainty, mistakes” [P2: female, <20 yrs]. It was also viewed as central to the “wellbeing component to your [supervisor] role... to look out for the whole person, not just their clinical training” [P15: male, <20 yrs]. Within relationship building, supervisors purposefully aimed to “get rid of the power imbalance” [P12: male, 20+ yrs] and maintain open lines of communication. They did this by “role modelling vulnerability” [P21: male, 20+ yrs], showing that it is normal “to ask your colleague for advice and questions and talk about mistakes and difficulties” [P2: female, <20 yrs]. They drew on key negative events to build trust for open communication: Well yes, you did scratch dad's car and he's not happy. Dad’s still going to love you tomorrow, and he'll still feed you.… We'll forget about it [P12: male, 20+ yrs]. Quality also entailed working sustainably and ensuring registrars gained as much as they could from the learning opportunities available across the wider practice team, with one respondent noting: “It’s a group thing. You cannot do it alone... You’ve got to work as a team” [P7: female, 20+ yrs]. Others similarly reflected that this was a way to ensure registrars had enough oversight and support when supervisors were busy with their own patients: “I guess trying to facilitate a team environment because I don’t know how I would do this on my own” [P4: female, <20 yrs]. Further, quality supervisors acknowledged their own skills boundaries, as well as their expertise in some clinical areas that could add value to registrar learning. They promoted connections that could address a breadth of learning goals: ...there are people in my practice who are much better at dermatology than I am. So in terms of supervision for formal exposure to stuff, I hand that over to other people [P8: female, 20+ yrs]. Participants also considered that the whole practice team could help registrars to access other styles of general practice and teaching for developing broader perspectives: “there’s about half a dozen supervisors… all very good. We have slightly different styles” [P9: male, <20 yrs]. Quality was reflected by supervisors tailoring their supervision model to the needs and style of each registrar, through “spend[ing] quite a bit of time with getting to know what their background is, how they learn, what’s best for them, and trying to fit that into how we work” [P5: male, <20 yrs].
Several commented that this involved “adapt[ing] my teaching” [P10: male, 20+ yrs] and “adjust[ing]… supervision style” [P7: female, 20+ yrs] depending on the individual registrar. This included working out the methods that optimised the registrar’s capacity to learn: “…some people, obviously, like to learn from didactic teaching, in which case you can just do a didactic talk. Others... learn a lot better when you have to ask them more questions and do their own research” [P10: male, 20+ yrs]. Quality also involved evaluating the strengths, weaknesses, clinical interests and motivations of each registrar and addressing these in structured and opportunistic ways during a training term. The breadth of clinical aspects of general practice made this exercise quite extensive: And so I said, ‘Look, we're going to deal with general practice. We're going to start from the baby right up to palliative care and everything that's going to be in between. And we're going to figure out where you're weak, and we're going to make you stronger’ [P1: male, <20 yrs]. Quality was reflected by supervisors developing registrars’ general practice identity, including how to work effectively in the unique general practice context: “it’s... a totally different work environment… appointments and reception, billing, prescribing, nursing, pathology, pharmacy” [P13: male, 20+ yrs]. This included engaging registrars in clinical best practice, “we work on lots of evidence-based approaches to patient management” [P19: male, 20+ yrs], and developing knowledge of “comprehensive general practice” [P5: male, <20 yrs]. Given that general practice involves doctors working in independent consulting rooms, quality supervisors also built registrar confidence about their own style, noting “everyone practices differently” [P5: male, <20 yrs]. Respondents encouraged registrars to think like a GP and learn the balance between independent decision-making and asking for help when they needed it. They spent time understanding “how secure the registrar is in decision-making, and how much trust the supervisor in the practice can have in the medical assessments made by that registrar” [P8: female, 20+ yrs]. This exploratory study provides some insights into the concept of quality clinical supervision for GP registrars learning in the general practice context. Quality supervision was understood when GPs reflected on their own supervision experiences and anecdotes from colleagues. This is consistent with work-based learning theory, where quality in supervision is conceptualised through situated, experiential learning based on real-world exemplars. It also fits with vocational teaching more broadly, where the understanding of quality is strongly tied to what works well in a unique institutional environment. However, GP supervision can be somewhat siloed by its occurrence in distributed regions and general practices, so it may be critical to develop practical exemplars and connect GP supervisors professionally to promote cross-learning about best-practice supervision. This could occur by developing GP supervisor “communities of practice”. Communities of practice are known to work well when groups with shared interests mutually guide each other through their understanding of the same problems in their area. Resources that could also be shared in GP supervisor networks include processes and tools, such as orientation checklists, quality self-reflection tools and topic pro-formas.
Quality of supervision encompassed a comprehensive approach in which supervisors recognised and drew from a deep understanding of practice and learner contexts to find synergies that would improve registrar engagement in general practice and their performance as an emerging GP. Other narrative literature suggests that an educational alliance between supervisor and learner is important to facilitate registrars doing tasks of increasing complexity. Our findings suggested that supervision quality involved investing in secure relationships as a foundation for registrars to receive rich and potentially challenging feedback around areas of uncertainty and mistakes. This builds on previous literature reviews that place relationship-building, communication, and feedback at the crux of a supervisor’s role. With a strong relationship and open lines of communication, quality supervision involved taking opportunities to get registrars to reflect on their own performance, even if situations were challenging. Quality of supervision was also reflected by supervisors who were learner-centred, and who used precise structure and advice to promote optimal learning relative to an emerging set of competencies. This also included helping registrars to become highly functional and conscious of safe but autonomous decision-making, to equip them for their future as independent GPs. This aligns with research suggesting that quality in vocational learning encourages learners to take responsibility and feel empowered. Professional identity was also developed by linking learning to models of comprehensive GP work, which supervisors recognised may be new concepts, given many registrars enter general practice from a hospital environment that is very different to primary care. Quality supervision meant that skill levels were not assumed but actively screened upon entry to the practice and regularly monitored, allowing for both remediation and extension in real time. Supervisors also adjusted their coaching around the registrar’s needs and their capacity for self-regulated learning, building on the findings of other research. Critically, this suggests that quality supervision is highly dynamic, and the broad requirements of supervisors in Australia (Table ) may need to better reflect the constant intersecting cycle of activities involved in quality supervision, rather than narrow or siloed tasks. Further, this research suggests that learning to be a quality supervisor could be challenging, as GP supervisors often work in discrete practice settings, and building quality supervision skills may require regular real-life practice with mentorship from experienced GP supervisors. Within the private business model of general practices, quality in supervision also entailed selecting registrars with attitudes, needs and interests that best fit the practice’s resources. Selection of registrars has not previously been described as a quality issue for supervision, but if a registrar is poorly suited to a practice, and there are limited supervisors in the practice, it may be extremely challenging to provide supervision at a level and range that will meet their needs. Another issue for quality within supervision in a private business model was being able to draw other team members into the supervision model to enable supervisors to progress their own clinical caseload. A key implication is that small practices may need to recruit more capable, self-directed registrars, as the supervisors may otherwise find supervising unsustainable.
Although this study was small and exploratory, it provides the first holistic description of quality within supervision of GP registrars and therefore informs ongoing work to develop optimal general practice training. It may have specific application to quality supervision standards, curriculum, and resource development. The study also had several limitations. It was based in Australia, and the findings should be validated with respect to the context of GP training in other countries. Whilst the study included GP supervisors who were peer recognised, this is still a subjective measure of their standard of supervision. Interviewing under-performing supervisors may have enhanced the potential for this study to reflect on the quality of supervision practice. It is possible that the interruption of this research by the COVID-19 pandemic affected the saturation of the findings, although there were signs that minimal new material arose from the last few interviews. Interpretive validity and trustworthiness were strengthened by verbatim quotation, independent and double-coding, and taking 9 months to explore the themes in depth. All efforts were made to collect the participants’ views in this study, although it is possible that self-disclosure was limited by apprehension about the interviewing researcher’s relationship as an employee of GPSA, or that the participants’ ideas and expectations of the research impeded discussion. This study, although exploratory, provides a starting point for understanding the quality of supervision in general practice from the perspective of GP supervisors who are peer recognised for their supervision work. Quality supervision was understood through lived experience and personal reflection. It encompassed building a caring and trusted relationship with the registrar, drawing on input from the whole practice team, using learner-centred models and adjusting input in real time as registrar competency emerged. Quality was also depicted by building registrar professional identity and capabilities for safe and independent decision-making, whilst promoting regular reflection and feedback processes, even when issues were challenging. The findings may be applied to inform quality supervision standards and resources. Professional networks that link GP supervisors for sharing practical exemplars and resources may improve the capacity to conceptualise and embed quality supervision in general practice.
The predictive role of modified stress hyperglycemia rate in predicting early pneumonia after isolated coronary bypass surgery in patients with diabetes mellitus
c90b4258-8e6b-459b-96bb-047b5576e798
11734818
Surgical Procedures, Operative[mh]
Coronary artery bypass graft (CABG) surgery is a procedure performed with high success rates today; however, serious complications that can occur following these operations remain significant concerns. Postoperative pneumonia (PP) is among the most severe complications. This condition is associated with prolonged hospital stays and an increased risk of morbidity and mortality. According to the literature, the rate of PP ranges from 5% to 21%, with mortality rates varying between 6.2% and 28% in these cases. Various inflammatory parameters obtained from routine blood tests in cardiovascular surgery have been the subject of research for their prognostic value. These parameters have played a role both in the diagnostic approach and as prognostic markers. Hyperglycemia is one of the stress-related factors that requires close monitoring during and after major surgical procedures, particularly in cardiac surgery. Numerous publications have reported that hyperglycemia is associated with a high risk of morbidity and mortality in these cases. Beyond hyperglycemia, the recently developed admission blood glucose (ABG)/estimated average glucose (eAG) ratio has emerged as a prognostic marker in cardiovascular diseases. One study demonstrated a relationship between the ABG/eAG ratio and the development of pneumonia in stroke patients. Another study investigating the predictive power of this ratio found that the ABG/eAG ratio was associated with mortality and poor outcomes in patients with COVID-19. In the current study, we aimed to investigate the predictive role of the modified ABG/eAG (mABG/eAG) ratio, calculated from modified blood glucose data, in the development of pneumonia during the early postoperative period following CABG surgery in diabetic patients. This study was designed as a single-center investigation, including diabetic patients who underwent isolated coronary bypass surgery with cardiopulmonary bypass (CPB) at the Bursa Yüksek İhtisas Training and Research Hospital between January 1, 2018 and January 1, 2023. Patients who underwent re-operations, emergency cases, patients requiring additional cardiac interventions beyond isolated coronary bypass surgery, those with a known history of lung disease, chronic renal failure or previous pneumonia, and non-diabetic patients were all excluded from the study. After applying the exclusion criteria, a total of 549 consecutive patients were included. The demographic characteristics of all patients, preoperative blood values, operative data, and blood glucose measurements during surgery were recorded. Patients who did not develop PP were assigned to the control group, while those who developed PP were assigned to the PP group. Operative management Calculation of mABG/eAG Diagnosis of postoperative pneumonia Ethical statement Statistical analysis All procedures were performed through median sternotomy and under CPB. Pedicled left internal thoracic arteries and saphenous vein grafts were harvested in all patients. Radial artery or right internal thoracic artery grafts were not used in the patients included in the study. Aorto-venous two-stage cannulation was performed to establish CPB following heparinization. To achieve cardiac arrest, a cross-clamp was applied to the ascending aorta, and cold antegrade cardioplegia with high potassium was administered (PLEGISOL®, Pfizer). Blood cardioplegia was administered every 15–20 min to maintain cardiac arrest.
CPB was maintained using a roller pump equipped with a membrane oxygenator and arterial line filter (Maquet, Getinge Group, Germany). The pump flow rates were set between 2 and 2.4 L/min/m², and moderate hypothermia (32 °C) was utilized. Arterial blood gas was evaluated every 20–30 min; immediately before removing the cross-clamp, 500 mL of warm blood cardioplegia was administered. Upon completion of the surgery, the patients were transferred to the cardiovascular intensive care unit. Intra-aortic balloon pump (IABP) support was provided to patients exhibiting low cardiac output, visibly deteriorating cardiac function, and resistant cardiac arrhythmias. All patients received standard postoperative care and were evaluated hourly for extubation readiness. Hemodynamic stabilization (absence of severe cardiac rhythm problems, no need for high-dose vasoactive inotropic support, urine output >0.5 mL/kg/h) was followed by extubation as soon as feasible. Blood parameters were obtained from peripheral venous samples of all patients during their hospitalization. The ABG/eAG ratio, also known as the stress hyperglycemia ratio, is calculated by comparing the patient's hyperglycemic response to stress with the estimated average glucose value. In our study, the ABG value was established as the average of blood glucose levels obtained from all analyses—from the blood gas analysis before anesthesia induction in the operating room to the first blood gas analysis after transfer to the intensive care unit at the end of the operation. After this modification, the mABG/eAG value was calculated using the following formulas:

mABG = [sum of blood glucose levels (before induction of anesthesia + after induction + during the surgery + on admission to the intensive care unit) (mg/dL)] / n,

where n is the number of blood glucose tests conducted in the perioperative period (this number ranged from 6 to 10 in our study group). The eAG value was calculated using the following formula:

eAG = (28.7 × glycosylated hemoglobin %) − 46.7.

In patients with clinically suspected pneumonia, the presence of newly detected infiltration on chest X-ray or an increase in the degree of existing infiltration was considered indicative of pneumonia. Additionally, the diagnosis of pneumonia was confirmed if at least two of the following criteria were met: 1) fever (>38.5 °C) or hypothermia (<36.0 °C); 2) presence of purulent tracheobronchial secretions or an increase in the quantity of existing secretions; and 3) leukocytosis (≥12,000/µL) or leukopenia (≤4000/µL). C-reactive protein and procalcitonin levels were also utilized to support the diagnosis. This study was approved by the Clinical Research Ethics Committee of the Bursa Yuksek Ihtisas Training and Research Hospital, as per protocol, on March 22, 2023 (Approval number: 2011-KAEK-25 2023/03-14). The data were analyzed using SPSS 21.0 (IBM Statistical Package for the Social Sciences, version 21.0, Chicago, IL, USA). The Kolmogorov–Smirnov test and Shapiro–Wilk test were utilized to assess the normality of the data distribution. For data with a normal distribution, Student's t-test was applied, while the Mann–Whitney U test was used for data that did not conform to normal distribution. The results were presented as mean ± standard deviation (SD) for normally distributed data or as median (minimum–maximum) for non-normally distributed data. Categorical variables were presented as frequencies and percentages, with the chi-square test used for their analysis.
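As a concrete illustration of the two formulas above, the following sketch computes the mABG/eAG ratio for a single patient. The function names and the example readings are hypothetical and not taken from the study; only the arithmetic follows the definitions given here.

```python
def mabg(perioperative_glucose_mg_dl):
    """Modified admission blood glucose: mean of all perioperative readings
    (pre-induction, post-induction, intraoperative, ICU admission)."""
    return sum(perioperative_glucose_mg_dl) / len(perioperative_glucose_mg_dl)

def eag(hba1c_percent):
    """Estimated average glucose (mg/dL) from glycosylated hemoglobin."""
    return 28.7 * hba1c_percent - 46.7

# Hypothetical patient: eight perioperative glucose readings (n = 8) and HbA1c of 7.5%
readings = [168, 175, 190, 205, 198, 186, 179, 173]  # mg/dL
ratio = mabg(readings) / eag(7.5)
print(f"mABG/eAG = {ratio:.2f}")  # values above the reported 1.23 cutoff would be flagged as high risk
```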
Multivariate binary logistic regression analysis was performed to identify predictors of PP. A P value of less than 0.05 was considered statistically significant. Receiver operating characteristic (ROC) curve analysis was conducted to predict in-hospital pneumonia development based on the mABG/eAG value, and the area under the curve (AUC) was calculated. Youden's J statistic was used to determine the optimal cut-off value for the mABG/eAG ratio. A total of 549 patients were included in the study. The diagnosis of pneumonia was made at a mean of 3.4 ± 2.6 days postoperatively. Patients who did not develop PP were included in the control group (n = 478, median age = 58 years [35–81]), while those who developed pneumonia were included in the PP group (n = 71, median age = 63 years [37–86]). The median age of patients was statistically significantly higher in the PP group (P < 0.001). The proportion of female patients was 32.6% (n = 156) in the control group and 36.6% (n = 26) in the PP group (P = 0.506). There were no statistically significant differences between the two groups regarding body mass index (BMI), presence of hypertension (HT), smoking, hyperlipidemia (HL), and left ventricular ejection fraction (EF). The preoperative blood values of the patients are shown in . The perioperative characteristics of the patients are presented in . The use of packed blood products, the rate of IABP use, mABG levels, the mABG/eAG ratio, extubation times, re-intubation rates, and postoperative atrial fibrillation (PoAF) rates were all statistically significantly higher in the PP group (P < 0.001, P = 0.002, P = 0.019, P < 0.001, P = 0.012, P = 0.029, and P = 0.017, respectively). Additionally, the mortality rate was higher in the PP group (1.8% vs 7%; P = 0.010). Logistic regression analysis was performed to identify factors influencing the development of PP. In the univariate analysis, the development of PP was significantly associated with age > 70 (odds ratio [OR] = 1.347, 95% confidence interval [CI] 1.190–1.740; P = 0.007), chronic obstructive pulmonary disease (COPD) (OR 1.114, 95% CI 1.018–1.668; P = 0.022), use of packed blood products (OR 0.775, 95% CI 0.535–0.889; P < 0.001), use of IABP (OR 0.855, 95% CI 0.537–0.971; P = 0.006), mABG levels (OR 1.648, 95% CI 1.452–1.856; P = 0.023), mABG/eAG ratio (OR 2.361, 95% CI 1.796–4.650; P < 0.001), extubation time (OR 1.812, 95% CI 1.715–1.994; P = 0.019), and re-intubation (OR 1.530, 95% CI 1.250–2.194; P = 0.021). In the multivariate analysis, the use of packed blood products (OR 1.685, 95% CI 1.453–1.892; P = 0.027), mABG/eAG ratio (OR 1.659, 95% CI 1.190–2.397; P = 0.019), and re-intubation (OR 1.829, 95% CI 1.656–1.945; P = 0.034) were identified as independent predictors of PP. For predicting PP, the cutoff level in the ROC curve analysis was 1.23 for the mABG/eAG ratio (AUC 0.750, 95% CI 0.691–0.810; P < 0.001), with a sensitivity of 75.7% and a specificity of 69.1%. Advancements in cardiac surgery technology have significantly reduced postoperative mortality. However, the incidence of PP remains high, with rates reported to be as much as 21%. Several studies have explored various predictive factors for the risk of PP. In this study, we demonstrated that the mABG/eAG ratio is an independent predictor of pneumonia risk, alongside known risk factors, such as re-intubation and excessive blood product use, following isolated CABG surgery.
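Referring back to the ROC analysis reported above, a Youden-index cutoff such as the 1.23 value can be derived with standard tooling. The sketch below uses synthetic data as a stand-in for the study cohort (which is not publicly available), so the printed numbers are illustrative only.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the cohort: 478 controls and 71 PP cases,
# with the PP group drawn from a slightly higher mABG/eAG distribution.
ratio = np.concatenate([rng.normal(1.10, 0.20, 478),   # controls
                        rng.normal(1.35, 0.25, 71)])   # PP cases
pp = np.concatenate([np.zeros(478), np.ones(71)])

fpr, tpr, thresholds = roc_curve(pp, ratio)
j = tpr - fpr                      # Youden's J at each candidate threshold
best = np.argmax(j)                # threshold maximizing sensitivity + specificity - 1

print(f"AUC = {roc_auc_score(pp, ratio):.3f}")
print(f"optimal cutoff = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```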
The incidence of coronary artery disease is higher among patients with diabetes mellitus, who consequently require CABG more frequently. It has been shown that preoperative hyperglycemia increases the risk of various postoperative complications, and high hemoglobin A1c (HbA1c) levels, an important indicator of long-term blood sugar control, negatively affect outcomes in diabetic patients undergoing CABG. In recent years, the stress hyperglycemia ratio (ABG to eAG ratio), calculated using the blood glucose value and HbA1c at the time of admission, has been widely investigated for its prognostic value in cardiovascular diseases. For instance, in a study involving diabetic patients with ST-segment elevation myocardial infarction, the ABG/eAG ratio was significantly associated with the amount of intracoronary thrombus. The study also found that the predictive power of the ABG/eAG ratio was superior to that of blood glucose levels at admission. Similarly, in non-surgical diabetic patients with heart failure, the ABG/eAG ratio at hospital admission was associated with acute renal injury and major systemic infections. A recent study published in mid-2023 highlighted a significant relationship between a high ABG/eAG ratio and the complexity of coronary artery disease in patients with acute coronary syndrome and diabetes mellitus. Another study evaluated the importance of the ABG/eAG ratio in predicting pneumonia risk following type A acute aortic dissection surgery, a cardiovascular emergency. This retrospective study of 124 patients identified the ABG/eAG ratio, along with prolonged ventilation time, as an independent predictor of PP. In a multicenter, retrospective study involving 1631 diabetic patients, a significant relationship was found between the ABG/eAG ratio and adverse outcomes in patients presenting to the clinic with pneumonia. Additionally, in a retrospective study of 395 patients hospitalized for COVID-19, the ABG/eAG ratio calculated from admission blood values was associated with worse outcomes and in-hospital mortality. Unlike previous studies, in our research, we derived the ABG value from the average perioperative blood glucose levels, reflecting the operative stress experienced by patients. We recorded this as the mABG value. We demonstrated that the mABG/eAG ratio is an independent predictor of pneumonia risk following isolated CABG surgery. Typically, the ABG/eAG ratio in the literature is calculated using blood glucose levels obtained during acute stress. However, our patient group consisted of insulin-dependent diabetic patients who were hospitalized and prepared for elective CABG surgery. We calculated the average blood glucose levels during surgery, based on their response to surgical stress. Therefore, we used the mABG value instead of the standard ABG value in our study. Additionally, we found that re-intubation and increased blood product use were independent predictors of pneumonia risk. The risk of nosocomial pneumonia increases with repeated intubation. A study by Perrotti et al. revealed that the incidence of pneumonia increased in patients who required re-intubation after elective cardiac surgery. During blood product storage, various immunosuppressive substances are released from white blood cells into stored red blood cell units, creating an immunomodulatory state that increases the risk of postoperative infection. A study by Topal and Eren identified increased blood transfusion as an independent predictor of pneumonia risk following cardiac surgeries.
Our study has several limitations. It was a single-center, retrospective study, with the inherent limitations of that design. Additionally, the study was restricted to patients undergoing isolated coronary bypass surgery, and the relatively small sample size is another constraint. Blood glucose levels in CABG operations performed with CPB may be influenced by many factors. Prolonged ventilation and various postoperative morbidities can also lead to increased blood glucose levels, which is a significant limitation of our study. Multicenter prospective studies with continuous blood glucose monitoring throughout the perioperative period are needed. This study introduces a new perspective by modifying the ABG/eAG ratio, which has recently been increasingly used to predict disease development. We demonstrated that the mABG/eAG ratio is an independent predictor of pneumonia development in diabetic patients undergoing coronary bypass surgery. Furthermore, when combined with the amount of blood product use, the predictive value was strengthened. Based on our results, high-risk patients can be identified by calculating the mABG/eAG ratio.
A comprehensive operative risk assessment driving the application of major and emergency surgery in octogenarians
ac09fa2b-99ed-4486-bbef-61015b0f9024
11835961
Surgical Procedures, Operative[mh]
INTRODUCTION Given the considerable rise in global life expectancy, patients requiring in-hospital invasive treatments have been gradually aging (Fowler et al., ). The increasing proportion of elderly patients within the surgical population has prompted healthcare providers and multidisciplinary teams to identify more specific criteria for operative risk assessment at an early stage. Accordingly, the patient's age at the moment of surgery has turned out to be an ambiguous indicator of fitness for surgery, while the balance between the patient's biological characteristics and the type of surgical procedure may provide elements for a more precise operative risk assessment. Frailty is defined as the increased vulnerability resulting from reductions in physiological reserve and multisystem functions, and represents the main determinant of the operative stress response in the elderly (Lin et al., ). A prompt and comprehensive multidimensional assessment (MDA) is the key element for allocation of resources, identification of areas for optimization, and careful decision-making in geriatric or palliative settings. This report describes the clinical case of an octogenarian who, following precise definition and enhancement of his functional reserve, underwent tailored approaches to esophagectomy for cancer and, afterward, emergency laparotomy. CASE PRESENTATION The present case report has been written in compliance with the Consensus-based Clinical Case Reporting Guidelines (CARE). An 83-year-old gentleman presented with a significant medical history (i.e., severe chronic obstructive pulmonary disease, insulin-dependent diabetes, and hypertension) and physical deterioration due to malnourishment (i.e., complete dysphagia with about 8 kg weight loss in 9 weeks) secondary to a locally advanced esophageal adenocarcinoma. In accordance with our dedicated institutional board, an MDA was performed, involving different medical specialists: surgeon, anesthetist, oncologist, physiotherapist and nutritionist. Given the patient's several comorbidities, age and disease-related disablement, a thorough dedicated respiratory, cardiac and performance status assessment was performed through specific tests (i.e., Karnofsky performance scale, spirometry, 6-min walk test, nutritional risk screening score, Duke Activity Status Index) (Inoue et al., ; Wijeysundera et al., , ) in order to define the patient's actual functional impairment guiding the medical decision (Table ). The initial MDA (Figure ) suggested an 8-week prehabilitation program, including muscular and respiratory physical exercises with combined artificial (i.e., enteral + parenteral) nutrition (Table ). Prehabilitation was mostly conducted at home with relatives' involvement and under the physiotherapist's guidance. The intermediate MDA registered general improvements in nutritional status (4 kg weight gain) and respiratory capacity (Table ). Therefore, the patient was submitted to esophageal resection with a minimized surgical approach (i.e., transhiatal) to avoid chest opening and single-lung ventilation. Transhiatal esophagectomy was performed with abdominal and lower-mediastinal lymphadenectomy, cervical end-to-end esophagogastric anastomosis, feeding jejunostomy, and large right groin hernia repair. The operation was uneventful (307 min; blood loss of 210 mL), and ICU monitoring was not necessary after surgery.
Postoperative recovery was complicated by a conservatively treated anastomotic leak and bilateral pleural effusion with pneumonia, requiring antibiotics and non-invasive oxygen support. However, the most significant complication was hemorrhagic shock caused by acute colonic bleeding originating from an occult cecum adenocarcinoma. To drive the urgent treatment strategy, an additional MDA was performed and, given the reduction in functional reserve, the patient was submitted to palliative ileocecal resection with minimization of both the surgical approach (resection restricted to the source of bleeding, without radical lymphadenectomy) and the anesthesia management (exclusive awake regional anesthesia) (Table ). Emergency laparotomy was uneventful (119 min; blood loss of 120 mL), and ICU monitoring was not necessary after surgery. Afterward, postoperative recovery did not present further complications, and the patient was discharged home after pneumonia resolution. After 90 days, the patient was alive and had returned to normal daily activities with regular oral feeding, along with 500 kcal of nutritional supplementation administered daily through the feeding jejunostomy. DISCUSSION Increasing evidence supports the application of major surgery to the elderly population, including the most impactful operations, such as gastroesophageal resections (Laurent et al., ). It has been established that age does not represent an exclusive criterion limiting indications for surgery; rather, the choice should be made with regard to objective parameters, the need for surgery, and possible alternatives. The main role of the MDA (Figure ) is to highlight the most relevant biological determinants in order to identify possible areas of optimization and to establish individual treatments tailored to patient functions. In patients undergoing major abdominal surgery, enhanced recovery after surgery (ERAS) guidelines recommend a careful preoperative cardiovascular and respiratory evaluation, to optimize the management of existing comorbidities and to plan the best intraoperative conduct and postoperative monitoring (Feldheiser et al., ). The development of multidisciplinary pathways to assess and improve patients' medical conditions should be especially encouraged in frail patients (Pang et al., ). Despite the low level of evidence, multimodal prehabilitation programs for patients receiving esophageal cancer resections appear to be associated with improvements in functional capacity (Minnella et al., ). In a recent prospective study by Halliday et al., esophageal cancer patients with a low baseline fitness were more likely to increase their functional capacity after prehabilitation, although these patients also presented a lower level of adherence to exercise programs. In our experience, effective prehabilitation programs can be achieved only through close clinical monitoring. The MDA can be profitably extended to acute conditions and emergency surgery, demonstrating that operative stress minimization (including both surgical and anesthesiology components) aligns with treatment optimization and tailoring. We have previously described the physiology of operative stress and recovery after emergency treatments, highlighting management limitations and optimizations (Puccetti et al., ). In a recent meta-analysis, Hajibandeh et al. reported the significantly high operative risk of octogenarians undergoing emergency general surgery, identifying risk characteristics correlating with mortality.
As commented by Rubin et al., the type of anesthesia also plays a relevant role and contributes to the composition of operative stress. Both general and neuraxial awake anesthesia present their own advantages and contraindications in these specific groups of patients. Considering the severe impairment in respiratory function (i.e., recent pneumonia with the need for oxygen support), we decided to manage the emergency procedure through loco-regional anesthesia only, placing a thoracic epidural catheter. In addition, goal-directed fluid management allowed the intraoperative maintenance of hemodynamic stability, although an adequate level of patient collaboration has to be ensured before treatment. The main limitation of this case report is the single-patient analysis, which includes multiple clinical peculiarities and potentially reduces the generalizability of these results. However, the clinical aspects of perioperative management, functional assessment, and accurate patient-tailored diagnostics could be effectively shared by other surgical units seeking the best way to treat such a delicate category of patients. The present brief report suggests that octogenarians represent a frail category of patients, carrying a high risk of postoperative morbidity and early mortality. Palliative oncological surgery might include either major or urgent operations, as long as this decision is carefully considered and endorsed by a thorough multidimensional assessment. Accordingly, the choice of an invasive approach demands the absence of possible therapeutic alternatives, while every potential consequence must be accurately and promptly weighed within comprehensive counseling of patients and relatives. In conclusion, neither emergency nor elective major surgery in elderly patients should be excluded a priori; rather, these can be tailored and minimized according to the patient's functional resources at a specific time. ERAS protocols and prehabilitation are strongly recommended in the elderly, since considerable functional improvement can be attained, allowing patients to be included in invasive treatments. Therefore, specific and dedicated multidimensional assessments and medical protocols should be developed for elderly and frail patients. No funding information provided. No competing interests declared. Review Board of IRCCS San Raffaele Scientific Institute (Approval ID: 91/INT/2021, 30 June 2021).
Bacterial etiology of ocular and periocular infections, antimicrobial susceptibility profile and associated factors among patients attending eye unit of Shashemene comprehensive specialized hospital, Shashemene, Ethiopia
fc144999-b25a-4ebb-89c8-dcb0a53d3647
7106738
Ophthalmology[mh]
The eye is one of the sense organs in humans and is important throughout life for daily activities. Attention to eye health and cleanliness is therefore essential. Dust, high temperature, microorganisms, and other factors can lead to eye diseases, which may result in loss of sight. Although the eye can be infected, it is remarkably resistant to colonization and infection by microbes. The types of bacteria that colonize the eye differ from those that colonize other parts of the body. Despite this resistance, the eye is prone to infection because the lens and vitreous are avascular, protein-rich structures and thus ideal media for the proliferation of many pathogenic bacteria. The external part of the eye is susceptible to bacterial, fungal, viral and parasitic infections. Microorganisms can also invade and damage the internal parts of the eye, which often results in loss of vision. The source of eye infection can be exogenous or endogenous. Clinically, external eye infections can present as conjunctivitis, keratitis, blepharitis, canaliculitis, dacryocystitis, external hordeolum and cellulitis. The clinical signs and symptoms of inflammation of the eyes along with pus are frequently caused by bacteria. Globally, purulent bacterial conjunctivitis is mainly caused by Gram-positive bacteria. The most common causative agents are Staphylococcus epidermidis, Staphylococcus aureus, Streptococcus pneumoniae, and Haemophilus influenzae. The microbial etiology, as well as the drug susceptibility and resistance profile, may differ with geographic location. The common routes of transmission of pathogens are contact with contaminated fingers, eyelid margins, and adjacent skin; spread from the nasopharynx via the nasolacrimal duct; infected eye drops or contact lenses; and, more rarely, spread from the genitals or via the bloodstream. Bacterial eye infection needs immediate treatment. Treatment of bacterial eye infections involves empirical treatment with topical ophthalmic broad-spectrum antibiotic formulations, which has become a prevailing practice among ophthalmologists and general practitioners. This, along with the irrational use of drugs and the availability of antibiotics without prescription, has led to the development of resistance to commonly used antibiotics. There are an estimated 1.4 million blind children worldwide, of whom about 320,000 live in Sub-Saharan Africa. In Ethiopia, the prevalence of blindness was reported to be 1.6%, and about 87.4% of the cases were due to preventable causes, bacterial infection being one of them. Therefore, the aim of this study was to determine the bacterial etiology of ocular and periocular infections, the antimicrobial susceptibility profile of the isolates, and associated factors. Study design, period and area Study population Variables Operational definitions Data collection Data analysis Data were entered and cleaned by using SPSS version 22.0 software. All variables were subjected to descriptive and inferential statistics. P values, 95% confidence intervals (CI), and logistic regression were used to interpret the results. Factors that showed a P value less than 0.25 in bivariate analysis were further assessed by multivariate analysis, and a P value less than 0.05 was considered statistically significant. A hospital-based cross-sectional study was conducted among patients suspected of ocular and periocular infections at Shashemene Comprehensive Specialized Hospital (SCSH), eye unit, from September 1, 2018, to March 30, 2019.
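A minimal sketch of the two-stage factor screening described above (bivariate logistic regression with P < 0.25 as the entry criterion, then multivariate logistic regression at P < 0.05). The file name and factor names below are placeholders, not variables from the actual questionnaire.

```python
import pandas as pd
import statsmodels.api as sm

def screen_factors(df, outcome, factors, entry_p=0.25):
    """Keep factors whose bivariate logistic-regression P value is below entry_p."""
    kept = []
    for f in factors:
        X = sm.add_constant(df[[f]])
        model = sm.Logit(df[outcome], X).fit(disp=0)
        if model.pvalues[f] < entry_p:
            kept.append(f)
    return kept

# Hypothetical data: binary 'infected' outcome plus candidate factors
df = pd.read_csv("eye_unit_data.csv")              # placeholder file name
candidates = ["age", "rural", "prior_infection"]   # illustrative factor names
selected = screen_factors(df, "infected", candidates)

# Multivariate model on the surviving factors; P < 0.05 taken as significant
final = sm.Logit(df["infected"], sm.add_constant(df[selected])).fit(disp=0)
print(final.summary())
```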
SCSH is located in Shashemene town, Kuyera sub-city. Shashemene is located 250 km to the south of Addis Ababa, the capital city of Ethiopia. The hospital has 267 beds in the inpatient department, five outpatient departments, and other health service delivery units. All patients seeking treatment for eye infections at SCSH were considered the source population. Patients with signs and symptoms of ocular and periocular infections were included in the study. Patients on antibiotics were excluded from the study. In this study, a convenience sampling technique was used. The sample size was calculated using the single-proportion formula n = Z²P(1−P)/d², where n = number of study participants, Z = reliability coefficient for a 95% confidence level (1.96), P = prevalence from a previous study in the southern part of Ethiopia (21%), and d = margin of error (0.05). A contingency of 30% was added. Based on this calculation, the sample size was 332 (a worked check of this calculation appears at the end of this section). Dependent variables: bacterial infection of the ocular and periocular area and antimicrobial susceptibility profile. Independent variables: sociodemographic and clinical data. Operational definitions were as follows. Dacryocystitis: an eye presenting with purulent reflux on medial canthal massage, fever, cellulitis surrounding the affected lacrimal sac, altered visual acuity and pupillary reaction, diplopia and loss of peripheral vision. Conjunctivitis: an eye that is red (bloodshot) and oedematous, with whitish, purulent discharge, and sub-conjunctival haemorrhage with lesion. Blepharitis: a gritty (sore) eye with crusting on the lashes that appears red, with lid-margin inflammation or redness, collarettes around the base of each eyelash, thickening and cloudiness of the clear oil of the meibomian glands, lash loss, itching or a tickling sensation around or on the eyelids, and the presence of Demodex mites. Trauma: an eye presenting with pain, watering, foreign-body sensation and sensitivity to light, with any sign of corneal laceration (distorted pupil), the sensation of something having blown into the eye, redness, or any staining with fluorescein. Blepharo-conjunctivitis: an eye presenting with a burning, irritated or itchy sensation, with redness, dryness and scaling of the eyelids. Sociodemographic and clinical data Specimen collection Culture and identification Antimicrobial susceptibility testing Sociodemographic data of each study participant were collected by attending nurses using a structured questionnaire. Ocular and periocular examination (clinical data) was performed using a slit-lamp bio-microscope to identify any focus of infection or inflammation in all study participants by the attending ophthalmologist. The diagnosis was recorded, and specimens were collected by the attending ophthalmologist from all study participants presenting with ocular and periocular infections. The quality of the sociodemographic and clinical data was ensured by using a structured, pretested questionnaire. Specimens were collected from the eyelids and conjunctiva using a sterile cotton swab moistened with sterile saline. The swab was rolled over the eyelid margin from the medial to the lateral side and back again. Pus from the lacrimal sac (dacryocystitis) and from blepharitis lesions was collected using a dry sterile cotton-tipped swab, either by applying pressure over the lacrimal sac to allow the purulent material to reflux through the lacrimal punctum or by irrigating the lacrimal drainage system. Two swabs were collected per individual, labeled and transported immediately to the Microbiology Laboratory of SCSH.
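The worked check of the single-proportion sample-size calculation referenced above; the function name and rounding choice are assumptions, but the inputs are those stated in the text.

```python
import math

def single_proportion_n(z=1.96, p=0.21, d=0.05, contingency=0.30):
    """n = Z^2 * P(1-P) / d^2, inflated by a contingency fraction."""
    base = (z ** 2) * p * (1 - p) / (d ** 2)   # ≈ 254.9 before contingency
    return math.ceil(base * (1 + contingency))

print(single_proportion_n())  # 332, matching the study's sample size
```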
One swab was Gram stained to assess the presence of bacteria, the Gram reaction and the presence of polymorphonuclear cells. The second swab was inoculated onto 5% sheep blood agar, MacConkey agar, chocolate agar and mannitol salt agar (Oxoid, Ltd) and incubated at 37 °C for 24–48 h. An aerobic atmospheric condition was maintained for the MacConkey agar and mannitol salt agar, while the chocolate agar and 5% sheep blood agar were incubated in a 5–10% CO2 atmosphere. All plates were initially examined for growth after 24 h, and cultures with no growth were re-incubated for an additional 48 h. After pure colonies were obtained, further identification was conducted using standard microbiological techniques, including Gram stain, colony morphology, and biochemical tests. Gram-negative bacteria were identified using several biochemical tests, such as Kligler iron agar, citrate utilization test, lysine decarboxylase test, urease test, motility test, indole test, oxidase test, tributyrin, and X and V factors. Gram-positive bacteria were identified using hemolytic activity on sheep blood agar, catalase test, coagulase test, bile solubility and optochin disk test. The quality of laboratory data was ensured by checking the expiry date of all reagents and culture media, checking the sterility of culture media before use, and conducting performance tests of culture media using known strains such as S. aureus (ATCC 25923), E. coli (ATCC 25922), Pseudomonas aeruginosa (ATCC 27853), H. influenzae (ATCC 49247), Neisseria meningitidis serogroup A (ATCC 13077), S. pneumoniae (ATCC 49619) and Neisseria gonorrhoeae (ATCC 49226). Antimicrobial susceptibility testing was carried out for each identified bacterium using the disc diffusion method based on the CLSI 2018 guideline. Ten antibiotic disks were used: amoxicillin-clavulanic acid (AMC) 20 μg, ceftriaxone (CRO) 30 μg, ciprofloxacin (CIP) 5 μg, trimethoprim-sulphamethoxazole (SXT) 25 μg, erythromycin (E) 15 μg, gentamicin (CN) 10 μg, tetracycline (TE) 30 μg, chloramphenicol (CAF) 30 μg, penicillin (P) 10 U and clindamycin (DA) 2 μg (Oxoid Ltd., Basingstoke, Hampshire, UK). Briefly, 3–5 pure colonies of bacteria were transferred into a test tube containing 1 mL of sterile normal saline and mixed until the suspension became homogenous. The suspension was adjusted to the 0.5 McFarland standard. The suspension was uniformly inoculated onto Mueller–Hinton agar (MHA) for non-fastidious organisms. For fastidious organisms such as Neisseria species, MHA enriched with 0.5% sheep blood was used, and Haemophilus test medium (HTM) was used for H. influenzae. The antibiotic disks were placed on the MHA using a disc dispenser and incubated at 37 °C for 18–24 h, and the zone of inhibition around each disc was measured to the nearest millimeter using a graduated caliper. The isolates were classified as susceptible, intermediate and resistant according to the CLSI guideline (see the sketch below). There are no antibiotic susceptibility breakpoints for topical antibiotic therapy, and it is assumed that comparable or higher antibiotic concentrations are achieved in the ocular tissue during topical treatment. Sociodemographic data Clinical data Bacterial etiology of ocular and periocular infections Antimicrobial susceptibility profile Factors associated with ocular and periocular infection None of the factors were significantly associated with ocular and periocular infections (P > 0.05).
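Returning to the disc-diffusion step described above, the final susceptible/intermediate/resistant call is a simple lookup of zone diameter against organism- and antibiotic-specific breakpoints. The numbers below are invented placeholders, not actual CLSI 2018 breakpoints, which vary by organism–antibiotic pair.

```python
# Placeholder breakpoints in mm — NOT real CLSI values; each organism–antibiotic
# pair has its own published susceptible/resistant zone-diameter limits.
BREAKPOINTS = {"gentamicin": (15, 12), "ciprofloxacin": (21, 15)}

def interpret(antibiotic, zone_mm):
    """Classify a disc-diffusion inhibition zone as S, I, or R."""
    s_min, r_max = BREAKPOINTS[antibiotic]
    if zone_mm >= s_min:
        return "S"  # susceptible
    if zone_mm <= r_max:
        return "R"  # resistant
    return "I"      # intermediate

print(interpret("gentamicin", 18))  # 'S' under these placeholder limits
```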
The proportion of bacterial eye infection among study participants in the 0–2, 3–11, 12–17, 18–39 and ≥40 years age groups was 14 (82.3%), 44 (83%), 26 (66.7%), 73 (54.9%) and 41 (45.6%), respectively. The proportion of bacterial eye infection in rural and urban areas was 74 (67.9%) and 124 (55.6%), respectively. The proportion of bacterial eye infection among participants with repeated infections was 100 (50.5%), and among those with non-repeated infections 98 (49.5%). The proportion of bacterial eye infection among participants with no formal education, elementary, high school, kindergarten, and college and above was 12 (44.4%), 73 (64%), 36 (48%), 32 (91.4%) and 45 (55.6%), respectively. The proportion of bacterial eye infection among participants with a history of surgery was 4 (1.2%), and among those without surgery 194 (98%). In the current study, a total of 332 patients seeking treatment for eye infection at SCSH were included; there were no non-respondents. Of the total study participants, 177 (53.3%) were male, 133 (40.1%) were in the 18–39 years age group, and 223 (67.2%) were from rural areas. Most of the study participants were students and married (Table ). Among the 332 study participants assessed, the proportions of clinical findings were as follows: conjunctivitis 109 (32.8%), dacryocystitis 76 (22.9%), blepharitis 60 (18.1%), trauma 48 (14.5%), and blepharo-conjunctivitis 39 (11.8%). Of these, 91 (83.5%) conjunctivitis, 10 (20.8%) trauma and 35 (46.1%) dacryocystitis cases were caused by bacteria (Table ). Out of 332 study participants examined for ocular and periocular infection, 198 (59.6%) were culture positive; no mixed infections were found. Among the total bacteria isolated, 135 (68.2%) were Gram-positive and 63 (31.8%) were Gram-negative. S. aureus was the predominant bacterium (Table ). The predominant bacterium in almost all clinical presentations was S. aureus, except for dacryocystitis, where the predominant bacteria were S. aureus and coagulase-negative staphylococci (CoNS) (Table ). Of the 135 Gram-positive bacteria, 124 (91.9%), 120 (88.9%) and 114 (84.4%) were susceptible to gentamicin, clindamycin and erythromycin, respectively. Among 74 S. aureus isolates, 69 (93.2%) and 57 (77%) were resistant to penicillin and tetracycline, respectively. The majority of CoNS were resistant to penicillin, 56 (98.2%), and tetracycline, 50 (87.7%). All S. pneumoniae isolates were susceptible to penicillin (Table ). Among the 63 Gram-negative bacteria isolated, 62 (98.4%) and 60 (95.2%) were susceptible to ciprofloxacin and ceftriaxone, respectively. All N. gonorrhoeae isolates were susceptible to ciprofloxacin, ceftriaxone, tetracycline, and penicillin. All N. meningitidis isolates were susceptible to trimethoprim-sulphamethoxazole, ciprofloxacin and ceftriaxone. 2/17 (11.8%) of E. coli, 2/9 (22.2%) of K. pneumoniae, and 3/3 (100%) of P. mirabilis were resistant to amoxicillin-clavulanic acid (Table ). The prevalence of culture-positive ocular and periocular infections caused by bacteria found in this study, 59.6%, is in line with studies conducted in various parts of Ethiopia. Our finding is low compared to reports from India (88%), Nigeria (74.9%) and Southern Ethiopia (74.7%), but higher than reports from Bangalore (34.5%), Gondar (47.4%) and Addis Ababa (54.2% and 54.9%). The difference can be attributed to geographic location, study period, study population, sanitary conditions and the laboratory methods used.
In the current study, we did not diagnose eye infection caused by Chlamydia trachomatis; this could have contributed to the lower prevalence compared to other studies. Overall, the prevalence of culture-positive ocular and periocular infection in our study is comparable with findings from other parts of Ethiopia, but low compared to another Ethiopian study. Gram-positive bacteria were predominant in our study, consistent with previous reports. In this study, Gram-positive cocci were the most common isolates (68.2%), which is in line with other studies from Ethiopia and other countries, although low compared to a report from another part of Ethiopia (93.7%). In the current study, the predominant bacterial isolates were S. aureus (37%) followed by CoNS (29%). This finding is comparable with previous studies conducted in Ethiopia, Nigeria and India. The proportion of Gram-negative bacteria isolated in this study (31.8%) is high compared to reports from Ethiopia. Among the Gram-negative bacteria isolated in the present study, E. coli (8.6%) was the most prevalent, followed by K. pneumoniae (4.6%) and Moraxella species (4%). N. gonorrhoeae was also isolated from 6 patients (5 with conjunctivitis and 1 with blepharitis), suggesting contamination of the eye from the genital area. The high proportion of E. coli in this study may indicate fecal contamination of the eye. This finding is in line with a study from Nigeria. Conjunctivitis was the dominant clinical presentation (32.8%) observed in this study, followed by dacryocystitis (22.9%), blepharitis (18.1%), trauma (14.5%) and blepharo-conjunctivitis (11.7%). In other studies, conjunctivitis was also reported to be predominant. The proportion of conjunctivitis found in this study is lower than that reported from Addis Ababa, Ethiopia (40.5%), although the share of conjunctivitis cases caused by bacteria is comparable to our study (83.5%). In the current study, S. aureus was the most common isolate in all clinical presentations. This finding is similar to a report from India. In this study, the majority of bacteria were resistant to tetracycline and penicillin, while most were susceptible to ciprofloxacin. This finding is in agreement with studies conducted in Gondar, Ethiopia, Jimma, Ethiopia, and Uganda. The reason for increased resistance to penicillin and tetracycline may be prior exposure of the isolates to these antibiotics. Moreover, these antibiotics are common, inexpensive and easily accessible to patients, and can often be purchased over the counter without prescription in different pharmacies. The majority of S. aureus isolates were resistant to tetracycline (77%) and to penicillin (93.2%); however, 97% were susceptible to gentamicin. A similar finding was reported from another part of Ethiopia. However, low susceptibility (71.9%) to gentamicin and high susceptibility to penicillin were reported from other parts of Ethiopia. Like S. aureus, most CoNS (98.3%) were resistant to penicillin; similarly high resistance to penicillin was reported from Ethiopia. Unlike other studies, the majority of CoNS were resistant to tetracycline. The rate of resistance to clindamycin was high compared to findings from other parts of Ethiopia. All S. pneumoniae isolated in this study were susceptible to penicillin, erythromycin and gentamicin; this is not in line with other studies. In contrast to another study from Ethiopia, all E. coli isolates in this study were susceptible to ciprofloxacin and gentamicin.
Of the E. coli isolates, 11.8% were resistant to amoxicillin-clavulanic acid. All K. pneumoniae isolates in this study were susceptible to ciprofloxacin, ceftriaxone, and gentamicin; seven (77.8%) were resistant to ampicillin and 22.2% to amoxicillin-clavulanic acid, in partial agreement with the report by Getahun et al. In this study, none of the factors were significantly associated with ocular and periocular infections caused by bacteria (at P < 0.05). However, most bacterial isolates came from participants in the 3–11 years age group, those residing in rural areas, and those attending kindergarten. This finding contrasts with some other studies. A report from another part of Ethiopia indicated a significant association between being a farmer and external eye infection caused by bacteria , whereas another study did not report a significant association between the factors measured and bacterial external eye infection . According to Getahun et al., previous use of antimicrobials and the duration of the present illness were significantly associated with bacterial eye infection.
Limitations of the study
The lack of reagents limited the diagnosis of Chlamydia infections. Because we used a convenience sampling technique, selection bias could not be avoided, and the study population was not representative of all bacterial eye infections in the study area. Identification of bacteria in this study does not necessarily mean that the isolated bacteria were responsible for the infection/inflammation.
In the current study, the most prevalent clinical presentation was conjunctivitis, followed by dacryocystitis. Of the 332 study participants with ocular and periocular infections, 59.6% were culture-positive. Gram-positive bacteria were the most prevalent, with S. aureus taking the largest share, followed by CoNS. Most Gram-positive bacteria were resistant to penicillin and tetracycline. None of the factors were significantly associated with external eye infection caused by bacteria.
Transferable, easy-to-use and room-temperature-storable PCR mixes for microfluidic molecular diagnostics
15fc72d4-3567-4387-938b-4c7183769ad7
8353973
Pathology[mh]
Introduction
Since the outbreak of coronavirus disease 2019 (COVID-19) , on-site molecular diagnosis has become increasingly important . Recent advances in point-of-care (POC) testing, especially microfluidic technology, make it possible to develop rapid, simple, cost-effective and portable molecular diagnostic tools on site . However, traditional PCR reagents, typically made for central laboratories, are not suitable for microfluidic molecular diagnosis unless a freeze-drying method is introduced. Firstly, traditional PCR reagents cannot be stored at room temperature (RT), as the water molecules they contain drive many destabilization pathways . Freeze-drying preserves the activity of qPCR reagents during long-term storage at RT because the process removes most of the water molecules . Thus, a microfluidic chip containing freeze-dried reagents can be stored anywhere, irrespective of local preservation conditions. Secondly, liquid-form PCR reagents are cumbersome and complicated to prepare, whereas freeze-dried reagents are convenient to use because they only require reconstitution in water. With fewer operating steps, the operational complexity, preparation time, and pipetting errors can all be reduced. Thirdly, instead of loading each component of a PCR reagent into a microfluidic chip separately, one can transfer all components into the chip easily if they are freeze-dried as beads ( Aii). This also simplifies the design of the microfluidic chip, as only one chamber is needed to store all the components for PCR. Last but not least, microfluidic molecular diagnostics typically manipulates small volumes of liquids, including small sample volumes, which reduces detection sensitivity. Freeze-dried PCR mixes can compensate for this if they are reconstituted with sample instead of water ( B). Several publications on freeze-dried PCR mixes have appeared during the past 20 years . In previous studies, electrophoresis has frequently been used to evaluate the activities of freeze-dried PCR reagents . However, the nucleic acids detected by electrophoresis represent the final products of the PCR, which are all the same once PCR reaches the plateau phase. Under this circumstance, the band intensities of PCR reagents with different levels of residual activity will appear the same in the electrophoresis assessment ( Civ). Quantitative real-time PCR (qPCR) can distinguish differences in reagent activity because it monitors changes in nucleic acids throughout the PCR process. Using this method, the quantification cycle (Cq) values of degraded reagents will be larger than those of reagents with 100 % activity ( Ciii). However, most freeze-dried PCR reagents evaluated by qPCR have demonstrated poor stability during long-term storage at RT , especially those designed for RNA target detection , which contain the thermally unstable reverse transcriptase . Worse still, most PCR reagents are currently freeze-dried in PCR tube strips and cannot be transferred ( Ai), which poses a greater challenge to current microfluidic diagnostic technology. Even if a freeze-drying method meets all of the requirements detailed above, stability testing over long periods under normal conditions for each production batch is time-consuming, labor-intensive, and not cost-effective. Generally speaking, biological reagents age faster when stored at higher temperatures.
Several relevant publications have attempted to use elevated temperatures in accelerated stability testing to shorten the evaluation period for freeze-dried PCR reagents ( Dv and ) . However, the temperatures they selected were limited, and the observation periods they reported were relatively narrow owing to the suboptimal freeze-drying methods used for their PCR reagents. Also, even if accelerated stability testing can be used to shorten the evaluation period, real-time stability testing at RT remains necessary to establish the correlation between storage periods at higher temperatures and at RT, which can still be time-consuming. To avoid real-time stability testing at RT, researchers have attempted to translate accelerated stability testing data into a predicted shelf life at RT using mathematical models ( Dvi and ). In this study, we present a freeze-drying method to generate transferable, easy-to-use and RT-storable PCR mixes that are suitable for microfluidic molecular diagnosis. In addition, we introduce accelerated stability testing to shorten the evaluation period of these freeze-dried PCR mixes, and we employ mathematical models to predict the shelf life of the freeze-dried mixes, with further verification of their accuracy and potential influencing factors. The results of this study should foster the development of molecular diagnostics in both central laboratories and resource-limited areas.
Materials and methods
Specimens
Enterovirus 71 (EV71), coxsackievirus A16 (CA16), human immunodeficiency virus (HIV), cytomegalovirus (CMV), hepatitis B virus (HBV), Escherichia coli BL21 ( E. coli ), and human hepatoma cells (HuH-7) were supplied by the National Institute of Diagnostics and Vaccine Development in Infectious Diseases (Xiamen, China). Before use, viruses were inactivated using methods appropriate for each virus.
Nucleic acid extraction
Nucleic acids were extracted using the Viral DNA/RNA Purification Kit, Bacteria DNA Purification Kit, or Tissue/Cell DNA Purification Kit with the DOF-9648 purification system (GenMagBio, China), according to the manufacturer's protocol. The extracted nucleic acids were stored in 1.5-mL sample tubes at −80 °C before PCR.
PCR assay
The 40-μL reactions and thermal cycling were the same as those described in our previous article , except for the sequence-specific primers and probes .
Freezing step
All PCR components were added to a 15 mL centrifuge tube, supplemented with trehalose [10 % final concentration (w/v), Sigma-Aldrich], mannitol [2.5 % final concentration (w/v), Sigma-Aldrich] and polyethylene glycol 20,000 [PEG20000, 1.5 % final concentration (w/v), Sigma-Aldrich]. The mixes were aliquoted into liquid nitrogen with a pipette (Thermo Fisher Scientific) to generate spherical reagents ( A). After solidification, the beads were transferred to a metallic tray and stored at −20 °C for 1 h to anneal. The tray containing the beads was then transferred to a freeze dryer (Advantage 2.0, VITRIS) whose shelf was pre-cooled to −40 °C.
Freeze-drying process
After loading the samples, the shelf temperature of the freeze dryer was maintained at −40 °C for 720 min, followed by 25 °C for 180 min. The chamber pressure was maintained at 100 mTorr throughout the freeze-drying process. Once freeze-drying was completed, the dried mix was packaged into an aluminum foil bag using a vacuum packaging machine (DZ-400, Shanghai Hongde Packaging Machinery Co. Ltd, China).
The entire freeze-drying process was performed in an environment with a humidity of less than 2 %.
Karl Fischer titration
The residual moisture content of the freeze-dried reagents was determined using Karl Fischer titration, as described in detail in our previous article .
Real-time and accelerated stability testing
The packaged freeze-dried PCR mixes were stored at RT (18–25 °C), 37 °C, 56 °C, and 80 °C for extended periods. During long-term preservation, activity was assessed at multiple time points by reconstituting the mixes to their original volumes with nuclease-free water, followed by qPCR. High-, middle-, and low-concentration samples, whose Cq values were approximately 30, 33, and 36 by qPCR, respectively, were employed in this study. Differences between the Cq values of the freeze-dried reagents and freshly prepared wet reagents (ΔCq) were used to evaluate changes in activity. The activities of the freeze-dried mixes were considered acceptable (ΔCq < 1), altered (1 ≤ ΔCq < 10), or lost (ΔCq ≥ 10) according to their corresponding ΔCq values.
Results and discussion
Transferable and easy-to-use freeze-dried PCR mixes
In this study, a freeze-drying method was established to produce microfluidic-compatible PCR reagents using pipettes and liquid nitrogen ( A). All components of the PCR reagents were mixed together and freeze-dried as a bead, which could be transferred to the microfluidic chip designed for nucleic acid testing in our previous article ( B). Moreover, this pre-mixed freeze-dried reagent is convenient to use, as the only required operation is reconstitution with water, which can be completed within 5 s ( C). Beyond microfluidic molecular diagnostics, this freeze-dried PCR reagent can also simplify laboratory testing. With fewer operating steps, the preparation of liquid-form qPCR reagents is no longer complicated, cumbersome or time-consuming, and pipetting errors , errors associated with improper handling of wet reagents, and the requirements for the operating environment and personnel can all be reduced. In all, the characteristics of the freeze-dried PCR mixes described above make them suitable for both microfluidic applications and laboratory testing. However, the cleanliness of the freeze-dried PCR beads may be compromised when they are transferred by tweezers ( B). Our future research will focus on developing a specialized method to transfer these beads automatically, whereby reagents could be dropped from a clean, sealed bead carrier into the microfluidic chips in turn .
Stability of the freeze-dried PCR mixes when stored at RT for 1–2 years
The freeze-dried PCR mixes were then stored at RT for extended periods to verify their stability during long-term storage. Thus far, 19 PCR mixes for different target detection purposes have been stored at RT (18–25 °C) for 1–2 years , including the reagents for RNA target detection: EV71-1 (569 days, A), EV71-2 (432 days, B), CA16 (312 days, C), HIV-1 (358 days, D), HIV-2 (503 days, E), GAPDH-1 (321 days, F), and GAPDH-2 (358 days, G); and the reagents for DNA target detection: CMV-1 (617 days, H), CMV-2 (362 days, I), CMV-3 (495 days, J), HBV-1 (426 days, K), HBV-2 (426 days, L), E. coli -1 (512 days, M), E. coli -2 (483 days, N), E. coli -3 (483 days, O), E. coli -4 (483 days, P), E. coli -5 (483 days, Q), ACTB-1 (464 days, R), and ACTB-2 (464 days, S).
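The ΔCq acceptance rule defined in the stability-testing methods above maps directly onto a small decision function; a minimal sketch in Python (the function name and example values are illustrative, not from the paper):

```python
def classify_activity(cq_freeze_dried: float, cq_fresh_wet: float) -> str:
    """Classify residual activity of a freeze-dried PCR mix from its Cq shift.

    Uses the thresholds defined in the Methods: the difference between the Cq
    of the reconstituted freeze-dried mix and that of a freshly prepared wet
    mix (delta_cq) is 'acceptable' below 1 cycle, 'altered' from 1 up to 10,
    and 'lost' at 10 or more.
    """
    delta_cq = cq_freeze_dried - cq_fresh_wet
    if delta_cq < 1:
        return "acceptable"
    if delta_cq < 10:
        return "altered"
    return "lost"

# Example: a mix whose Cq drifted from 30.1 (fresh wet) to 30.6 after storage.
print(classify_activity(30.6, 30.1))  # -> "acceptable"
```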
All tested reagents maintained performance consistent with freshly prepared wet reagents at RT for at least 1–2 years, and their corresponding real-time stability tests at RT remain ongoing. In the above 19 PCR mixes, all components other than the sequence-specific primers and probes , including the 10 × PCR Buffer, dNTPs, DNA polymerase, and reverse transcriptase (for RNA target detection only), were identical. The consistent stability of all 19 freeze-dried PCR reagents demonstrated that different primers and probes have little influence on reagent stability and that the freeze-drying method described in our study is universal. This conclusion is of great importance: when researchers wish to freeze-dry a new PCR reagent, the only operation required is to provide the relevant primers and probes, rather than spending substantial time optimizing the PCR components, freeze-drying process, lyophilization additives and so on. In an emergency, such as the outbreak of coronavirus disease 2019 (COVID-19), this method could allow pathogen-specific, RT-storable PCR reagents to be immediately synthesized, freeze-dried, and distributed. In the following experiments, reagent EV71-1 was selected as the representative of the reagents for RNA target detection because its storage period is currently the longest (569 days, A). Similarly, reagent CMV-1 was used as the representative mix for DNA target detection because it has the longest storage period (617 days, H).
Selection of temperatures for accelerated stability testing
Before accelerated stability testing was employed to shorten the stability testing period, appropriate temperatures had to be chosen. Because the packaging consumables could not withstand temperatures above 200 °C, temperatures of 60, 80, 100, 120, 140, 160, 180, and 200 °C were examined to determine the highest temperature that could be used for testing. When the freeze-dried PCR bead ( Ai) was stored at 100 °C or higher, it shrank rapidly ( Aii), then gradually turned from white to yellow ( Aiii) and finally brown ( Aiv). These changes occurred faster at higher storage temperatures ( B) and led to loss of reagent activity. By contrast, no obvious changes in appearance or activity were observed when the bead was stored at 60 °C or 80 °C for more than 12 h. Thus, 80 °C was chosen as the highest temperature. To obtain comprehensive evaluation results, 37 °C and 56 °C were also examined, in addition to 80 °C and RT. Therefore, the temperature gradient selected for accelerated stability testing was RT, 37 °C, 56 °C, and 80 °C.
Shortened evaluation period for the freeze-dried PCR mix
Accelerated stability testing was then performed at the selected temperatures (37 °C, 56 °C, and 80 °C) to verify the feasibility of shortening the evaluation period. The freeze-dried PCR mixes for RNA target detection could be stored at 37 °C, 56 °C, and 80 °C for 99 days, 6 days, and 1 day, respectively ( A and ). The mixes for DNA target detection could be stored at 37 °C, 56 °C, and 80 °C for 617 days, 26 days, and 21 days, respectively ( B and ). These results indicated that the evaluation process for freeze-dried PCR mixes can be accelerated using higher temperatures: for the reagents used for RNA detection, 99 days at 37 °C, 6 days at 56 °C or 1 day at 80 °C would be equivalent to 567 days at RT. Similarly, for the reagents used for DNA detection, 26 days at 56 °C or 21 days at 80 °C would be equivalent to 617 days at RT or even at 37 °C.
In addition, the higher the temperature employed, the shorter the evaluation process ( A–B). When 37 °C was used for accelerated stability testing, the freeze-dried PCR mixes were relatively stable: the evaluation period of the reagent for RNA target detection could only be reduced to 99 days, and that of the reagent for DNA target detection remained longer than 617 days. This indicated that 37 °C was not adequate to shorten the evaluation period. By contrast, the evaluation periods of the freeze-dried PCR mixes could all be shortened to less than one month at 56 °C and 80 °C. To our knowledge, few studies have used 56 °C for accelerated stability testing of freeze-dried PCR mixes, and this represents the first study to use 80 °C for this purpose. However, reagents are more prone to instability when stored at higher temperatures . Under these circumstances, subtle variations among different reagents may be indistinguishable, as inconsistent results might be caused by the instability itself; low temperatures have the opposite effect. To obtain more reliable and comprehensive results, an ideal strategy would evaluate several temperatures for accelerated stability testing.
Predicted shelf lives of freeze-dried PCR mixes using the Q10 method
To avoid real-time stability testing at RT, attempts have been made to predict the shelf lives of freeze-dried reagents directly using mathematical models. However, no mathematical method has been specifically designed to predict the shelf life of freeze-dried PCR reagents, and the accuracy of existing models has not been verified to date. In this section, we verify the feasibility and accuracy of the most widely used mathematical model, the Q10 method, for determining the shelf lives of PCR reagents. In the Q10 method, the relationship between the shelf life of a biological material at RT and that at higher temperatures is calculated as follows :

Shelf life = S_X × (1 + Q_10^((T_X − T_RT)/10))

where T_RT = 25 °C; T_X = X °C (that is, T_37 = 37 °C, T_56 = 56 °C, T_80 = 80 °C); S_X is the accelerated shelf life observed at the higher temperature T_X; and Q_10 is the temperature coefficient. For most biological reagents, Q_10 ranges from 1.8 to 3 , and a lower (higher) Q_10 yields a shorter (longer) estimated shelf life. Thus, Q_10 = 1.8 and Q_10 = 3.0 were employed together to predict the shelf-life ranges of the freeze-dried PCR mixes used in our study. For the freeze-dried PCR reagents intended for RNA target detection, the shelf lives were predicted to be 299–469 days, 43–187 days, and 26–422 days, based on the shelf lives observed at 37 °C, 56 °C, and 80 °C, respectively ( C). For the freeze-dried PCR reagents intended for DNA target detection, the corresponding predictions were 1866–2922 days, 186–810 days, and 553–8860 days ( D). This method is thus convenient and time-saving for predicting the shelf lives of freeze-dried PCR mixes without performing real-time stability tests at RT. However, some of the shelf-life ranges predicted using the Q10 method did not match the real-time stability testing results. For example, the shelf lives predicted for the freeze-dried RNA reagents, at all three accelerated temperatures, were shorter than the actual RT shelf life ( C).
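The Q10 calculation above is easy to reproduce; a minimal sketch in Python that recovers the predicted ranges reported for the RNA reagent (using the observed accelerated shelf lives of 99, 6, and 1 days):

```python
def q10_shelf_life(s_x_days: float, t_x: float, q10: float, t_rt: float = 25.0) -> float:
    """Predicted RT shelf life from one accelerated result, per the formula above."""
    return s_x_days * (1 + q10 ** ((t_x - t_rt) / 10))

# RNA-reagent shelf lives observed at elevated temperatures (days).
accelerated = {37: 99, 56: 6, 80: 1}
for t_x, s_x in accelerated.items():
    low = q10_shelf_life(s_x, t_x, q10=1.8)   # pessimistic temperature coefficient
    high = q10_shelf_life(s_x, t_x, q10=3.0)  # optimistic temperature coefficient
    print(f"{t_x} °C: {low:.0f}-{high:.0f} days")
# Output: 37 °C: 299-469 days; 56 °C: 43-187 days; 80 °C: 26-422 days,
# matching the predicted ranges reported in the text.
```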
In addition, the predicted ranges were so wide and general that they did not provide sufficiently precise information. For example, the shelf life for DNA reagents was predicted to be in the range of 553–8860 days based on the outcome at 80 °C; however, whether any reagent would remain active after storage for 2000 days could not be determined ( D). Therefore, this method appears to have limited utility for the accurate prediction of the shelf life of freeze-dried PCR reagents. To our knowledge, this is the first study focused on verifying the feasibility of using the Q10 method to predict the shelf life of freeze-dried PCR reagents. Although this method was time-saving, its practical value was limited because the predictions (summarized in ) were wide and inaccurate, which may be due to the influence of various factors on the complex prediction process.
Factors that influence the prediction of shelf life
To explore factors that might affect the predicted shelf life of freeze-dried PCR mixes, various freeze-drying parameters were introduced, including different combinations of lyophilization additives [Formula 1 (the initial combination), Formula 2, and Formula 3 ,]; residual moisture contents of the PCR mixes [>9 %, 1–2% (the initial residual moisture content), and <1 % (these moisture contents were obtained by freeze-drying the PCR mixes at 25 °C for 0 min, 180 min and 300 min) ,]; and PCR components [Initial (the initial PCR reagents), Buffer (the buffer in the initial PCR reagents was replaced by another one, similarly hereinafter), dNTPs, Polymerase, and Reverse transcriptase ]. PCR reagents freeze-dried with these various parameters were then stored at different temperatures and assessed with middle-concentration samples at multiple time points. The corresponding results ( and ) showed that these freeze-dried reagents had inconsistent shelf lives in most cases, whether stored at RT or at higher temperatures. Overall, lyophilization additives and residual moisture contents had greater impacts on the stability of freeze-dried PCR reagents than the PCR components did. There were also differences within the same class of parameters: for example, ranked from most to least stable, the freeze-dried PCR reagents with different residual moisture contents were, in most cases, 1–2%, <1 % and >9 %. When we further attempted to relate the storage periods of these variants across temperatures, one consistent observation was that reagents that were more stable at RT generally showed longer shelf lives at higher temperatures. However, there was no obvious mathematical relationship between the elevated temperatures and the shortened storage periods. Worse, comparisons of shelf lives at different temperatures could lead to opposite conclusions: for DNA target detection, the reagent with substituted dNTPs had a longer shelf life at 56 °C than the reagents with a substituted buffer or polymerase, whereas the latter two had longer shelf lives at 80 °C. These results indicated that different freeze-drying parameters affect the shelf lives of freeze-dried reagents to varying degrees, in ways that are reagent-specific and unpredictable. In addition, because formulas such as the Q10 method cannot differentiate these influencing factors, the individual characteristics of freeze-dried PCR reagents cannot be captured by a simple mathematical calculation.
Conclusion
This study presented a transferable, easy-to-use and RT-storable PCR mix that meets the requirements of microfluidic molecular diagnosis. The manufacturing method is universal, as 19 PCR mixes for different DNA and RNA target detection could be stored at RT for 1–2 years. This is of great importance when researchers wish to freeze-dry a new PCR reagent, as the only operation required is to provide the relevant primers and probes. Accelerated stability testing at higher temperatures was also proposed to shorten the evaluation periods to less than 1 month, and we have discussed the pros and cons of the different temperatures used in accelerated stability testing for freeze-dried PCR mixes. When we further attempted to predict the shelf lives of freeze-dried PCR mixes, our findings challenged the classic view of the Q10 method as a prediction model and confirmed for the first time that such predictions are influenced to varying degrees by different factors (lyophilization additives, residual moisture contents, and PCR components). These findings are important for promoting the burgeoning field of microfluidic molecular diagnostics.
Jiasu Xu: Methodology, Formal analysis, Writing – original draft. Jin Wang: Conceptualization, Writing – original draft. Xiaosong Su: Data curation. Guofu Qiu: Resources. Qiurong Zhong: Formal analysis. Tingdong Li: Writing – review & editing. Dongxu Zhang: Writing – review & editing. Shiyin Zhang: Conceptualization, Validation, Writing – review & editing. Shuizhen He: Validation, Writing – review & editing. Shengxiang Ge: Supervision. Jun Zhang: Supervision. Ningshao Xia: Supervision.
Discrepancies between pre-specified and reported primary outcomes: A cross-sectional analysis of randomized controlled trials in gastroenterology and hepatology journals
6b7b7d54-47d1-4899-9fa1-f2b2f247ae98
11584078
Internal Medicine[mh]
Gastroenterology and hepatology are fields that often involve multifaceted treatment regimens and diverse patient populations. In 2019, digestive diseases accounted for more than one-third of prevalent disease cases, representing a significant global health care burden . In the era of evidence-based medicine, high-quality randomized controlled trials (RCTs) stand as pivotal sources of evidence in scientific research, owing to their robust study designs and significant value . These trials often serve as primary references for formulating clinical guidelines and shaping medical decision-making. However, numerous trials encounter the issue of selective and incomplete reporting of results, which distorts their evidence-based value . The Centre for Evidence-Based Medicine Outcome Monitoring Project has found that, on average, each trial in top-ranked medical journals silently adds 5.3 new outcomes . Such discrepancies in trial results can exaggerate benefits or underestimate adverse outcomes, leading to misguided clinical recommendations, wasted resources, and, in severe cases, harm to patients . To address these concerns, prospective registration of RCTs becomes imperative to ensure transparency and complete disclosure of proposed primary outcomes. Prospective research protocols and registrations are critical in curbing incomplete or selective reporting by serving as predetermined blueprints for evaluating comprehensive reports and facilitating comparisons. Recognizing this need, the International Committee of Medical Journal Editors (ICMJE) announced in 2004 that prospective registration of clinical trials in a public trial registry would be a prerequisite for publication consideration starting in 2005 . This proactive initiative aims to reinforce transparency and alleviate reporting bias by enabling comparisons between the outcomes initially planned for the trial and those ultimately reported. Despite strides toward improving timely trial registration, a significant number of published trials remain unregistered or lack prospective registration altogether . A study examining RCTs registered in any World Health Organization trial registry platform in 2018 found that only 41.7% were registered prospectively . Moreover, selective outcome reporting bias persists across biomedical disciplines, including anesthesiology, psychology, otorhinolaryngology, headache medicine, mental health and orthopaedic surgery, despite the implementation of ICMJE guidelines. Studies in these fields have reported substantial between-field differences in the rate of discrepancies between registered and published primary outcomes, ranging from 25.9% to 92% . Despite these large differences across fields, research on this important topic remains limited in gastroenterology and hepatology. A single relevant study by Li et al. revealed a discrepancy rate of 14.2% between registered and reported results in the five general and internal medicine journals and five gastroenterology and hepatology journals with the highest impact factors in the 2011 Clarivate Analytics Journal Citation Report (JCR) . However, that study did not compare discrepancies based on journal quartiles, and it is also crucial to assess whether the situation has improved over the past decade. Therefore, a comprehensive and updated evaluation of the consistency between registered and published primary outcomes is warranted within the context of gastrointestinal and hepatic journals.
Therefore, in this study, we aimed to: 1) analyze the distribution of RCT registration from 2017 to 2021 and assess variations in registration practices among gastrointestinal and hepatic journals; 2) compare the primary outcomes initially pre-specified during registration with the final outcomes reported in subsequent publications, aiming to identify the extent of reporting bias, and evaluate whether any bias tends towards reporting significant results; and 3) investigate the factors influencing the inconsistency between the registration and publication of primary outcomes.
Search strategy and eligibility criteria
Study selection and data extraction
Statistical analysis
Statistical analyses were conducted using Microsoft Excel (2022 version; Microsoft Corporation, USA) and IBM SPSS Statistics (version 22; IBM, Armonk, NY, USA). Descriptive statistics were employed to outline the basic characteristics of the included studies within the four quartiles. In addition, the proportion of trials exhibiting inconsistent primary outcome measures between registration and publication, as well as the proportion reporting favorable results, was determined and stratified by quartile. For continuous variables, the mean and standard deviation (for normally distributed data) or the median and range (for non-normally distributed data) were reported. Categorical variables were presented as frequencies and percentages. The chi-square test or Fisher's exact test was used to assess differences in each trial characteristic across domains, including registration status and registration type; these tests were also used to assess differences in primary outcome consistency and in the reporting of favorable results across journal domains. Additionally, univariate analyses were performed to determine the effect of each variable on primary outcome discrepancies. Multivariable logistic regression was planned to include all variables significant in the univariable analysis, to identify factors influencing discrepancies between registered and reported primary outcomes. A significance level of p < 0.05 (two-sided) was applied to all analyses. Cohen's kappa coefficient was used to assess inter-observer variability in judging discrepancies; it measures the agreement of evaluators, with values interpreted as follows: ≤0.2 indicating poor agreement, 0.21–0.40 fair agreement, 0.41–0.60 moderate agreement, 0.61–0.80 strong agreement, and ≥0.81 very good agreement (a minimal code sketch of these agreement and association tests is shown at the end of this passage). In this cross-sectional analysis, we retrospectively selected the top three journals from each quartile within the "Gastroenterology and Hepatology" subcategory of the 2020 JCR. Journals that did not publish original articles, as per their author instructions and our preliminary searches, were excluded. Primary reports of RCTs published in the 12 chosen gastrointestinal and hepatic journals over a 5-year period (January 1, 2017 to December 31, 2021) were identified using PubMed as of March 2022. The details of the final search are provided in . We present this article in accordance with the STROBE reporting checklist . We employed the Cochrane Handbook's definition of an RCT, which characterizes it as "A clinical trial that involves at least one test treatment and one control treatment, concurrent enrollment and follow-up of the test- and control-treated groups, and in which the treatments to be administered are selected by a random process, such as the use of a random numbers table" .
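The agreement and association statistics described in the Statistical analysis subsection above can be reproduced with standard libraries; a minimal sketch in Python (the ratings and the 2×2 table are illustrative, not study data):

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import fisher_exact

# Two raters' judgements (1 = discrepancy, 0 = no discrepancy) for ten trials.
rater_a = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
rater_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
kappa = cohen_kappa_score(rater_a, rater_b)

# Interpretation bands as defined in the text (upper bound -> label).
bands = [(0.20, "poor"), (0.40, "fair"), (0.60, "moderate"), (0.80, "strong")]
label = next((name for cut, name in bands if kappa <= cut), "very good")
print(f"kappa = {kappa:.2f} ({label} agreement)")  # kappa = 0.78 (strong agreement)

# 2x2 table: registered vs unregistered trials by a hypothetical funding split.
table = [[40, 10],   # nonprofit: registered, unregistered
         [25, 15]]   # industry:  registered, unregistered
odds_ratio, p = fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p:.3f}")
```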
The inclusion criteria were RCTs published in English in the 12 eligible journals that assessed an intervention in human subjects. We excluded reviews, observational studies, systematic reviews or meta-analyses, cost-effectiveness analyses, animal or in vitro studies, case reports, cross-over studies, and ancillary studies (e.g., protocol studies, secondary analyses, follow-up studies, subgroup analyses, and post hoc analyses). We accessed freely available articles directly through open-access journals. For articles that were not freely available, we searched for the full text using multiple sources, including Web of Science, Embase, and Scopus. Articles for which the full text was unattainable through these means were excluded from our analysis. The reports obtained from the search were imported into Endnote (version 20; Clarivate Analytics, USA). Three investigators (BHS, FHY, and YL) formed three cross-paired groups. Two investigators from each group independently screened the articles for eligibility, initially by title and abstract, followed by a full-text assessment. Any disagreements were resolved through discussion until a consensus was reached. We developed a standardized extraction form using Microsoft Excel (2022 version; Microsoft Corporation, USA) to mitigate potential bias, encompassing journal information, article characteristics, registry details, and primary outcome discrepancies . All investigators underwent training to ensure consistency and minimize discrepancies during data extraction. For article characteristics, one investigator (YL) reviewed the full text of each study and extracted the data, which were then cross-validated by another investigator (BHS). Any conflicts were resolved through discussion with a third reviewer (FHY) until a consensus was reached. For registry information, we utilized the World Health Organization's International Clinical Trials Registry Platform (ICTRP), which comprises 20 main trial registries globally, to identify and download registration records and to retrieve the pre-specified primary outcomes of published trials. This approach ensured uniformity in search mechanisms. If no registration number was provided in the publication, we manually searched the ICTRP using the publication title, author names, trial participants, and primary sponsors to identify any possible registration numbers. If no registry number was uncovered in this way, the trial was deemed unregistered. If a registration number was provided in the publication, we entered it into the ICTRP to retrieve the relevant registration information. If the authors provided a registration number but we found no corresponding registration record, we treated these trials in the same manner as studies that did not report a registration number at all. Primary outcomes were defined as those explicitly reported in the study. In cases where no outcome measure was explicitly named as primary, we recorded the outcome stated in the sample size calculation of the study. If neither was available, we adopted a conservative approach, categorizing the article as having no reported primary outcome and excluding it from the analysis of outcome discrepancies. Two investigators (BHS, FHY) compared the primary outcomes reported in the articles with those initially registered to determine consistency.
These outcomes reflect those specified at the time of the initial registration and do not include any outcomes added or modified in subsequent updates. We chose these primary outcomes because, while the ICTRP provides a comprehensive registry, it does not always allow for a clear assessment of historical versions or of the trial stage at which changes were made. For instance, prospective registrations may carry retrospective changes if authors modify primary outcomes after concluding the trial. We extracted data from a random sample of 10 RCTs to ensure a consistent understanding of discrepancies. Each discrepancy was independently categorized into five types, based on criteria initially proposed by Chan et al. and refined by Mathieu et al. :
1. The registered primary outcome was reported as a secondary outcome in the published article.
2. The registered primary outcome was omitted in the published report.
3. A new primary outcome was introduced in the published article (i.e., an outcome that does not appear at all in the registry but is introduced as primary in the article).
4. The published primary outcome was described as a secondary outcome in the registry.
5. The timing of the assessment of primary outcomes differed between the registered and published data.
Similar to Chan et al.'s study , a discrepancy was deemed to favor statistically significant outcomes "if a new statistically significant primary outcome was introduced in the published articles, or if a nonsignificant primary outcome was omitted (e.g., the omitted outcome might not have achieved statistical significance, leading to its exclusion from the published results; this assumption is based on the notion that statistically significant results are more likely to be reported and published due to the well-known publication bias) or defined as nonprimary in the published articles, or if registered statistically significant secondary outcomes became published primary outcomes" (this categorization logic is sketched in code at the end of this subsection). Articles retrospectively registered or lacking explicit primary outcomes were excluded from this analysis, as it was not feasible to ascertain whether any modifications to the primary outcome had occurred. Twelve journals within the gastrointestinal and hepatic domain were identified from the 2020 JCR report, distributed across four quartiles. However, Nature Reviews Gastroenterology & Hepatology in Quartile 1 and Gastroenterology Clinics of North America in Quartile 3 did not publish original articles. We therefore excluded these two journals and sequentially included the next two ( Gastroenterology and Techniques in Coloproctology ). The eligible journals in the specialty of "Gastroenterology and Hepatology" are as follows: Quartile 1 ( Journal of Hepatology , GUT , and Gastroenterology ), Quartile 2 ( Hepatology International , Liver International , and Liver Transplantation ), Quartile 3 ( Expert Review of Gastroenterology & Hepatology , Colorectal Disease , and Techniques in Coloproctology ), Quartile 4 ( Journal of Gastrointestinal Oncology , Journal of Pediatric Gastroenterology and Nutrition , and Digestive Surgery ) . Mandatory registration of RCTs was not required by GUT (Quartile 1) or Hepatology International (Quartile 2). Among the trials published in journals requiring mandatory registration, 14.5% (41/282) were not registered .
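A minimal sketch in Python of the comparison logic behind the five discrepancy categories above (the data structures and field names are illustrative; in the study, the judgements were made manually by the reviewers):

```python
def categorize(registered: dict, published: dict) -> list:
    """Return the discrepancy types for one trial.

    `registered` and `published` each map an outcome name to
    {"role": "primary" | "secondary", "timing": str}.
    Only primary-outcome discrepancies are flagged, mirroring the five
    categories described in the text.
    """
    flags = []
    reg_primary = {k: v for k, v in registered.items() if v["role"] == "primary"}
    pub_primary = {k: v for k, v in published.items() if v["role"] == "primary"}
    for name, spec in reg_primary.items():
        if name not in published:
            flags.append("registered primary omitted")
        elif published[name]["role"] == "secondary":
            flags.append("registered primary reported as secondary")
        elif published[name]["timing"] != spec["timing"]:
            flags.append("different assessment timing")
    for name in pub_primary:
        if name not in registered:
            flags.append("new primary introduced")
        elif registered[name]["role"] == "secondary":
            flags.append("registered secondary promoted to primary")
    return flags

# Example: timing of the registered primary outcome changed before publication.
print(categorize(
    registered={"remission": {"role": "primary", "timing": "12 weeks"}},
    published={"remission": {"role": "primary", "timing": "24 weeks"}},
))  # -> ['different assessment timing']
```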
Search results and trial characteristics
Registration
Discrepancy of registered and published primary outcomes
Discrepancy of primary outcomes favoring significant results
In the analysis of outcome reporting bias, the 20 retrospectively registered studies among the 76 papers with a discrepancy between the registry and the publication were excluded. Among the remaining 56 studies, 57.1% (32/56) exhibited a discrepancy favoring a statistically significant primary outcome . Specifically, 46.9% (15/32) of these discrepancies involved omitting non-significant registered primary outcomes, and 40.6% (13/32) involved the introduction of significant new primary outcomes in the publications. No significant differences were observed in potential outcome reporting bias across journal quartiles (p = 0.28). A total of 3,384 records were screened; shows the publication selection process. Exclusion reasons included review type (n = 605), systematic reviews or meta-analyses (n = 230), observational studies (n = 1,452), animal or in vitro studies (n = 543), case reports (n = 14), cost-effectiveness analyses (n = 8), and other studies (n = 127). Subsequently, 405 articles underwent full-text screening for eligibility; of these, 42 cross-over and ancillary studies were further excluded, and the full text of one trial could not be obtained. Finally, 362 trials were eligible for inclusion. presents the characteristics of the 362 eligible trials. Of note, the highest number of RCTs was published in Quartile 1, accounting for 54.4% (197/362). Sample sizes ranged widely, with a median of 115 and a range spanning from 12 to 32,063. Overall, larger sample sizes were observed in Quartile 1 journals compared with Quartiles 2 to 4. Trials with an unspecified clinical phase (50.0%), multicenter setting (57.2%), nonprofit funding (43.1%), superiority design (93.4%), efficacy/tolerability/safety outcomes (98.9%), and European study sites (40.9%) predominated among the publications. Only 61 (16.9%) studies declared adherence to the Consolidated Standards of Reporting Trials (CONSORT) reporting checklist in the main text. Tables show registration and prospective registration by year of publication, study location, quartile, funding, and trial design. Among the included RCTs, 86.2% (312/362) were registered , with 79.8% (249/312) registered prospectively and 20.2% (63/312) registered retrospectively . The registration rate of RCTs did not differ significantly over the five-year period (p = 0.83) . However, there were significant differences in study location, quartile, and funding between registered and unregistered trials (p = 0.009, p < 0.001, and p = 0.01, respectively, ). Furthermore, there were significant differences between prospectively and retrospectively registered trials in quartile, funding, and trial design (p < 0.001, p = 0.001, and p = 0.006, respectively) . In analyzing discrepancies between registered and published primary outcomes, Cohen's kappa coefficient among the three reviewers for the extraction of differences was 0.872, indicating very good agreement. Only studies reporting primary outcomes were included in further analyses. Among the 312 registered trials, 285 were eligible for the primary outcome discrepancy analysis, as 27 (8.7%) RCTs were excluded due to imprecise primary outcomes. A total of 26.7% (76/285) of the trials exhibited at least one major discrepancy between the registry and the publication of the primary outcomes .
As detailed in , the most common discrepancy was a difference in assessment times between the registered and published primary outcomes (n = 32, 42.1%). Other notable discrepancies included registered primary outcomes omitted in publications (n = 21, 27.6%), registered secondary outcomes promoted to primary outcomes (n = 13, 17.1%), registered primary outcomes demoted to secondary outcomes (n = 6, 7.9%), and new primary outcomes introduced (n = 4, 5.3%). No significant differences were found in primary outcome inconsistencies across journal quartiles (p = 0.14). Univariate analyses revealed that only the publication year (2020) was associated with the discrepancy between registered and published primary outcomes (OR = 0.267, 95% CI: 0.101–0.706, p = 0.008) ; therefore, multivariable logistic regression analysis was not conducted. Specifically, this result indicated that the odds of a primary outcome discrepancy in 2020 were 0.267 times those in 2021, a statistically significant difference. The proportion of studies with primary outcome discrepancies was lower in 2020 than in 2021 (12.1% vs 34.0%). Journal quartile, study location, study center, phase of the study, funding, trial design, registration status, CONSORT endorsement, and whether RCT registration was mandatory were not associated with the discrepancy between registered and published primary outcomes. Our study reveals several key findings: 1) Not all 12 enrolled journals mandated trial registration, and over 10% of the trials published in journals requiring registration were nonetheless unregistered. The registration rate of the 362 eligible RCTs did not differ significantly over time from 2017 to 2021, with 86.2% (312 out of 362) registered, predominantly prospectively (79.8%; 249 out of 312). Both registration and prospective registration differed significantly by journal quartile and funding. 2) Over a quarter of the trials exhibited discrepancies in primary outcomes, the top three being differences in assessment times (42.1%), omission of primary outcomes (27.6%), and reporting of registered secondary outcomes as primary outcomes (17.1%). In addition, over half of the discrepancies favored a statistically significant primary outcome, mainly attributable to the omission of non-significant registered primary outcomes and the introduction of significant new primary outcomes. 3) Primary outcome discrepancies were less frequent in publication year 2020 than in 2021 (OR = 0.267, 95% CI: 0.101–0.706, p = 0.008). However, no such associations were found for journal quartile, study location or center, funding, trial design, registration status, study phase, adherence to CONSORT, or mandatory registration. Furthermore, no significant differences were observed concerning potential reporting bias across journal quartiles.
Comparison with similar research and explanations of findings
Strengths and limitations
Implications and actions needed
Conclusions
Trial registration
Discrepancy in primary outcomes
Primary outcomes play a crucial role in assessing the effectiveness of interventions for specific symptoms or the disease of interest, providing the strongest evidence for their maximum effects . The pre-specification of primary outcomes is essential, as it prevents deviations from study protocols and selective reporting. When pre-defined outcomes are altered or omitted, this protective mechanism is compromised.
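The odds ratio reported earlier for publication year can be recovered directly from the two yearly discrepancy proportions (12.1% in 2020 vs 34.0% in 2021); a quick check in Python:

```python
# Recover the reported OR = 0.267 from the two yearly discrepancy proportions.
p_2020, p_2021 = 0.121, 0.340
odds_2020 = p_2020 / (1 - p_2020)   # ~0.138 (odds of a discrepancy in 2020)
odds_2021 = p_2021 / (1 - p_2021)   # ~0.515 (odds of a discrepancy in 2021)
print(round(odds_2020 / odds_2021, 3))  # 0.267, matching the reported OR
```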
Our results align with previous research that found differences between registry entries and publications . We observed a higher level of outcome reporting bias than an earlier gastroenterology study (26.7% vs 14.2%) , possibly because the previous analysis focused on the top five impact-factor journals, while our study assessed research across all impact-factor quartiles. This highlights that, despite the increased registration rate of RCTs, the divergence in reporting main results has concurrently risen. Our study underscores the persistent challenge of outcome reporting bias in gastrointestinal research, emphasizing that registration alone does not eliminate the risk of selective outcome reporting. Notably, our investigation into the varied nature of discrepancies revealed that differences in the timing of primary outcome assessments accounted for the highest proportion of discrepancies in our study, a type of discrepancy that may often be overlooked or underappreciated. Different assessment times can affect the demonstration of efficacy or harm, leading to biased findings if earlier non-significant timepoints were registered while optimal timepoints were chosen to report significant effects. Additionally, omitting a primary outcome and reporting registered secondary outcomes as primary outcomes were the second and third most common discrepancies. One view holds that such discrepancies arise because researchers discover unintended effects or harms caused by the intervention, leading to the selective omission of pre-specified primary outcomes . However, it is important to note that changes to primary endpoints do not inherently indicate poor practice. There can be plausible reasons for such changes, which should be transparently discussed in the study publication. Our review of the 76 papers with primary endpoint changes revealed that none provided explanations for the changes to the primary outcomes. The lack of explanations does not necessarily imply issues with the studies themselves but highlights a gap in procedural transparency. Without these explanations, it is challenging to assess the appropriateness of and rationale behind the changes. Therefore, while our findings suggest a higher rate of discrepancies in primary outcome reporting, it is crucial to approach these results with caution. In addition, whether certain endpoints are more prone to discrepancies was not the focus of this study and warrants further investigation in future research. Moreover, most baseline characteristics showed no significant influence on changes in primary outcomes, except for the year of publication (2020 compared with 2021). The risk of discrepancy in primary outcomes remained consistent among articles, regardless of whether they were published in high-quartile journals or journals requiring compulsory registration, conducted in developed regions or at multiple centers, funded by industry, prospectively registered, CONSORT endorsed, and so on. This discovery aligns with a study by Damen et al. , which scrutinized 163,129 RCTs and identified a modification rate of 22.1% in primary outcomes; in that study, specific attributes of the author team were the only factors associated with a reduced risk of modifications in primary outcomes. Our study diverges from prior research by focusing on top journals within different quartiles. Our initial hypothesis was that top journals in the higher-ranked quartiles would exhibit fewer discrepancies in trial reporting.
Contrary to expectations, we found no significant differences in the risk of outcome reporting bias based on the journal's impact-factor quartile, suggesting that quartile may not reliably indicate the reporting consistency of primary outcomes. Interestingly, we found no correlation between stated adherence to CONSORT guidelines or the mandatory nature of registration and discrepancies in primary outcomes. Simply stating compliance with CONSORT in publications does not guarantee strict adherence to all recommendations . Researchers may tend to report positive results while neglecting non-significant ones, as evidenced by our finding that 57.1% of prospectively registered studies with primary outcome discrepancies favored statistically significant results, and by another article indicating spin in the primary results of 66.6% of RCTs of endometriosis pain . This bias could be attributed to the traditional preference for publishing studies with positive outcomes over those with negative ones . Despite some journals requiring a complete CONSORT checklist during manuscript submission or recommending reporting guidelines, the extent to which journals or peer reviewers verify authors' compliance with all checklist items remains unclear. Potential factors may include insufficient training resources for peer reviewers, with only a 15% training rate for reviewing clinical trials . Concerning mandatory registration requirements, certain journals may announce compulsory registration of clinical trials, but editors may sometimes overlook registration requirements and exhibit a lack of scrutiny in enforcing trial registration. Adherence to prospective trial registration varies across medical fields, with studies in psychiatry, pediatric surgery, and anaesthesia reporting rates ranging from 33.1% to 71.1% . Compared with a previous investigation of reporting bias in gastroenterology , our research reveals a substantially higher proportion of prospectively registered studies (79.8% vs. 37.2%), indicating a remarkable improvement in the registration of RCTs within this field. Nevertheless, 14.5% of studies were still published without registration in journals requiring mandatory registration. Our study identified variations in how journals, despite claiming adherence to ICMJE guidelines or being ICMJE members, articulate registration requirements in their instructions for authors. Vague or ambiguous statements may contribute to a lack of rigorous enforcement by authors or journal editors. As reflected in our findings, mandatory registration requirements appear to be a significant motivator for registration. When extracting data, we observed invalid trial registration numbers, indicating potential lapses in the verification of registration information by journal editors or reviewers. Factors associated with prospective registration, such as publication in high-impact journals, non-profit funding sources, and trial design, present opportunities to promote early registration at different stages of the research process. A study in surgical journals reported a notable association between trial registration and higher journal impact factors, aligning with our findings. These insights underscore the need to continue enhancing and standardizing trial registration practices in this field.
Strengths
Our study included journals across impact-factor quartiles rather than being limited to high impact-factor journals, thus enhancing the generalizability of the findings. The study's contemporary 5-year timespan increases the sample's generalizability and relevance to current research practices. In addition, we included as many basic trial characteristics as possible to ensure a thorough examination of the factors influencing discrepancies.
Limitations
Firstly, our research only evaluated modifications to primary outcomes, as these are crucial for addressing the primary research question. Selective reporting bias could also be reflected in modifications to other trial design elements after registration, e.g., secondary outcomes and sample size, which we did not inspect. Secondly, the lack of blinding of researchers to registration status may introduce bias in data extraction and analysis. Thirdly, this study is restricted to a specific field and covers a relatively short period of time, specifically five years. Consequently, the findings may not be generalizable to other fields or longer timeframes, and further research is needed to explore these aspects in different contexts and over extended periods. Finally, although we conducted a thorough search to determine whether a study was registered, we did not assess whether study protocols were published prior to the registration of the trials; it is possible that some trials were not registered because of a previously published protocol. Future research could benefit from investigating this aspect.
Firstly, the prevalence of discrepancies within gastroenterology and hepatology journals, particularly those favoring statistically significant primary outcomes, underscores a broader issue within the scientific community: a prevailing bias against negative or non-significant results. Therefore, there is a pressing need for a cultural shift that embraces and values negative findings. Sterling highlighted in 1959 that studies with significant findings were more likely to be published than those with non-significant results . Meanwhile, evidence also shows a gradual shift: a 2017 study covering publication trends from 1985 to 2013 found that while significant results still dominate, there is a notable increase in the reporting of non-significant results across various journals, suggesting a slow but positive change in attitudes towards these findings . This proactive approach would likely contribute to mitigating reporting bias. Secondly, it is imperative to provide comprehensive training for authors, for instance by emphasizing the importance of trial registration, updating registries when primary outcomes are modified, and transparently reporting both positive and negative results. Moreover, training should extend beyond authors to include biostatisticians, clinical research coordinators, and staff at clinical study centers. This comprehensive training approach ensures that all parties involved in clinical trials are aware of the standards necessary for transparent reporting. Thirdly, the observed shortcomings in the quality of RCT reviews suggest that raising the awareness of editors and reviewers involved in assessing RCTs to check registration information is crucial. Editors could receive training to strengthen their ability to rigorously evaluate trial protocols, registration details, and outcome reporting.
One underlying issue is inadequate training resources and content, a key problem that requires attention . Reviewers should work closely with editors to ensure consistency between registration and reporting in manuscripts, and make more specialized comments when needed. Finally, more journals should explicitly outline registration requirements in their instructions for authors, avoiding vague statements such as "encouragement of trial registration." Furthermore, while some journals require mandatory registration in their instructions for authors, they still publish studies without registration, indicating oversight in the review process. Implementing a more robust checking mechanism in the editorial workflow, such as demanding registration identifiers before peer review, reviewing revisions by comparing the manuscript with versions of the registered protocol, and giving special attention to the timing of primary outcomes as well as any omissions or introductions of primary outcomes, could significantly improve authors' motivation to register trials, update registrations, and report any changes. Additionally, implementing manual or artificial intelligence-assisted review of the transparency and completeness of clinical trial reporting against the CONSORT checklist could further promote transparency and reliability in clinical trial reporting. Our findings reveal that, despite increasing registration rates, inconsistency between pre-registered primary outcomes and those reported in publications persists. Notably, factors such as high journal quartile, developed regions, industry funding, multiple centers, prospective registration, adherence to CONSORT guidelines, and mandatory registration did not show a significant association with discrepancies in primary outcome reporting. This lack of association across various trial characteristics suggests that discrepancies in primary outcome reporting are not confined to specific types of trials or contexts but are a prevalent issue within the gastroenterology and hepatology research community. Therefore, detailed registration and updating of primary and secondary outcomes in trial registries are strongly warranted. Efforts to improve reporting practices should be driven by journal-level policies and workflows, including measures such as withholding support for clinical trials that do not disclose registry information or do not report discrepancies between pre-specified and reported outcomes. Editorial checking mechanisms should also be put in place to ensure the transparency and reliability of reported trial outcomes.
S1 File Search strategy. (DOCX)
S2 File STROBE checklist. (DOCX)
S3 File Data extraction form. (DOCX)
S1 Appendix Dataset. (XLSX)
Encountering Pharmacogenetic Test Results in the Psychiatric Clinic
8c4e10ec-e548-4b98-9007-fd129bc941a5
8892046
Pharmacology[mh]
Pharmacogenetic (PGx) testing is a personalized prescribing approach that utilizes a person's genetic information to inform medication selection and dosing decisions. This approach has been successfully implemented by numerous medical centers/health systems across North America, Europe and Asia, is supported by expert review and consensus, and is endorsed by professional pharmacy and pharmacology organizations. In addition, a recent meta-analysis of five randomized controlled trials showed that patients with major depressive disorder who received pharmacogenetic-guided prescribing were 71% more likely to achieve symptom remission relative to those who received treatment as usual. However, PGx testing remains controversial in psychiatry and consensus on its utility in the day-to-day management of patients has not been reached. In Canada, PGx testing is primarily performed by commercial laboratories, some offering testing directly to patients. Therefore, psychiatrists should expect to be presented with and asked to use PGx test results by their patients. In addition, psychiatrists are likely to be asked by other psychiatry or family practice colleagues to provide advice on how to interpret the results of PGx testing. While some psychiatrists will feel comfortable providing consults, managing and perhaps integrating these test results into their practice, many will feel unsure about the merits of PGx testing or how to interpret and act on the results. This article addresses these common concerns and offers strategies and resources to prepare psychiatrists for patient care situations where PGx test results are encountered. It first considers how to assess the validity of a pharmacogenetic test and then turns to interpreting and implementing PGx test results, addressing a few of the most frequently asked questions: What does the "*" mean? Are prescribing recommendations medication specific? What other factors can impact the interpretation of PGx results? Why are medications I prescribe not on the PGx report? Am I at risk for litigation if I don't act on PGx test results? How long are PGx test results valid?

How long are PGx test results valid? As prescribing recommendations are based on an individual's genetic information and this information does not change, the PGx test results remain valid over a person's lifetime. However, as the evidence evolves, the number of medications with PGx-based recommendations will increase and some recommendations will be refined. Therefore, even though a person's genes do not change, the recommendations associated with them might. Regular updates will ensure PGx recommendations remain valid over a patient's lifetime. However, unlike many clinical workflows that are integrated within the electronic health record (EHR), most PGx results encountered in psychiatry are in formats that do not facilitate direct integration into the EHR. In most settings, PGx test results are scanned into the EHR. Fortunately, new EHR systems have been designed to receive and store PGx data in a manner that allows for easy updates and enables automated prescribing alerts.

Assessing the validity of the pharmacogenetic test. PGx tests offered by laboratories in Canada and abroad are largely unregulated, unstandardized, and not equivalent.
In fact, in 2018 the US Food and Drug Administration raised concerns about unapproved claims about the ability to predict response to specific medications using genetic testing and later, in 2019, issued a warning letter to Inova Genomics Laboratory for deceptive marketing practices and questionable clinical validity. As such, each PGx test's analytical validity (i.e., the test's ability to detect genetic variants of interest) and clinical validity (i.e., how well the test results correlate with medication efficacy or tolerability) should be checked prior to interpreting and implementing the results. We recognize that formal evaluation of every test encountered in practice is not feasible, and as such we recommend looking for three test characteristics that, when present, can boost confidence in the results provided.

Characteristic 1: the laboratory that performed the test should be accredited. The Standards Council of Canada offers laboratory accreditation aligned with standards developed by the International Organization for Standardization, referred to as ISO 15189. However, the Canadian Association for Laboratory Accreditation (CALA), the College of American Pathologists (CAP), and the Clinical Laboratory Improvement Amendments (CLIA) also offer accreditation to laboratories providing PGx testing in Canada. Of note, several provinces (i.e., Alberta, British Columbia, Manitoba, Ontario, and Saskatchewan) have their own accreditation bodies, but all provinces follow one or more of the standards mentioned above for accrediting laboratories. If one of these or an equivalent accreditation is not evident, the analytical validity of the test could be questionable and caution in using the test results is advised.

Characteristic 2: the genes and alleles tested should be associated with medication pharmacokinetics, efficacy or tolerability. There are over 25 genes with evidence-based guidelines developed by expert groups such as the Clinical Pharmacogenetics Implementation Consortium (CPIC), the Dutch Pharmacogenetics Working Group (DPWG), and the Canadian Pharmacogenomics Network for Drug Safety (CPNDS), as well as regulatory bodies such as Health Canada and the FDA. However, only five of these genes ( CYP2D6, CYP2C19, CYP2C9, HLA-A, HLA-B ) are associated with the pharmacokinetics, efficacy or tolerability of one or more psychiatric medications.
Supplementary Table 1 shows the five genes and 33 associated medications with evidence-based guidelines. To our knowledge, all PGx test panels available in Canada include CYP2D6 , CYP2C19 and CYP2C9 , while less than half include HLA-A or HLA-B . The absence of the HLA genes should only be a concern if the treatment plan involves the use of carbamazepine, oxcarbazepine, phenytoin, or fosphenytoin. Of greater concern is that laboratories often test and report results for genes with limited or no association to medication pharmacokinetics, efficacy or tolerability. Examples of genes commonly appearing on psychiatry test panels that lack sufficient evidence or expert-developed guidelines include ABCB1, ADRA2A, ANK3, ANKK1, BDNF, CACNA1C, COMT, CYP1A2, CYP3A4, CYP2C8, DRD2, FKBP5, GNB3, GRIK1, HTR2A, HTR2C, MC4R, MTHFR , and SLC6A4 . Although many of these genes have excellent face validity and biological plausibility, there are limited or no data on clinical validity for these genes, and they are not currently mentioned in a prescribing guideline. Test panels including these genes should be viewed critically and recommendations linked to these genes used with caution.

In addition to the genes, it is also worth examining the list of alleles (e.g., single nucleotide variants, copy number variants) that are tested for each gene, particularly for the five genes relevant to psychiatry. The optimum number of tested alleles varies by gene, and for most genes a consensus set of alleles has not been established; but in general, as the number of alleles tested increases, so do the sensitivity and specificity of the PGx test, and the more applicable it will likely be across different populations (e.g., European, Asian, African, Indigenous). This latter point is particularly important when interpreting PGx test results of a patient of non-European ancestry, because most panels are biased toward alleles observed in individuals of European ancestry. As a result, some panels are more likely to omit gene variants that are rarely observed in individuals of European ancestry but are more common in those of non-European ancestry. Of note, minimum allele sets that take into account ancestry have been proposed for CYP2C9 (*2, *3, *5, *6, *8, *11), CYP2C19 (*2, *3, *17), CYP2D6 (*3, *4, *5, *6, *10, *17, *41, *1xN, *2xN), HLA-A (*31:01), and HLA-B (*15:02).

Characteristic 3: prescribing recommendations should be supported by an evidence-based guideline. Prescribing recommendations without reference to an evidence-based guideline (e.g., CPIC, DPWG, FDA, Health Canada) should be viewed skeptically, as the source of the recommendation may lack sufficient clinical validity for use in practice. For example, some laboratories offer prescribing recommendations based on an association reported in a single clinical study, or selectively cite studies that support the recommendations without disclosing the number of studies that failed to find the association. Acting on recommendations that have not undergone a rigorous review process, such as that used by guideline developers, increases the risk of unexpected and potentially harmful outcomes.

Assuming these three test characteristics are met, it is reasonable to consider the PGx test results as part of the overall medication selection and dosing decision-making process. However, the presence of these characteristics does not ensure the results will be clinically useful. To maximize the usefulness of PGx testing, thoughtful interpretation and implementation of the results are required.

What does the "*" mean? Testing laboratories typically report PGx results as a genotype using the star nomenclature, for example, CYP2D6 *4/*5. The *4 and *5 are star alleles, each representing a unique group of genetic variants that are inherited together (i.e., haplotypes), one from each parent. Each star allele is assigned a function (i.e., no, decreased, normal, increased, unknown, or uncertain) based on the current evidence, such as that curated by the Pharmacogene Variation Consortium (PharmVar). By combining the function of the two star alleles, laboratories can infer a person's phenotype, such as medication metabolizer status (i.e., poor, intermediate, normal, rapid, ultrarapid), medication transporter function (i.e., increased, normal, decreased, poor), or high-risk medication sensitivity status (i.e., positive, negative). In our example, the CYP2D6 *4 and *5 alleles have no function, and the combination of these two alleles results in a poor metabolizer phenotype.
In contrast, if the CYP2D6 genotype was *1/*1, this would translate to a normal metabolizer, because the *1 allele is a normal function allele and the combination of two normal function alleles results in a normal metabolizer phenotype. These genotype-inferred phenotypes are the basis from which medication selection and dosing recommendations are made.

Are prescribing recommendations medication specific? Recommendations are made at the level of each gene-medication pair. Two medications in the same class will not necessarily have equivalent recommendations. The reason is typically differences in how the medications are metabolized. For example, escitalopram and paroxetine are both selective serotonin reuptake inhibitors, but escitalopram is primarily metabolized by CYP2C19 , whereas paroxetine is primarily metabolized by CYP2D6 . Thus, an individual who is a CYP2C19 normal metabolizer and a CYP2D6 poor metabolizer would receive a recommendation for escitalopram that suggests initiating therapy at the standard starting dose, whereas the recommendation for paroxetine would suggest selecting an alternative medication or reducing the starting dose due to their CYP2D6 poor metabolizer status.

What other factors can impact the interpretation of PGx results? Several demographic (e.g., age, sex) and clinical (e.g., concomitant medications, pregnancy, inflammation) factors can influence the interpretation of PGx-based recommendations. For example, a genotype-inferred CYP2C19 normal metabolizer who commences use of esomeprazole to treat gastroesophageal reflux would likely be converted to an intermediate or poor metabolizer, depending on the esomeprazole dose taken. This conversion is a result of esomeprazole being an inhibitor of CYP2C19 enzyme activity. Conversely, if the same individual instead began taking St John's wort (a CYP2C19 inducer), they would likely be converted to a rapid or ultrarapid metabolizer. In both of these scenarios, the medication recommendations associated with their CYP2C19 normal metabolizer phenotype could be inappropriate and may require adjustment before clinical implementation. Adjusting recommendations provided by PGx testing laboratories can be a challenge without access to proper resources. To assist physicians, some testing laboratories offer web-based tools or consultations with one of their pharmacists or physicians. If these tools or consultations are not available, some health authorities have centralized consult services. For example, Alberta Health Services supports a Clinical Pharmacology Physician Consultation Service capable of assisting with PGx-related inquiries. There are also free web-based tools, such as Sequence2Script (sequence2script.com), that enable physicians to (re)generate evidence-based prescribing recommendations for their patients while accounting for concomitant medications. Interpretation of results may also be impacted by the strategy employed by PGx testing laboratories to translate genotypes into recommendations. Some commercial testing laboratories deliberately conceal, for proprietary reasons, the process by which pharmacogenetic testing results are translated into clinical recommendations. This so-called "black box" strategy is in conflict with the open and peer-reviewed approaches adopted by CPIC and other clinical guideline development groups and significantly impairs critical appraisal of results produced using this strategy.
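To illustrate how a transparent genotype-to-phenotype translation can look, in contrast to the black-box strategy just described, here is a minimal sketch following the CPIC-style CYP2D6 activity-score convention. The activity values and cutoffs are illustrative simplifications of the published framework, and a real implementation would also need to handle copy-number variants and unknown-function alleles.

# Simplified CYP2D6 genotype-to-phenotype translation following the
# CPIC activity-score convention (values shown are illustrative).
ALLELE_ACTIVITY = {
    "*1": 1.0,    # normal function
    "*2": 1.0,    # normal function
    "*10": 0.25,  # decreased function
    "*41": 0.5,   # decreased function
    "*3": 0.0,    # no function
    "*4": 0.0,    # no function
    "*5": 0.0,    # no function (gene deletion)
}

def cyp2d6_phenotype(allele_1: str, allele_2: str) -> str:
    """Infer metabolizer status from the combined activity of two star alleles."""
    score = ALLELE_ACTIVITY[allele_1] + ALLELE_ACTIVITY[allele_2]
    if score == 0:
        return "poor metabolizer"
    if score <= 1.0:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal metabolizer"
    return "ultrarapid metabolizer"

print(cyp2d6_phenotype("*4", "*5"))  # -> poor metabolizer
print(cyp2d6_phenotype("*1", "*1"))  # -> normal metabolizer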
Fortunately, there are tools and resources, such as the Pharmacogenomics Knowledgebase (PharmGKB) and Sequence2Script, that can help "by-pass" the black box via direct interpretation of the raw genotype (e.g., CYP2D6 *1/*4) or phenotype (e.g., CYP2D6 intermediate metabolizer) results provided by these testing companies. Notably, this by-pass procedure can be time consuming, and no evidence exists on whether this approach produces recommendations that are superior to those provided by black box approaches. It does, however, offer full transparency.

Why are medications I prescribe not on the PGx report? PGx testing laboratories differ on the medications they support. Some laboratories focus on medications relevant to specific practice settings (e.g., psychiatry, cardiology). What is important is that the medications that do appear in the report are supported by the current scientific evidence. For example, certain benzodiazepines (e.g., alprazolam), ACE inhibitors (e.g., enalapril), antipsychotics (e.g., quetiapine, olanzapine), antidepressants (e.g., bupropion, desvenlafaxine), and analgesics (e.g., aspirin) are commonly included on PGx testing panels despite the absence of PGx-based guidelines for these medications. Implementing recommendations for medications that are not supported by evidence-based guidelines could do more harm than good.

Am I at risk for litigation if I don't act on PGx test results? With an increase in the clinical use of genetic testing, there are concerns of increased liability exposure for physicians who fail to use, or who misuse, PGx information. Any patient has the right to lodge a complaint and pursue litigation if they believe that their physician's failure to act on PGx test results caused injury. The potential liability is further amplified by the availability of PGx prescribing guidelines as well as by regulatory bodies (e.g., Health Canada, FDA) increasingly requiring drug manufacturers to include PGx information on their product labels. To reduce litigation risk, physicians should become familiar with these guidelines and product labels, particularly those relevant to the medications they most frequently prescribe. Physicians should also consider PGx test results when they are available and use them to facilitate shared decision-making with their patients, including efforts to set realistic expectations about how the results can be reasonably used. We would also suggest documentation of this shared decision-making process in the patient's medical record and, when appropriate, encourage seeking expert consultation.

When a patient presents for an appointment with PGx test results in hand, or a colleague requests advice on how best to interpret these results, common initial reactions may include scepticism, discomfort and perplexity. Given that the results are unexpected and the source is often unfamiliar, these types of reactions are understandable and predictable. However, it is important not to allow these reactions to trigger premature dismissal of PGx information and with it the potential opportunity to improve care and engage patients in treatment decision-making. Practical strategies and accessible resources are available to assist psychiatrists in the effective consideration, interpretation and implementation of PGx test results that they encounter in their practice. In combination with existing strategies for prescribing, PGx testing can serve as an informative complement to the psychiatrist's toolbox.
Supplemental material, sj-pdf-1-cpa-10.1177_07067437211058847 for Encountering Pharmacogenetic Test Results in the Psychiatric Clinic by Chad A Bousman, Gouri Mukerjee, Xiaoyu Men, Ruslan Dorfman, Daniel J Müller and Roger E. Thomas in The Canadian Journal of Psychiatry
Modulation of blood-tumor barrier transcriptional programs improves intratumoral drug delivery and potentiates chemotherapy in GBM
0c9e6104-8815-4267-b180-fe4404cee623
11864199
Cardiovascular System[mh]
The effective treatment of brain malignancies such as glioblastoma (GBM) remains a critical challenge in the neuro-oncology field. GBM is the most common malignant primary brain tumor, representing ~15% of all central nervous system (CNS) neoplasms ( ). Median survival is 15 to 18 months, and less than 10% of patients survive beyond 5 years after diagnosis ( ). The standard of care involves maximal safe surgical resection, followed by radiotherapy and alkylating chemotherapy with temozolomide (TMZ). This inevitably leads to the development of untreatable recurrent disease. The major challenges in GBM therapy are (i) its invasiveness, which prevents complete surgical resection, (ii) high levels of intratumoral molecular and cellular heterogeneity, (iii) a cancer-promoting tumor microenvironment, and (iv) the presence of the blood-brain barrier (BBB) and blood-brain tumor barrier (BTB), which limit drug entry.

The BBB maintains homeostasis of the CNS for its proper functioning ( ). In an oncogenic context, the BBB responds to cues from the cancer cells, which promote the formation of new blood vessels and the BTB. The BTB is a distinct and heterogeneous biological entity, resulting from cellular interactions between brain tumor cells, newly formed blood vessels, and the preexisting BBB ( ). Molecular characteristics that define the impermeability of the BBB, such as tight junction and adherens junction formation, high efflux pump expression, and a nonfenestrated endothelium, are compromised in brain tumors mainly due to hypoxic/angiogenic conditions, which also promote tumor growth, migration, and invasion ( ). Regardless of the disruption of these brain-protecting BBB properties, non-BBB-penetrant drugs still do not penetrate GBM tissue efficiently. This is supported by studies suggesting that the BTB is highly heterogeneous ( ), with some regions maintaining "healthy" BBB features that protect GBM cells from antineoplastic agent accumulation. The BTB remains poorly understood, especially its molecular and cellular composition and the identification of target molecular pathways that could render it permissive to chemotherapy uptake.

Strategies to improve drug delivery to GBM include focused ultrasound ( , ), convection-enhanced delivery ( ), optogenetics ( ), systemic administration of drug-loaded nanoparticles ( ), and drug-conjugated cell-penetrating peptides ( ), with most of these showing promising preclinical results. These strategies rely on physically overcoming the BBB/BTB and have the advantages of controlled release, preservation of drug stability, and drug delivery at selected anatomical sites. In addition to these approaches, the identification of compounds that target molecular elements selectively regulating BTB permeability, but not the healthy BBB, would enable mechanistic control over the biological processes involving BTB/GBM tumor interactions and could be used to potentiate intratumoral drug penetration. Moreover, if these compounds could simultaneously hinder tumor development and synergize with chemotherapeutic regimens, then this would be potentially useful in the clinic.

Previously, we identified the anti-invasive and immunomodulatory properties of the indirubin derivative 6-bromoindirubin acetoxime (BIA) in GBM and showed some benefit in murine GBM models ( ). We also developed a BIA-loaded nanoparticle formulation, PPRX-1701, which was well tolerated and able to reach intracranial brain tumors in mouse models.
Indirubins are bisindole alkaloid compounds used as a component of traditional Chinese medicine for the treatment of proliferative disorders and autoimmune conditions. Indirubin is a component of the Indigo naturalis extract ( ). BIA is widely known as a GSK-3 inhibitor ( ), but several other kinases have been found to be inhibited by this compound, including cyclin-dependent kinases and Src family kinases ( ). Here, we report that BIA has significant effects on BTB permeability by reducing the expression of BTB signature genes, including the adherens junction protein CDH5 [vascular endothelial cadherin (VE-cadherin)]. BIA treatment increased cisplatin accumulation in tumor tissue in mouse tumor models, but not in healthy brain, and enhanced the cytotoxic capacity of cisplatin. BIA in combination with cisplatin prolonged survival in xenograft GBM models. Together, our work provides evidence of potential candidate targets at the BTB and supports the use of BIA for improved drug delivery and chemotherapy potentiation in GBM.

Identification of GBM endothelium–enriched transcripts via in silico screening
CDH5 is highly expressed in GBM tumor–associated endothelium
BIA targets BTB-related transcriptional programs in brain endothelial cells
BIA disrupts barrier formation in BBB models in vitro
BIA targets several kinases in brain endothelium in vitro
BIA increases intratumoral drug accumulation in xenograft models of GBM
BIA potentiates cisplatin cytotoxicity by fostering its DNA damage capacity
BIA enhances cisplatin preclinical efficacy in patient-derived GBM xenografts

The BTB represents an obstacle to therapeutic drug delivery and remains a poorly defined component of GBM biology. Thus, to identify molecular signatures of the GBM vasculature for targeting of the BTB, we performed an in silico–based approach by accessing bulk RNA sequencing (RNA-seq) data from The Cancer Genome Atlas, Rembrandt, and the Ivy Glioblastoma Atlas Project (IVY GAP) databases ( ). In the cBio portal, we used Spearman's rank correlation to select GBM genes that showed coexpression with four well-characterized endothelial cell reference transcripts defined by Dusart et al. ( ) for screening brain tumor–associated vascular gene expression signatures: Platelet Endothelial Cell Adhesion Molecule-1 ( PECAM1 )/ CD31 , CD34 , Von Willebrand Factor ( VWF ), and C-type lectin 14A ( CLEC14A ) (table S1). Next, we used the Rembrandt dataset to identify the genes of this list enriched in GBM tissue above healthy controls, and lastly identified their tumoral regional expression using the IVY GAP resource, a laser microdissection–based bulk RNA-seq dataset obtained from clinical specimens. With this approach, we identified a signature comprising 12 GBM endothelium–enriched genes ( ). These genes present abundant expression at microvascular proliferation regions in the tumor ( ). Gene Ontology (GO) analysis revealed their primary involvement in vasculature system development, morphogenesis, cell migration, responses to transforming growth factor–β (TGF-β) pathway activation, and positive regulation of receptor serine/threonine kinase signaling ( ). Search Tool for the Retrieval of Interacting Genes (STRING) network analysis showed that the genes form significant interactions within the network [protein-protein interaction (PPI) enrichment P value: <1.0 × 10−16]. With the exception of MYO1B , all genes were interconnected ( ), suggesting strong functional relationships within the identified signature gene set.
To confirm the endothelial-associated specificity of the identified 12-gene network, we reperformed our analysis using PECAM1 and VWF only as probe marker genes. This led to a reduced list of six genes that were also identified using the Dusart et al. markers ( ): ACVRL1 , CD93 , ENG , ENPEP , MYO1B , and PCDH12 (fig. S1, A to C). Analysis of CD34 and CLEC14A expression using the IVY GAP resource showed enrichment of these genes at microvascular proliferative regions (fig. S1D), supporting their regional enrichment at the GBM-associated vasculature.

Given the pleiotropic nature of these identified transcripts, as well as some known nonvascular functions, it is feasible that our bulk RNA-seq–based analysis is picking up transcript signals from other cell types. To address this, we consulted a recently published single-cell brain vasculature atlas by Wälchli et al. ( ). Using the reported interactive website ( https://waelchli-lab-human-brain-vasculature-atlas.ethz.ch/ ), we visualized our vascular reference markers ( PECAM1 , VWF , CD34 , and CLEC14A ) and the 12 newly identified GBM endothelium–enriched transcripts in GBM clinical sample–sorted single cells (fig. S2A). Our GBM endothelium–enriched transcripts were highly expressed in endothelial cells above other cell types in the tumor microenvironment, with the exception of CD93 , PDGFRB , and ENPEP . CD93 was highly expressed in neutrophils, whereas PDGFRB and ENPEP expression was mostly seen in pericytes and smooth muscle cells, both of which are components of the BBB. Furthermore, there was increased expression of our GBM-enriched signature genes in fluorescence-activated cell sorting (FACS)–sorted GBM endothelia above endothelia of healthy temporal lobe and of other brain malignancies such as low-grade glioma, metastases, and meningioma (fig. S2B). We then examined the expression of our endothelial-associated transcripts across different endothelium subtypes from FACS-sorted GBM endothelial cells (fig. S2C). Our findings indicate that angiogenic capillaries have enriched expression of these genes. MYO1B , ENPEP , and PDGFRB were strongly expressed in cells undergoing endothelial-to-mesenchymal transition (EndoMT) and in proliferative EndoMT cells, suggesting that these genes are confined to a subset undergoing mesenchymal state transitions.

The STRING analysis showed that CDH5 (VE-cadherin, CD144) may represent a central node in the BTB signature gene network. CDH5 is a calcium-dependent adherens junction protein with a fundamental role in maintaining BBB integrity ( ). To further investigate its potential role in the BTB of GBM, we reanalyzed spatial transcriptomic data from malignant glioma tissue samples published by Ravi et al. ( ) and confirmed the enriched expression of CDH5 in comparison to matched nontumor brain cortex tissue in UKF_248 ( ), UKF_242, UKF_259, and UKF_334 (fig. S3, A to C). CDH5 was highly expressed in clusters enriched in vascular markers such as PECAM1 and VWF ( , and fig. S3, D to F). However, CDH5 was also enriched in other, PECAM1 -negative clusters, which indicates that CDH5 spatial expression is heterogeneous and occurs in multiple cell types in addition to endothelial cells. CDH5 -expressing clusters were enriched in Biological Process GOs related to "Regulation of Vasculature Development," "Angiogenesis," and "Vascular Process in Circulatory System" ( ) (fig. S4), supporting its involvement in vascular biology in GBM.
Moreover, we identified a list of 61 additional genes regionally coexpressed with CDH5 (table S2), which includes genes such as CCL2 ( ) and WNT7B ( , ), which have reported roles in the BTB and BBB, respectively. To confirm the presence of CDH5 in GBM tumors, we performed immunofluorescence (IF) staining of GBM patient specimens with anti-CDH5 in combination with anti-CD31 antibodies. GBM tumors were confirmed in these samples by histology (fig. S5, A to F). We observed CDH5 expression across the tumor specimens (fig. S6, A to F), present in CD31 + blood vessels as expected, with diverse correlation of CD31 and CDH5 coexpression across samples (fig. S6G). CD31-negative cells also expressed CDH5; potentially, these represent tumor cells undergoing vascular mimicry, which are known to express CDH5 ( – ).

To reveal CDH5 expression heterogeneity in the BTB, we performed IF visualization of CDH5 in CD31 + blood vessels in a patient-derived xenograft GBM model, focusing on the tumor core and leading edge regions (fig. S7A). This imaging showed strong expression of CDH5 not only in CD31 + vasculature but also in green fluorescent protein (GFP)–labeled engrafted tumor cells. Pearson correlation analysis did not show any regional difference in CDH5/CD31 + vessel coexpression (fig. S7B). However, CDH5 is significantly more expressed in CD31 + blood vessels than in GFP + tumor cells at the core and at the leading edge (fig. S7C), confirming that CDH5 is enriched in tumor-associated vasculature. Together, these data demonstrate the presence of CDH5 in GBM endothelium, where it may play a role in BTB function.

Previously, we demonstrated that BIA has anti-angiogenic effects in murine intracranial models of GBM ( ). This led us to investigate the transcriptional alterations associated with BIA treatment of brain endothelium. Bulk RNA-seq analysis of a well-characterized human brain microvascular endothelial cell (HBMEC) line, HCMEC/D3, treated with BIA showed considerable transcriptional dysregulation. The top 15 differentially expressed genes (DEGs) are displayed according to significance in a heatmap ( ). CDH5 was one of the most down-regulated genes upon BIA treatment (−3.06-fold, log2). GO analysis of significantly up-regulated (862 genes) and down-regulated (652 genes) differentially expressed transcripts revealed that BIA mostly induced expression of genes in processes related to amino acid transport. BIA also decreased expression of genes involved in annotated processes of cell migration, motility, angiogenesis, and endothelial proliferation, as well as nitric oxide synthesis and pathways of receptor tyrosine kinases ( ). Volcano plot analysis ( ) of log2 fold change (log2FC) versus P value significance of down-regulated DEGs highlights CDH5 and other angiogenesis-related genes such as MMRN2 (a direct interactor with CDH5), CD93, ACVRL1, KDR, SMAD6, and S1PR3. BIA also promoted expression of genes such as PHGDH , AXIN2 , TCF7 , VLDLR , and VEGFA , showing that BIA has broad effects on genes involved in diverse pathways. We then examined dysregulated genes that are potentially involved in BTB biology by focusing on BBB permeability/integrity and on biological functions of angiogenesis ( ). BIA modulates eight of the 12 BTB signature genes we identified in the in silico screening from clinical samples ( , highlighted). Most of these genes were down-regulated by BIA, except PCDH12 , which increased its expression.
This finding suggests that BIA targets the expression of BTB-associated transcriptional programs in brain endothelial cells. Our spatial transcriptomic analysis of sample UKF_248 showed the increased expression of these 12 GBM endothelium–enriched transcripts in GBM clinical samples above cortex controls ( ), spatially coexpressed with endothelial markers by clustering analysis ( ). These genes were expressed across different tumor samples (fig. S8), indicating the relevance of these pathways in GBM.

Given the prominence of CDH5 in the BTB transcriptome and its known role in maintaining vascular barrier integrity, we focused our efforts on further characterizing CDH5 expression in the endothelium upon BIA treatment. IF staining of CDH5 showed a marked decrease at the membrane periphery in endothelial cells in vitro after treatment with BIA (fig. S9A). We also observed a marked reduction in the BBB tight junction molecule ZO-1 but no difference in levels of Claudin-5 (fig. S9A). BIA decreased levels of CDH5 mRNA in two brain endothelial cell lines, which remained reduced for up to 48 hours following BIA treatment (fig. S9B). BIA also reduced the expression of CDH5 in G34-pCDH GBM cells, with a simultaneous decline of WNT7B and S1PR3 expression, suggesting that BIA can modulate these endothelial barrier-related molecules in the tumoral context as well and is not restricted to vascular cells only (fig. S9C). Protein levels of CDH5 reached maximum reduction at 12 hours post-BIA treatment and remained down-regulated for a further 48 hours (fig. S9D).

To understand whether BIA might alter barrier formation properties in brain endothelial cells, we performed trans-endothelial electrical resistance (TEER) analysis of monolayers of HCMEC/D3 cells. Treatment with BIA led to a marked decrease of barrier integrity ( ). Moreover, addition of BIA 24 hours after plating endothelial cells completely prevented barrier establishment in two endothelial lines (fig. S9E). These effects occurred from 100 nM to 10 μM BIA (fig. S9F), confirming that BIA can disrupt BBB integrity in vitro. To confirm that CDH5 is an important factor in modulating the loss of barrier formation in HCMEC/D3 cells, we electroporated a siRNA against cadherin 5 (siCDH5) into HCMEC/D3 cells and assessed their capacity to form a barrier via TEER ( ). Loss of CDH5 decreased the rate of barrier formation in these cells, supporting the notion that the loss of barrier formation in brain endothelia due to BIA is, at least in part, because of CDH5 down-regulation.

We next tested the effects of BIA on vascular permeability by measuring dextran uptake using an in vitro multicellular BBB spheroid model ( , – ), which has been shown to be effective for measuring the permeability of peptides and chemotherapies. In this experiment, BIA decreased the expression of CDH5 in a dose-dependent manner ( ), as shown by IF staining. F-actin was also reduced considerably ( ). Incubation of a fluorescent dextran (70 kDa) with the BBB spheroids treated with BIA showed a dose-dependent increase in permeability ( ). To understand whether the effects we observed are a consequence of endothelial cell death, we screened for apoptosis via flow cytometry, which did not show late apoptosis/necrosis at any of the BIA concentrations used in comparison to a cisplatin control (fig. S10A). Cellular adenosine triphosphate (ATP) content was reduced up to 30% in the ~1 to 5 μM BIA range and ~50% and above for HBMECs treated at the same concentrations (fig.
S10B), indicating that BIA affects endothelial cell metabolism. Visual assessment of HCMEC/D3 cells treated with BIA did not reveal signs of apoptosis or necrosis but rather an elongated phenotype with long filipodia (fig. S10C). Cell cycle analysis via flow cytometry showed a slight decrease in the proportions of cells in G1 and G2/M phases, indicating that BIA affects endothelial cell proliferation but does not induce cell death at the concentrations tested (fig. S10D). Cell counts of endothelial cells treated continuously with BIA showed that cell numbers decreased significantly after 4 days posttreatment (fig. S10E).

BIA is a broadly selective protein kinase inhibitor ( , ). To elucidate the kinase signaling pathways altered by BIA that could be involved in barrier modulation, we treated HCMEC/D3 cells with BIA and performed phospho-kinase array profiling ( ). We observed a decrease of activating phosphorylation in members of the mitogen-activated protein kinase (MAPK) family (p38α, c-Jun N-terminal kinase 1, mitogen- and stress-activated protein kinase 1/2, and extracellular signal–regulated kinase 1/2), the SRC family (SRC, YES, and FGR), and transcription factors at activator sites [cAMP response element–binding protein (CREB), signal transducer and activator of transcription 1 (STAT1), STAT2, STAT5a/b, and c-JUN]. The MAPK and SRC pathways are known to control endothelial transcriptional programs through CREB and other transcriptional regulators ( – ). On the other hand, we observed increased phosphorylation of STAT3 at S727 and Y705 and of p70 S6 kinase, which suggests activation of the mammalian target of rapamycin (mTOR) pathway. Secretome analysis of HCMEC/D3 cells treated with BIA indicates a proinflammatory secretion profile, with an increase of cytokines such as tumor necrosis factor–α, interferon-γ, interleukin-17A (IL-17A), IL-6, IL-1β, prolactin, CCL8, and CCL4, among others (fig. S11A), whereas significant down-regulation was seen for CCL2, CCL5, Angiopoietin-2, and CXCL10 (fig. S11B). We confirmed the increased expression of IL-6 and the reduction of WNT7A/B by Western blot, which correlated with GSK3β (Ser 9 ) reduction (fig. S11C). Moreover, we confirmed our phospho-kinome array results by observing down-regulation of phosphorylation of STAT5A/B, TYK2, p38α, SRC, and CREB upon various BIA doses in endothelial cells (fig. S11D). This correlated with decreased CDH5 expression. Overall, our results indicate that BIA operates at different cellular signaling levels that induce diverse biological changes in brain endothelium, which might be required to induce the endothelial barrier disruption phenotype observed.

To understand whether BIA could also increase permeability of the BTB in the context of GBM in vivo, we implanted patient-derived GBM cells (G30) in nude mice, treated them with BIA, and administered sodium fluorescein as indicated in . Increased accumulation of sodium fluorescein within the tumor was observed after BIA administration, in comparison with untreated controls ( ). Analysis of the fluorescent signal showed significant accumulation in the tumor but not in the healthy brain, suggesting that BIA administration promoted intratumoral uptake of sodium fluorescein. We then interrogated whether BIA treatment could increase the intratumoral accumulation of cisplatin, a non–brain-penetrant chemotherapy. For this, we injected cisplatin (5 mg/kg) and allowed it to circulate in the system for 5 hours.
We collected and processed the tissue downstream (see Materials and Methods) for inductively coupled plasma mass spectrometry (ICP-MS)–based platinum quantification (fig. S12A). Pretreatment with BIA permitted significant cisplatin intratumoral accumulation in patient-derived ( ) and syngeneic murine GBM tumors ( ). No significant difference in uptake was seen in contralateral healthy brain regions, indicating that BIA acts selectively in the tumor but not in the brain. Moreover, no difference in platinum accumulation was seen in peripheral tissues such as the heart or liver, thereby supporting the notion that BIA selectively increases cisplatin uptake in tumor but not healthy tissue (fig. S12B). Tumor sizes between control and BIA-treated groups before administration were comparable by IF imaging for the patient-derived and syngeneic models (fig. S12, C to E). Further studies showed that the uptake of cisplatin is dependent on the dose of BIA ( ).

To test possible mechanisms of how BIA operates in augmenting drug accumulation in tumors, we treated GBM cells (fig. S12F) and brain endothelial cells (fig. S12G) with BIA and cisplatin simultaneously. In either case, we did not observe any advantage in drug accumulation due to BIA addition, suggesting that direct cellular internalization is not a mechanism of operation for BIA. Treatment of endothelial cells with BIA did not show any changes in the protein levels of CAV1 or MFSD2A (fig. S12H), important molecular actors in endocytosis and transcytosis at the BBB.

Next, we evaluated CDH5 expression in our patient-derived xenograft GBM models and its potential alterations upon BIA treatment. Administration of BIA showed a notable decrease of CDH5 in the whole section and in CD31 + endothelial cells 24 hours after treatment ( ). On the other hand, we did not observe significant changes in the expression of CDH5 in contralateral healthy brain regions, which is consistent with the observation that the increased drug delivery effects due to BIA are specific to tumor-associated endothelium. In addition, we assessed the expression of ZO-1 and Claudin-5 in these tissues. We observed mild reductions of ZO-1 expression as well, but no visible differences in Claudin-5 staining (fig. S12I).

To identify any possible effects of BIA treatment on additional components of the BBB/BTB endothelium or basement membrane, we stained our G9-pCDH xenograft model for the endothelium (CD31) (fig. S13A), pan-laminin (fig. S13B), and COL1A1 (fig. S13C). We did not observe morphological differences in the CD31 + blood vessels nor changes in the signal intensity of COL1A1; however, we did find a marked decrease of laminin staining on these vessels. This indicates that BIA might alter the expression of additional elements of the BTB besides CDH5 in the endothelial compartment, which supports the notion that BIA can modulate the BTB transcriptome at multiple molecular layers. We performed IF for additional GBM endothelium–enriched transcripts in two patient-derived xenograft models. We observed a considerable reduction of ACVRL1 staining at the endothelium (fig. S14A) upon BIA administration above control. We also saw a mild decrease of PDGFR-β + cells alongside CD31-expressing vessels (fig. S14C) and of endoglin (fig. S14D) in the vascular regions of G9-PCDH and, to a greater extent, in G34-PCDH. We observed considerable down-regulation of endothelial cell-selective adhesion molecule (ESAM) after BIA treatment (fig.
S14B) in the G34-pCDH model but not in G9-pCDH, which might reflect intermodel variability of response to BIA. Collectively, our data provide evidence that BIA selectively targets the tumoral vasculature at the BTB, down-regulating CDH5 expression and other GBM endothelium–enriched transcripts, disrupting tight junction formation, and increasing the accumulation of chemotherapy in murine GBM tumors.

Before animal efficacy studies, we asked whether BIA and cisplatin in combination could show a greater therapeutic advantage than administration of either agent alone. Several studies have shown cytotoxic synergy of small-molecule kinase inhibitors in combination with cisplatin in cancer ( – ). Accordingly, we cultured a panel of patient-derived GBM neurospheres and treated them with the BIA and cisplatin combination, with single-treatment groups as controls ( ). Using a cell viability assay, we observed that combination with BIA markedly increased the cytotoxic effects of cisplatin alone. The most significant combinatorial effects were observed at cisplatin concentrations of 1 μM and below. BIA single-treatment controls only mildly reduced cellular ATP production. In accordance, the BIA/cisplatin combination decreased the neurosphere formation capacity and growth of G9 and G34 cells (fig. S15, A to D).

To identify potential synergistic interactions between BIA and cisplatin, we used SynergyFinder 3.0 software. BIA potentiated cisplatin toxicity (overall δ-score = 8.24) at concentrations of 2.5 μM and below ( ). We also identified a high likelihood of synergy (highlighted area, δ-scores > 10) at the lower doses of cisplatin (~0.6 to 2.5 μM) in combination with all tested BIA doses ( ). At the upper cisplatin dose ranges, its interaction with BIA remained nonsynergistic. Thus, cisplatin and BIA in combination show synergistic antiglioma cytotoxic effects.

Next, we assessed the DNA damage levels of the BIA/cisplatin combination by IF imaging of γH2AX nuclear foci. This showed that the BIA/cisplatin combination significantly augmented the frequency of γH2AX foci in the nuclei of GBM cells above single-treatment and nontreated controls ( ). This increase in γH2AX events in the BIA/cisplatin combination was also observed by flow cytometry, which correlated with loss of cell cycle progression (fig. S15, E and F). We evaluated the phosphorylation and protein levels of CHK1, an important regulator of the DNA damage response during cisplatin exposure ( ). Simultaneous exposure to BIA and cisplatin reduced the expression of CHK1 and its activation (Ser 345 ) more than single-treatment controls. In turn, γH2AX levels were induced upon this combination ( ). Given the strong depletion of CHK1 activity, we performed small interfering RNA (siRNA)–dependent knockdown in our GBM cell lines. Use of siCHK1 increased the susceptibility of these cells to cisplatin titrations, mainly at concentrations below 1 μM cisplatin (fig. S15G), supporting the notion that targeting of CHK1 is an important factor in the BIA-induced potentiation of cisplatin cytotoxicity.

Last, we investigated whether BIA and cisplatin combination regimens could provide a therapeutic effect in our intracranial GBM murine models. We proceeded with a dosing regimen of BIA preadministration 24 hours before cisplatin injection at 5 mg/kg to promote and maintain increased platinum delivery ( ).
The BIA and cisplatin combination regimen prolonged the survival of tumor-bearing mice significantly ( P = 0.0052) over the single-treatment and control arms, indicating efficacious results for this approach ( ). BIA is highly hydrophobic, making it difficult to dissolve in physiological solutions, which limits its clinical translation. To address this, we used PPRX-1701, a formulation of BIA designed for improved in vivo delivery ( ), which inhibits GSK3 as indicated by a G9-TCF cell line reporter (fig. S16, A and B). Previously, we have shown that PPRX-1701 is not toxic when administered systemically in C57BL/6 mice, as shown by liver and spleen histology ( ). We implanted a second patient-derived GBM xenograft model ( ) and performed systemic preadministrations of PPRX-1701 before cisplatin injections. The combination of PPRX-1701 with cisplatin was also more efficacious in comparison with vehicle control plus cisplatin, PPRX-1701 alone, and nontreated controls ( P = 0.0016) ( ). Assessment of DNA damage by γH2AX staining indicated that PPRX-1701 enhanced the genotoxicity of cisplatin, correlating with the extended survival observed ( ).

Together, our data highlight potential molecular targets associated with the BTB in GBM. In addition, we demonstrated that BIA exerts preclinical efficacy in GBM murine models through its dual capacity to selectively target transcriptional programs of the BTB, promoting intratumoral drug delivery, and to synergize cytotoxically with DNA-damaging chemotherapy ( ).

Effective drug delivery remains a major challenge for the treatment of brain tumors. Here, we have identified a network of genes associated with the BTB in GBM and have demonstrated the dual functionality of the indirubin derivative BIA to increase intratumoral drug delivery by targeting the BTB-associated gene network and to enhance chemotherapy cytotoxicity via modulation of the DNA repair machinery. Our work should provide grounds for further studies progressing BTB-targeting approaches toward clinical application.

We used an in silico strategy based on bulk RNA-seq datasets and the GBM-associated vascular markers identified by Dusart et al. ( ) to identify a set of 12 genes (GBM endothelium–enriched transcripts) with elevated regional expression within the tumoral endothelium in GBM. We performed this analysis by identifying gene coexpression with the known endothelial markers PECAM1 and VWF , as well as with the angiogenic and tumoral vasculature reference transcripts CD34 and CLEC14A . CD34 is also a well-known marker of progenitor bone marrow cells, some with reported potential for microglial differentiation ( , ). Moreover, albeit infrequently, GBM tumor cells have reportedly presented CD34 positivity ( ). In contrast, CD34 expression has been consistently reported in progenitor endothelial cells and tip angiogenic cells ( – ), being widely used as an endothelial marker when used simultaneously with the PECAM1 and/or VWF canonical markers. CLEC14A has been implicated in tumoral angiogenesis ( ) and is used as a tumor endothelial marker ( ), undergoing preclinical applications such as CLEC14A-specific chimeric antigen receptor T cell (CAR-T) targeting ( ). Thus, we proceeded using these markers for tumoral vasculature screening. We performed spatial transcriptomic data reanalysis ( ) of CDH5 and the other identified GBM endothelium–enriched transcripts.
This showed high spatially clustered expression of these genes in perivascular regions of GBM clinical samples, suggesting a functional relevance for this disease. Most of the GBM endothelium–enriched transcripts have been associated with angiogenesis and blood vessel recruitment, especially ACVRL1 , CD93 , ENG , FLT4 (VEGFR3), and PDGFRB . Previous studies ( ) have indicated the coexpression of ACVRL1 , CDH5 , CLEC14A , PECAM1 , ENG , GRP4 , ROBO4 , and PCDH12 in the tumor-associated endothelium of several solid tumor types, including GBM. This gene set has been related to vascular development, blood vessel morphogenesis, and tumor angiogenesis processes. These biological processes have also been identified in the endothelium of primary GBM specimens ( , ). The mentioned reports support our findings and indicate the functional relevance of these genes in the vascular developmental processes of the BTB in GBM.

It is important to note that the identified GBM endothelium–enriched transcripts also have functional roles in other, non-endothelial compartments. For instance, PDGFR-β is a well-known pericyte marker, although it has also been reported to have a role in endothelial angiogenesis via the Talin1/FAK axis ( ). ACVRL1 deficiency promotes arteriovenous vascular malformations due to reduced mural cell coverage upon VEGF stimulation ( ). Mutations in ACVRL1 and ENG promote the incidence of hereditary hemorrhagic telangiectasia, which presents with increased bleeding and vascular aberrations ( ). ROBO4, a member of the ROBO family of receptors for Slit ligands, has been shown to modulate BTB permeability ( ) but also guides and modulates angiogenesis for vascular network structuring ( ). Moreover, ROBO4 performs as a guide molecule for cortical neuron development ( ). In a similar manner, PCDH12 is required for timely neuronal differentiation and migration during cortical development ( ). Yet, PCDH12 expression has also been reported in endothelia ( ). These GBM endothelial transcripts have pleiotropic functions, and the consequences of targeting them with BIA in other, non-endothelial compartments remain to be studied.

To confirm that these transcripts are relevant for GBM vascular biology, we screened their single-cell expression in the Wälchli et al. ( ) brain vasculature atlas. The analysis confirmed that the 12 GBM endothelium–enriched transcripts are mostly expressed in endothelial and perivascular cells, that they are up-regulated in GBM vasculature above healthy brain tissue and other malignancies, and that their expression can vary across endothelial subsets, tumors, and patients with GBM. In the context of GBM vasculature, our work links the modulation of these GBM endothelium–enriched transcripts and vascular pathways to altered BTB permeability for improved drug delivery in tumors. Future work by us will involve functional studies on these molecules for a deeper understanding of their involvement in BTB biology.

Our screening led us to investigate CDH5 (VE-cadherin) as a central element in the tumor-associated vascular transcriptome. CDH5 is fundamental for endothelial barrier integrity, but its role in BTB permeability is not fully understood. Transcriptional and IF studies showed prominent expression of CDH5 in vascular regions of GBM clinical samples, and its expression was correlated with endothelial markers above nontumor cortex when analyzed by spatial and single-cell transcriptomics.
Genetic depletion of CDH5 caused a delay in barrier formation capacity in brain endothelial cells, as measured by trans-endothelial resistance, and its down-regulation strongly correlated with increased drug accumulation after BIA injection in our GBM murine models. It was of interest to us that CDH5 was not only present in endothelium but also highly expressed in other cell types negative for CD31, a universal marker of blood vessels. This correlated with our spatial transcriptomic data showing spatial distribution of CDH5 in endothelium and other cell clusters. CDH5 colocalization with CD31 was mainly observed in smaller vessels, but not hyperplastic vessels, an interesting biological effect that we will address in future studies involving multiplex analysis.

Several reports have shown that GBM tumor cells engage in vascular mimicry to facilitate blood vessel formation that permits tumoral migration and invasion ( ). CDH5 has been associated with enabling vascular mimicry capacity in glioma stem cells (GSCs) ( ), permitting the formation of vascular-like structures that supply nutrients and facilitate anti-angiogenic therapy resistance. CDH5 enrichment has been associated with increased immune infiltration and positive prognosis in other solid tumors, such as bladder cancer ( ), thus indicating potential benefits of endothelial CDH5 targeting in combination with immunotherapies. Our data using Pearson correlation for CD31 + and CDH5 coexpression suggested that GBM vasculature expresses CDH5 across the tumor in a similar manner, and this expression is significantly elevated compared to CDH5 + tumor cells, supporting the hypothesis that CDH5 is a central element in the vasculature comprising the BTB.

Our findings show that BIA down-regulates CDH5 gene expression and other GBM endothelium–enriched transcripts, such as ACVRL1 and ENG , in endothelial cells in vitro ( ) and in vascular and tumoral compartments in vivo ( and figs. S10 and S11). We also observed decreased expression of ENG, ESAM, ACVRL1, PDGFR-β, and laminin alongside CD31 + vessels. Efforts are currently under way to elucidate the effects of loss of PDGFR-β in the perivasculature and of laminin at the basal membrane for drug delivery purposes. On the other hand, our bulk RNA-seq studies led us to identify altered expression levels of genes involved in TGF-β and Wingless-Type (WNT) signaling, fundamental pathways in BBB formation and stability. The TGF-β pathway maintains BBB integrity through cross-talk between oligodendrocytes, pericytes, and endothelial cells ( , ). We observed down-regulation of several members of this pathway. The WNT/β-catenin pathway is fundamental for brain and retinal barrier genesis and maintenance, especially the Norrin/WNT7A/B axis ( , ). We observed a decrease of WNT7B and the WNT ligand receptors FZD4 and FZD7 , with a simultaneous increase in expression of the WNT4 , WNT10A , and WNT11 ligands. In addition, transcriptional alteration of genes involved in angiogenesis (i.e., ANGPT2 , ENG , and ANG ) and BTB permeability ( S1PR1 and S1PR3 ) was also seen. As such, CDH5 down-regulation and the alteration of other BBB integrity components might work together to contribute to the BTB permeability modulation exerted by BIA. Future work by our team will focus on functional interrogation of the potential roles of CDH5 and other GBM endothelium–enriched transcripts in the tumor-associated vasculature and their relevance in the permeability of the BTB for drug delivery purposes.
Our current understanding of BTB biology has relied mainly on in vivo models of GBM and patient data. However, several in vitro models are under development to study the molecular and pathophysiological heterogeneity of the BTB in brain tumors ( – ). These models involve utilization of lab-on-a-chip microfluidic models of perivascular, endothelial, and tumoral coculture ( – ) under blood flow–simulating conditions. These models will also aid in screening other BTB-modulating compounds and their mechanisms of action, allowing the identification of additional therapeutic options for BTB targeting. The BTB shows both inter- and intratumoral heterogeneity, with some regions maintaining healthy BBB characteristics, preventing efficacious intratumoral drug accumulation ( , ). Spatial single-cell studies must be pursued to elucidate BTB heterogeneity for the identification of drug biodistribution modulators in GBM. These could be coupled with mass spectrometry–based tools, such as matrix-assisted laser desorption ionization–mass spectrometry imaging ( ) and laser ablation ICP-MS ( ), which would be useful to inform on drug uptake in a regional fashion.

The administration of BIA to tumor-bearing xenograft and syngeneic mice enhanced the accumulation of cisplatin and sodium fluorescein in the brain tumor tissue but not the healthy brain. The specificity of this effect toward tumorigenic regions remains under study by us. It is likely that BIA, being a small-molecule kinase inhibitor, targets cells with elevated kinase signaling activity, such as angiogenic/proliferative endothelium, but spares the slow-cycling/quiescent cells that constitute nontumorigenic brain vascular networks. Vascular development and motility programs are active in angiogenic endothelial cells, and the multitargeting quality of BIA can dysregulate multiple elements involved in these pathways. We also observed inactivation by dephosphorylation of the endothelial nitric oxide synthase, an important blood pressure regulator, and of the p38α/CREB axis, which can control the gene expression of CDH5 and other genes important to endothelial biology. The p53 tumor suppressor protein also showed an increase of activating phosphorylation (S46) and a loss of phosphorylation at sites modulating proapoptotic (S392) and gene regulation (S15) activities. The p53 factor has been involved in modulating endothelial vasodilation and functions in vascular remodeling ( – ).

On the other hand, BIA promoted the expression of genes relevant to l-serine metabolism and amino acid transport processes. l-serine has been reported to improve cerebral blood flow, which provides neuroprotection during CNS disease ( ). In this regard, normalized blood flow can also promote drug accumulation in solid tumors ( , ). The mTORC1 complex is an important amino acid sensor, which regulates protein synthesis and energy modulation. We observed an increased phosphorylation of p70 S6 kinase, a downstream target of the mTORC1 pathway. This is consistent with the observation that BIA promotes the expression of genes related to amino acid transport and synthesis. This could be a result of indirubin-derived metabolic secondary effects. Future work focusing on the potential role of the mTOR pathway in GBM vascular permeability should be performed to confirm and address the therapeutic relevance of these observations.

Simultaneous exposure to BIA and cisplatin had a synergistic killing effect in GSC-like cells. This correlated with increased DNA damage and CHK1 inhibition.
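The synergy analysis earlier in this work summarized combination effects with a δ-score. For intuition on what such scores capture, the sketch below computes the simpler Bliss-independence expectation for a drug pair; the inhibition values are hypothetical, and SynergyFinder's models (e.g., ZIP) are more elaborate, but the excess-over-expectation logic is analogous.

def bliss_excess(inh_a: float, inh_b: float, inh_combo: float) -> float:
    """Observed minus expected combined inhibition under Bliss independence.
    Inputs are fractional inhibitions in [0, 1]; a positive excess suggests synergy."""
    expected = inh_a + inh_b - inh_a * inh_b
    return inh_combo - expected

# Hypothetical viability readings: drug A alone 20% inhibition, drug B alone 30%,
# combination 60% -> an excess of 0.16 (16 percentage points) over the Bliss expectation.
print(f"Bliss excess: {bliss_excess(0.20, 0.30, 0.60):.2f}")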
Other studies have shown that indirubin derivatives induce DNA damage in HCT-116 cancer cells ( ). However, the present work reveals a previously unidentified applicability of BIA, and potentially other indirubins, in combinatorial regimens to synergize with DNA-damaging chemotherapy. Administration of BIA or PPRX-1701 nanoparticles, which we have previously shown to be safe at 20 mg/kg intravenously and to reach the murine brain ( ), followed by cisplatin after 24 hours, significantly extended survival in two different GBM xenograft models. Most likely, this improved preclinical efficacy stems from the increased intratumoral platinum delivery and the additive cytotoxicity exerted by both agents. Given this finding, other DNA-damaging chemotherapeutics should be screened in combination with BIA to identify alternative drug candidates that would benefit from the increased accumulation and the antineoplastic synergism of BIA in GBM treatment. The mechanism by which BIA down-regulates CHK1 expression at the protein level, and which alternative therapeutic modalities (e.g., TMZ and radiotherapy) would benefit from simultaneous BIA administration, remain under investigation. We do not yet have evidence that the BIA-induced down-regulation of CDH5 and other GBM-associated transcripts, and the resulting vascular permeability, are mechanistically related to the synergistic antiglioma effect observed with cisplatin; more likely, these represent independent pathways targeted simultaneously by BIA. Our Western blot and siRNA experiments instead link the synergy to BIA modulation of CHK1, which potentiated the cytotoxic effects of cisplatin in GSCs. Clinical studies performed with indirubins are very limited. Oral administration of indirubin has been investigated for the treatment of ulcerative colitis and inflammatory bowel disease, with doses ranging from 0.5 to 2.0 g daily ( , ), and for chronic myeloid leukemia, with intravenous administration of 100 mg daily ( ). No synthetic indirubin, including BIA, has been tested in the clinic to date. Using a basic allometric scaling calculation ( ), we infer that approximately 113 mg of BIA would be necessary for a 70-kg person to achieve effects similar to those seen with the 20 mg/kg dose used preclinically. Hence, the preclinical dosing we present here has the potential to be directly translated into clinical trials using BIA for GBM management. Other strategies to improve drug delivery in GBM involve vascular normalization ( – ), using bevacizumab ( ), and focused ultrasound for transient disruption of the BBB ( ), among other approaches ( ). Some of these methods are constrained by dose-limiting toxicity concerns. In the case of BIA, further challenges would involve the identification of its optimal clinical dose to achieve maximal delivery of a coadministered therapeutic, and this should be performed in parallel with metabolic imaging methods, such as magnetic resonance (MR) spectroscopy, to assess cotreatment internalization dynamics ( ). Successful clinical evaluation of drug delivery would accelerate the translation of BIA into further clinical trials in combination with additional antineoplastic agents. Together, our work reveals novel molecular markers of the BTB, which in future studies should be functionally characterized to understand their role in the biology of the BTB-GBM interaction.
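For readers who want to retrace the 113-mg estimate above, it is consistent with the standard body-surface-area (Km factor) conversion; the Km values of 3 for mouse and 37 for human are the commonly used constants and are our assumption of the exact factors applied:

\[
\mathrm{HED} = 20\ \mathrm{mg/kg} \times \frac{K_{m,\mathrm{mouse}}}{K_{m,\mathrm{human}}} = 20 \times \frac{3}{37} \approx 1.62\ \mathrm{mg/kg}, \qquad 1.62\ \mathrm{mg/kg} \times 70\ \mathrm{kg} \approx 113\ \mathrm{mg}.
\]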
The identification of BIA as a selective regulator of BTB permeability for improved drug delivery and as a potentiating agent of DNA-damaging chemotherapy supports the use of BIA in further preclinical and clinical studies of GBM. Primarily, further research should be pursued on screening for non–BBB-penetrant chemotherapies and biologicals that would benefit from higher intratumoral internalization in combination with BIA, such as clinically tested small-molecule inhibitors, DNA-damaging chemotherapies, and therapeutic antibodies. Experimental assessment of whether BIA can also potentiate alternative drug internalization–promoting strategies now tested in the clinic, such as focused ultrasound or vascular normalization, to maximize pharmaceutical intratumoral accumulation would be highly relevant.

Data and statistical analysis

Numerical results were recorded, graphed, and statistically analyzed using the Prism software (GraphPad). Experiments were independently replicated at least three times, unless indicated differently in the figure legends.

GBM clinical specimens

Archived brain tumor tissues [formalin-fixed paraffin-embedded (FFPE) samples] are available via an Institutional Review Board (IRB)–approved protocol (IRB #816619) from the Lifespan Rhode Island Hospital IRB. All samples were from patients who preoperatively consented to the use of their tissues via an informed consent process. This includes archived slides, as well as FFPE tissue samples for molecular analysis. Pathology confirmed the presence of GBM tumor in the specimens.

GBM endothelium–enriched transcript in silico screening

To identify genes related to BTB function, we initiated an in silico–based approach by accessing GBM clinical specimen bulk RNA-seq data from The Cancer Genome Atlas via the cBio portal for Cancer Genomics (https://cbioportal.org/). We performed a correlation analysis of genes coexpressed with the endothelial markers PECAM-1 (CD31), VWF, CLEC14A, and CD34, previously identified as useful markers of GBM vasculature by Dusart et al. ( ). The top 50 correlated genes (Spearman's rank correlation) shared by three of the four markers were selected, and their expression levels in GBM tumors were compared to healthy brain using the GlioVIS portal ( https://gliovis.bioinfo.cnio.es/ ) to visualize the Rembrandt study ( ). Those genes significantly elevated in the tumor over the healthy brain were selected as candidate GBM vascular–associated targets due to their possible relevance in GBM.
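To make the screening logic concrete, here is a minimal R sketch of the correlate-and-intersect step; the expression matrix expr (genes x samples, with gene symbols as row names) is a hypothetical stand-in for the TCGA data, and this is a schematic of the approach rather than the cBioPortal implementation:

```r
# Schematic of the coexpression screen; `expr` is a hypothetical genes-x-samples
# matrix of TCGA GBM expression values with gene symbols as row names.
markers <- c("PECAM1", "VWF", "CLEC14A", "CD34")
top50 <- lapply(markers, function(m) {
  rho <- apply(expr, 1, function(g) cor(g, expr[m, ], method = "spearman"))
  names(sort(rho, decreasing = TRUE))[2:51]   # top 50 correlates, skipping the marker itself
})
counts <- table(unlist(top50))
candidates <- names(counts)[counts >= 3]      # genes shared by three of the four lists
```

This reproduces only the correlate-and-intersect step; the tumor-versus-normal and regional filters described in the text follow separately.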
Then, the regional expression of these selected genes in GBM was assessed using the IVY GAP dataset ( https://glioblastoma.alleninstitute.org/ ) visualized in the GlioVIS portal. Using this tool, we further selected genes with increased expression in microvascular proliferative regions, which are associated with the vasculature in tumors. Those genes with significantly enriched expression in the microvascular proliferation regions were designated the GBM endothelial–associated transcripts. All graphs of GlioVIS and cBio portal datasets were generated on the corresponding websites, and pairwise t tests were performed to test statistical significance.

GO analysis

For GO analyses, we used the EnrichR website (https://maayanlab.cloud/Enrichr/), developed by the Ma'ayan lab ( – ). We used the GO Biological Process 2023 visualization tool to identify biological processes of the identified gene sets. The Appyters notebook ( ) linked to EnrichR was used for graphic visualization.

Gene interaction network analysis and gene set clustering

Gene sets were submitted to the STRING database [ https://string-db.org/ ( )]. The interaction score threshold was set to medium confidence (0.4). For interaction analysis of the genes targeted by BIA, we selected genes up- or down-regulated by BIA at or above a twofold change (log2). Only genes presenting at least one interaction were grouped by k-means clustering (k = 4). The gene sets comprising each cluster were submitted to GO analysis (using EnrichR as mentioned above) and ranked by P value significance. The most significant pathway by this method is indicated by a color code in each cluster.

Spatial transcriptomic dataset analysis and clustering methods

Here, we focused on four specific datasets from a larger set of 28, available from ( ) and deposited by the authors in Dryad (https://doi.org/10.5061/dryad.h70rxwdmj). These datasets were collected from patients and comprise tumor and cortex control samples. To analyze the effect of CDH5, we first clustered our spatial dataset and visualized the gene expression spot information in spatial dimensions using the SPATA2 package in RStudio ( ) ( https://github.com/theMILOlab/SPATA2 ). In addition, using the Seurat package (v5.0.0) in R (v4.2.2), the spatial transcriptomic data were processed in several steps. Initially, the data were loaded and preprocessed, followed by normalization using the log-normalization method. Variable features were identified, and the data were scaled accordingly. Principal components analysis (PCA) was then applied to reduce the dimensionality of the dataset, with emphasis placed on the top 20 principal components for subsequent cluster and neighbor analysis based on PCA dimensions. The data were visualized in two dimensions using Uniform Manifold Approximation and Projection (UMAP). The tumor and cortex control datasets were merged into a single Seurat object using the merge function (Seurat::merge()). Subsequently, the spatial layers were processed to facilitate visualization of the data in two dimensions. Gene expression patterns were analyzed using the same dimension reduction plot, and expression levels were assessed with violin plots within each cluster identified by the Seurat algorithm.

Spatial GO analysis

The GO analysis used a cluster-based methodology conducted in R, with clusters determined by the Seurat algorithm. Initially, differential expression analysis was conducted in Seurat to identify genes and their associated cluster information within the samples (Seurat::FindAllMarkers()).
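Condensed into code, the workflow just described looks roughly like the following; this is a minimal sketch, not the exact analysis script, and the input objects tumor_obj and cortex_obj are assumed names for spatial Seurat objects loaded from the public data:

```r
library(Seurat)
# Minimal sketch of the clustering workflow described above; `tumor_obj` and
# `cortex_obj` are assumed spatial Seurat objects loaded from the public dataset.
obj <- merge(tumor_obj, y = cortex_obj)           # combine tumor and cortex controls
obj <- NormalizeData(obj, normalization.method = "LogNormalize")
obj <- FindVariableFeatures(obj)
obj <- ScaleData(obj)
obj <- RunPCA(obj)
obj <- FindNeighbors(obj, dims = 1:20)            # top 20 PCs, as in the text
obj <- FindClusters(obj)
obj <- RunUMAP(obj, dims = 1:20)
DimPlot(obj, reduction = "umap")                  # two-dimensional cluster view
VlnPlot(obj, features = "CDH5")                   # per-cluster CDH5 expression
markers <- FindAllMarkers(obj)                    # genes with cluster assignments
```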
Subsequently, genes were grouped on the basis of their clusters, and GO analysis was performed using the enrichGO function in the clusterProfiler package (https://bioconductor.org/packages/release/bioc/html/clusterProfiler.html). A reference genome-wide annotation for human, primarily using mapping via Entrez Gene identifiers, was obtained from the org.Hs.eg.db package within the Bioconductor library and converted into a data frame. Visualization of the GO data was accomplished using the GOplot package (GOplot::GoBubble()). For optimal visualization, only the cluster containing CDH5 was selected and depicted in the bubble plot.

Spatial pathway analysis

Pathway analysis was conducted using a gene-based approach, in which signature genes corresponding to each pathway were sourced from the Molecular Signatures Database (MSigDB; https://www.gsea-msigdb.org/gsea/msigdb/human/genesets.jsp ). Specifically, our focus was on the WNT pathway, the VEGF angiogenesis pathway, and the TGF-β pathway, with gene extraction performed using the msigdb package ( https://bioconductor.org/packages/release/data/experiment/html/msigdb.html ). These pathways are categorized within the Curated Gene Sets collection (C2 gene sets) under the Biocarta subcollection. The expression patterns of individual genes derived from this methodology were visualized using the FeaturePlot function in R, enabling two-dimensional visualization.

Visualization of identified GBM endothelium–enriched transcripts in GBM endothelium using single-cell RNA-seq datasets

Using the publicly available websites ( https://waelchli-lab-human-brain-vasculature-atlas.ethz.ch/ and https://brain-vasc.cells.ucsc.edu ) from Wälchli et al. ( ), we visualized the expression of our identified GBM endothelium–enriched genes in FACS-sorted endothelial cells and unsorted GBM tumor cells.

Mice

Female Nu/Nu mice (Envigo) and C57BL/6 mice (Charles River Laboratories), aged 8 weeks, were used for in vivo experiments. All procedures followed the guidelines of the Institutional Animal Care and Use Committee, with support from the Center for Animal Resources and Education at Brown University (Preclinical studies on brain cancer, no. 24-09-0004).

Cell lines

The glioma stem cell–like cell lines G9-PCDH, G34-PCDH, G33-PCDH, G62-PCDH, and G30-LRP were obtained and cultured as previously described ( , , ). Briefly, cells were grown as neurospheres using neurobasal medium (Gibco) supplemented with human recombinant epidermal growth factor (20 ng/ml; Peprotech), human recombinant fibroblast growth factor (20 ng/ml; Peprotech), 2% B-27 supplement (Thermo Fisher Scientific), 0.1% GlutaMax (Thermo Fisher Scientific), and 0.1% penicillin/streptomycin (Thermo Fisher Scientific). Cells were left to grow at least overnight for sphere formation. For single-cell dissociation, Accutase (Gibco) was used for 5 min at 37°C. For culturing GL261-Luc2 cells, we used 10% fetal bovine serum (Gibco) with 0.1% GlutaMax and 0.1% penicillin/streptomycin in Dulbecco's modified Eagle's medium/F12 media (Gibco). Growth and culturing of immortalized human cerebro-microvascular endothelial cells (HCMEC/D3) (Sigma-Aldrich), primary HBMECs (ScienCell), human primary astrocytes (Lonza Biosciences), and human primary pericytes (ScienCell) were performed as previously reported ( , ). Briefly, HCMEC/D3 and HBMEC cells were cultured in endothelial cell media (ScienCell) supplemented with fetal bovine serum, endothelial cell growth supplement, and penicillin/streptomycin as provided by the company. Astrocytes and pericytes were grown in the complete formulations of astrocyte cell media (ScienCell) and pericyte cell media (ScienCell), respectively.
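Returning to the per-cluster GO step described under "Spatial GO analysis," a minimal sketch of the enrichGO call is shown below; cluster_genes is a placeholder vector of marker gene symbols for one Seurat cluster:

```r
library(clusterProfiler)
library(org.Hs.eg.db)
# `cluster_genes`: placeholder vector of gene symbols for one Seurat cluster.
ego <- enrichGO(gene          = cluster_genes,
                OrgDb         = org.Hs.eg.db,
                keyType       = "SYMBOL",
                ont           = "BP",            # Biological Process terms
                pAdjustMethod = "BH")
head(as.data.frame(ego))                         # enriched terms with adjusted P values
```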
For immunostaining experiments, HCMEC/D3 and HBMEC cells were grown on type 1 rat collagen–coated plates. These endothelial cells were used below passage 20 to maintain their BBB properties.

Cell viability assay

Cells were plated at a density of 1500 cells per well in black-well, clear-bottom 96-well plates and left to grow in culture conditions overnight. The next day, cells were treated with titrated doses of the indicated compounds. For BIA-only cytotoxicity studies, cells were incubated with BIA for 96 hours. For BIA and cisplatin combinatorial studies, cells were incubated with BIA and cisplatin, and the corresponding controls, for 5 days. Next, we used the CellTiter-Glo 3D assay (Promega) following the provider's guidelines and quantified the luminescence signal using a Molecular Devices SpectraMax M2 plate reader. All conditions were run in triplicate.

Growth in low attachment assay

Fluorescently labeled GBM cells (G9-PCDH and G34-PCDH, GFP-labeled) were plated in clear ultralow-attachment 96-well plates (Costar) at a density of 2000 cells per well in 100 μl of complete neurobasal medium. Cells were then centrifuged at 1200 rpm for 3 min and treated as indicated above, and fluorescence was visualized using a Nikon Eclipse Ti2 microscope. Sphere diameter was measured using ImageJ software. All conditions were run in triplicate.

Synergy analysis of BIA and cisplatin combinations in GBM neurospheres in vitro

To determine whether the BIA and cisplatin combinations present synergistic antineoplastic effects in GBM cell line neurospheres, we used the SynergyFinder 3.0 software ( ). For this, cell viability assays (see above) were performed. Concentrations of 0, 0.3, 1, and 3 μM BIA were combined with 0, 0.62, 1.25, 2.5, 5, and 10 μM cisplatin, accordingly, for an exposure duration of 5 days. CellTiter-Glo 3D assays were performed for cell viability assessment. SynergyFinder 3.0 analysis was done with LL4 curve fitting and outlier correction, following a Zero Interaction Potency (ZIP) synergy score. We performed a ZIP-based analysis because this model maintains low false-positive rates when calculating synergy of anti-oncogenic drugs ( ). For reference, δ-scores of less than −10 could signify antagonism, −10 to 10 additivity, and above 10 synergism.

RNA-seq of HCMEC/D3 cells treated with BIA

For RNA-seq, HCMEC/D3 cells were plated at a density of 500,000 cells per well in a six-well plate, left to grow for 24 hours, and then treated with 1 μM BIA or dimethyl sulfoxide (DMSO; control). After 24 hours, cells were collected and processed for RNA extraction using the column-based RNeasy kit (QIAGEN), following the provider's instructions. RNA quality and quantity were assessed using a NanoDrop One (Invitrogen). At least 500 ng of RNA was submitted for bulk RNA-seq at GeneWiz (Azenta Life Sciences). Quality control (QC) was assessed, and the library was prepared with poly(A) selection. Sequencing was performed using Illumina HiSeq. Differential gene expression in the RNA-seq raw data (FASTQ files) was analyzed by Azenta Life Sciences using DESeq2 after alignment to the human transcriptome. Data QC was verified. Log2FC was calculated as log2(BIA group mean normalized counts/control group mean normalized counts). The Wald test P value and the Benjamini-Hochberg adjusted P value were calculated. A heatmap and a volcano plot of the top DEGs by adjusted P value (Ensembl ID annotation), biclustered by treatment condition, were generated. The control and BIA groups each consisted of three independent samples.
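The ZIP δ-score itself requires full dose-response surface fitting, which SynergyFinder handles internally. Purely to illustrate the underlying "excess over reference" idea, the simpler Bliss-independence excess can be computed by hand; note that this is a stand-in for, not a reimplementation of, the ZIP model used above, and the example numbers are invented:

```r
# Bliss-independence excess, a simpler relative of the ZIP delta-score.
# fa_*: fractional inhibition (1 - normalized viability), each in [0, 1].
bliss_excess <- function(fa_a, fa_b, fa_ab) {
  expected <- fa_a + fa_b - fa_a * fa_b   # expected combined effect if drugs act independently
  fa_ab - expected                        # > 0 suggests synergy; < 0, antagonism
}
# Invented example: 30% inhibition by BIA, 40% by cisplatin, 70% in combination.
bliss_excess(0.30, 0.40, 0.70)            # 0.70 - 0.58 = 0.12 excess inhibition
```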
IF staining

For IF staining of HCMEC/D3 endothelial cells, we coated eight-well Nunc Lab-Tek chamber slides (Thermo Fisher Scientific) with 1× type 1 rat tail collagen (Corning) following the provider's instructions. We then plated cells at a density of 50,000 cells per well and left them in culture for 72 hours to allow barrier formation. Next, we treated with BIA or control for 24 to 48 hours. Cells were then fixed with 10% formalin (Thermo Fisher Scientific) for 10 min, permeabilized for 30 min using 0.01% Triton X-100, and blocked with 0.1% normal donkey serum (Calbiochem) for 1 hour in 0.025% Tween 20 (Thermo Fisher Scientific) in phosphate-buffered saline (PBS) (Gibco). Then, the following primary antibodies were added: mouse anti-CDH5 (1:100; VE-cadherin, BioLegend), rabbit anti–Claudin-5 (1:100; Thermo Fisher Scientific), mouse anti–ZO-1 (1:100; Invitrogen), rabbit anti–pan-laminin (1:300; MilliporeSigma), rabbit anti-COL1A1 (1:100; Thermo Fisher Scientific), rabbit anti-ACVRL1 (1:100; Thermo Fisher Scientific), rabbit anti-ESAM (1:200; Thermo Fisher Scientific), goat anti-endoglin (1:200; R&D Systems), and mouse anti-PDGFRβ (1:100; Abcam). Primary antibodies were incubated overnight in the cold. The next day, the following secondary antibodies were used for 2 hours at room temperature: Alexa Fluor 594 anti-mouse (1:500), Alexa Fluor 594 anti-rabbit (1:500), and Alexa Fluor 647 anti-mouse (1:500), all from Thermo Fisher Scientific. For cytoskeleton staining, phalloidin-iFluor 488 (1:1000; Abcam) was used for 30 min, and nuclei were stained with Hoechst 33342 (1:1000; Thermo Fisher Scientific) for 5 min at room temperature. For GBM cell staining, cells were cultured in 10% DMSO in complete neurobasal media for 2 days and then plated at a density of 50,000 cells per well in eight-well Nunc Lab-Tek chamber slides. IF staining was performed as indicated above for endothelial cells. The primary antibodies used were rabbit anti-γH2AX (Ser139) (Cell Signaling Technology) at a 1:100 dilution and goat anti-rabbit Alexa Fluor 647 at a 1:500 dilution. For mouse brain tissue staining, brains were collected from CO2-euthanized, PBS-perfused tumor-bearing mice and fixed in 10% formalin for 72 hours on rotation in the cold. Brains were then transferred to 30% sucrose for 3 days at 4°C under rotation. Before cryo-sectioning, brains were frozen at −80°C for more than 30 min, embedded in optimal cutting temperature compound (Thermo Fisher Scientific), and transferred to a cryostat (Leica CM1950) at −21°C for sectioning (20-μm thickness). Sections were placed on slides, and staining was performed as indicated above. All pictures were taken using an LSM 880 Zeiss confocal microscope. For CDH5 quantification, images were converted to eight-bit RGB stacks, and red-channel fluorescence was quantified both as total CDH5 and as CDH5-to-CD31+ (blue channel) coverage using the ImageJ (Fiji) software. For GBM clinical specimen staining, tissue was fixed in 10% neutral-buffered formalin immediately after collection for at least 24 hours. The tissue was then dehydrated in an increasing ethanol gradient, cleared in xylene, and embedded in paraffin wax before microtome sectioning. Sections were cut at 4-μm thickness. For IF staining, sections were deparaffinized in xylene for 5 min, twice, and rehydrated in decreasing ethanol gradients (100, 95, and 70%) for 5 min each. Slides were rinsed with distilled water to remove ethanol.
Slides underwent antigen retrieval by heating in a microwave for 1 to 2 min and were placed in citrate buffer (pH 6) for 15 min to cool. Blocking was done with 10% normal donkey serum for 1 hour at room temperature. Slides were then stained as described above for IF imaging. Pearson's correlation analysis of the IF staining was performed using CDH5 and CD31 or GFP as costain signals, with the publicly available ImageJ colocalization threshold plugin Coloc2.

BBB spheroids and dextran permeability assay

BBB spheroids were grown and cultured with a fluorescein isothiocyanate (FITC)–conjugated (70 kDa) fluorescent dextran (MilliporeSigma) as previously reported ( , ). BBB spheroids were grown for 48 hours and then treated with BIA at increasing doses for 72 hours. Spheroids were then collected and stained as indicated above for CDH5 and F-actin (phalloidin). For fluorescent dextran incubation, BBB spheroids were collected in 1.5-ml microtubes (Eppendorf) and incubated for 3 hours at 37°C. Pictures were taken by confocal microscopy. For dextran permeability measurement, we captured 21 Z-stack images at 5-μm intervals, achieving a total depth of 100 μm within the sphere. Fluorescent dextran intensity from the maximal-intensity projection was quantified using ImageJ (National Institutes of Health).

Trans-endothelial electrical resistance

HCMEC/D3 or HBMEC cells were plated in 8W10E+ PET eight-well arrays (Applied Biophysics) at a density of 100,000 cells per well in 500 μl. The arrays were placed in a prestabilized ECIS Z-Theta instrument (Applied Biophysics). Using the ECIS Z-Theta software (Applied Biophysics), measurements were set to 4000 and 64,000 Hz every 30 min. Cells were left to grow and form a barrier for 48 to 72 hours (normally, a resistance plateau was reached and capacitance settled at ~10 nF at 64,000 Hz). Cells were then treated with BIA and left to grow for up to 5 days, with frequent re-addition of drug-containing media to maintain the culture. Resistance (ohm) and capacitance (nF) were recorded and plotted.

Real-time PCR

Total RNA from GBM and HCMEC/D3 cells was obtained and processed as indicated above. For cDNA generation, we used 1 μg of RNA processed with the iScript cDNA synthesis kit (Bio-Rad), following the protocol indicated by the provider. All primers were designed using the NCBI Primer-BLAST tool; detailed information on primer sequences can be found in table S3. Gene expression levels were quantified using PowerUp SYBR Green Master Mix (Applied Biosciences) on a QuantStudio 6 Pro System (Applied Biosciences), normalized to expression of the housekeeping gene GAPDH, and represented as relative expression using the comparative ∆∆CT method.

Western blot

HCMEC/D3 and GBM cell lysates were collected in radioimmunoprecipitation assay buffer (Thermo Fisher Scientific) supplemented with 1× protease/phosphatase inhibitor cocktail (Cell Signaling Technology). Lysates from murine tumor tissue samples (~30 mg) were collected under homogenization using 23G and 26G needles. Total protein concentration was measured using the Pierce 660 nm Protein Assay Reagent (Thermo Fisher Scientific) at 660-nm absorbance in a Molecular Devices SpectraMax M2 plate reader. Samples were incubated in 1× Laemmli sample buffer (Bio-Rad) at 95°C for 5 min before loading onto 10% Mini-PROTEAN TGX precast protein gels (Bio-Rad). The PageRuler Plus Prestained Protein Ladder (Thermo Fisher Scientific) was used as the size standard. Blocking was performed in 5% milk with 0.1% Triton X-100 in 1× PBS (TBST) (Gibco) for 1 hour at room temperature under shaking.
Primary antibodies were incubated overnight in the cold under shaking: anti-pCHK1 (Ser345) (1:100; Cell Signaling Technology), anti-CHK1 (1:1000; Cell Signaling Technology), anti-pH2AX (Ser139) (1:1000; Cell Signaling Technology), anti-WNT7A/B (1:1000; Proteintech), anti–IL-6 (1:1000; Thermo Fisher Scientific), anti–phospho-GSK3b (Ser9) (1:1000; Cell Signaling Technology), anti-vinculin (1:1000; Proteintech), anti–phospho-TYK2 (Tyr1054 and Tyr1055) (1:1000; Thermo Fisher Scientific), anti–phospho-STAT5A/B (Tyr694) (1:1000; Cell Signaling Technology), anti–phospho-p38α (Thr180 and Tyr182) (1:1000; Thermo Fisher Scientific), anti–phospho-SRC (Tyr416) (1:1000; Cell Signaling Technology), anti–phospho-CREB (Ser133) (1:500; Thermo Fisher Scientific), anti-MFSD2A (1:500; Proteintech), anti-CAV1 (1:1000; Proteintech), anti-CD144 (VE-cadherin) (1:1000; Thermo Fisher Scientific), and anti–β-actin (1:2000; Cell Signaling Technology). The appropriate secondary antibody, goat anti-mouse–horseradish peroxidase (HRP; Sigma-Aldrich) or goat anti-rabbit–HRP (Sigma-Aldrich), was used in 5% milk in 1× TBST at a 1:5000 dilution for 1 hour at room temperature.

Phospho-kinase array

HCMEC/D3 cells were plated at a density of 1 million cells and treated with either 1 μM BIA or vehicle (DMSO) for 24 hours. Cell lysates were collected with the manufacturer-provided lysis buffer 6 supplemented with aprotinin (10 μg/ml; Tocris), leupeptin hemisulfate (10 μg/ml; Tocris), and pepstatin A (10 μg/ml; Tocris) for protein preservation. Fifty micrograms of lysate from each sample was loaded onto each membrane. All experimental procedures were performed using the Proteome Profiler Human Phospho-Kinase Array Kit (R&D Systems) following the manufacturer's protocol.

siRNA transfections

G9-PCDH and G30 cells were cultured to approximately 60% confluency, transfected using Lipofectamine RNAiMAX Transfection Reagent (Invitrogen) for 1 day, and then replated for Western blot or cell viability assays. All experimental steps followed the manufacturer's protocol. siCHK1 (Ambion) was used for CHK1 depletion, and MISSION siRNA Universal Negative Control (Sigma-Aldrich) was used as the control siRNA. Electroporation was performed for HCMEC/D3 cell line transfection. Briefly, 10 × 10^7 cells were trypsinized, washed with 1× PBS, and resuspended in Resuspension Buffer R (Invitrogen) at a density of 10 × 10^7 cells/ml. A total of 200 nM siCDH5 (Ambion, 4392420, s223070) or MISSION negative control was added to the cells, and the cell-siRNA mix was transferred to a cuvette. The Neon Transfection System (Invitrogen) was set to 1400 V and a pulse width of 20 ms for two pulses. After electroporation, cells were transferred to a well of a 24-well plate and incubated at 37°C, 5% CO2, for 48 hours, at which point cells were collected for protein lysates and Western blot as indicated above.

Flow cytometry for cell cycle, DNA damage, and apoptosis assays

For cell cycle analysis, 100,000 HCMEC/D3 cells were plated in six-well plates and treated with the indicated concentrations of BIA or control for 48 hours. Cells were then washed twice with PBS and fixed/permeabilized with 5 ml of cold 70% ethanol added drop-wise while vortexing at low speed. Cells were stored for 1 day at −20°C, washed three times with PBS, treated with ribonuclease I (20 μg/ml; Thermo Fisher Scientific), and stained with FITC-conjugated anti-Ki-67 (1:1000; BD Biosciences) and 1.5 μM propidium iodide (Thermo Fisher Scientific). After 30 min of incubation in the dark, cells were analyzed using a CytoFLEX system (Beckman Coulter).
Fifty thousand events were counted, and data were analyzed using the CytoFLEX system software (Beckman Coulter). For apoptosis assessment, HCMEC/D3 cells were treated as indicated above for 72 hours with BIA at the indicated doses. Cells were collected from the six-well plates, washed three times with PBS, and then incubated with SYTOX Blue nucleic acid stain (5 mM stock) at a 1:1000 dilution for 15 min. Cells were analyzed on the CytoFLEX system and its software as indicated above. For DNA damage and cell cycle assessment of BIA and cisplatin, G62 cells were plated at a density of 100,000 cells per well in a six-well plate and grown in complete neurobasal media. Cells were then treated with BIA and/or cisplatin or control for 72 hours. Next, cells were collected, washed three times with PBS, and stained with FITC-conjugated anti-γH2AX phospho (Ser139) antibody (1:500; BioLegend) and propidium iodide (1 mg/ml stock) at a 1:1000 dilution for 30 min in the dark. Cells were analyzed on a BD Fortessa cytometer, and data were analyzed using FlowJo software (BD Biosciences).

Cell counts

HCMEC/D3 cells were counted and plated at a density of 300,000 cells per well in a six-well plate. After overnight growth in culture conditions, DMSO or BIA was added at 1 or 5 μM. Cells were counted every 2 days, and media containing fresh BIA or DMSO was replaced for continuous growth.

Secretome quantification

For cytokine analysis of brain endothelial cells after BIA exposure, HCMEC/D3 cells were plated at a density of 500,000 cells per well in a six-well plate. Cells were treated with the indicated doses of BIA or DMSO for 48 hours. Then, 1 ml of media was collected and processed for cytokine quantification on a Luminex platform (Thermo Fisher Scientific) following the provider's instructions.

BIA and PPRX-1701 preparation

BIA powder stocks (MilliporeSigma) were resuspended in DMSO at a concentration of 10 mM (in vitro usage) or 100 mM (in vivo usage). For animal experiments, 100 mM BIA was diluted in 2% Tween 20 (Thermo Fisher Scientific) and 1% polyethylene glycol 400 (Thermo Fisher Scientific) in sterile PBS to achieve a concentration of 10 mM BIA. PPRX-1701 was prepared and provided by Cytodigm Inc. as previously reported ( ).

G9-TCF reporter assay

G9-TCF cells were engineered by overexpressing a luciferase gene (Luc2) controlled by a TCF7-recognized promoter in the G9-PCDH cell line. Cells were plated in 96-well dark-well, clear flat-bottom plates at a density of 1500 cells per well. The next day, cells were treated with increasing BIA doses for 5 hours and then exposed to d-luciferin (10 μg/ml; Goldbio). The luminescence signal was quantified using an in vivo imaging system (IVIS).

BIA quantification in vivo

G30-LRP cells were implanted in nude mice as previously indicated and left to grow for 14 days, after which mice were injected with BIA or PPRX-1701 [20 mg/kg, intraperitoneal (ip)]. After 1 hour in circulation, mice were euthanized and perfused, and tumor and brain tissue were harvested. Tissue was frozen at −80°C until processing. Quantification of BIA was performed using a Q-Exactive HFX Orbitrap mass spectrometer (liquid chromatography–high resolution mass spectrometry) (Thermo Fisher Scientific). Sample processing and analysis were performed as previously described ( ).

In vivo studies

For intracranial tumor implantation, GBM neurospheres were grown to 70% confluency before being dissociated into single cells on the day of surgery. Fifty thousand cells were resuspended in 3 μl of sterile PBS and injected intracranially into the striatum (2 mm into the right hemisphere, 1 mm frontal, 3 mm deep from bregma) of anesthetized, stereotactically fixed mice.
Tumors were left to grow for approximately 2 to 3 weeks, depending on the cell line, and animals were randomized to treatment groups. BIA injections consisted of 20 mg/kg (ip), unless indicated otherwise. PPRX-1701 was administered at 20 mg/kg [intravenously (iv)] via the lateral tail vein. Cisplatin injections were performed at 5 mg/kg (maximum tolerated dose, ip). All GBM tumor murine studies involved continuous condition and weight assessments, with the endpoint defined as 20% weight loss and/or moderate-to-severe grimace scale scores and neurological symptoms.

ICP-MS for platinum quantification

Mice treated with cisplatin after BIA administration were euthanized and intracardially perfused with PBS, and tissue was harvested and stored at −80°C. Tissue was processed, and platinum (Pt195) was quantified using an Agilent 7900 ICP-MS, as previously described ( ).

Sodium fluorescein BTB permeability studies

Sodium fluorescein was administered to G30 tumor–bearing mice intravenously via the lateral tail vein at 20 mg/kg. Then, 30 min after administration, when peak fluorescence is reached in the brain, mice were euthanized and intracardially perfused for at least 1 min using 1× PBS (Gibco), followed by immediate brain tissue harvest. Fresh brain samples were visualized in a Xenogen IVIS, and pixel intensities from the acquired images were quantified in ImageJ.
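As a side note on the weight-based dosing above, the per-animal dose is simple arithmetic; in the sketch below, the 25-g body weight is an assumed example rather than a study value:

```r
# Per-animal dose from a mg/kg regimen; the 25 g mouse weight is illustrative.
dose_mg <- function(dose_mg_per_kg, weight_g) dose_mg_per_kg * weight_g / 1000
dose_mg(20, 25)  # 20 mg/kg for a 25 g mouse -> 0.5 mg of BIA per injection
```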
The addition of
9e7a1ca3-c67f-48ab-8c7c-279a3f414a14
10994411
Microbiology[mh]
Kimchi is a representative lactic acid–fermented vegetable food of Korea. The main ingredient of kimchi is baechu (kimchi cabbage, Brassica rapa); red pepper powder, garlic, green onions, ginger, and various salted seafoods (jeotgal) are added to the salted baechu, which is then fermented for a certain period of time. Kimchi combines the various flavors of the organic acids, free amino acids, and other components produced during fermentation, giving it a unique taste. Among these constituents, free amino acids affect the quality of kimchi by influencing the growth of lactic acid bacteria as well as taste. The free amino acid content during fermentation is affected by the kind, amount, and combination of the raw materials, the aging temperature, and other factors. Among the auxiliary ingredients added to kimchi, seafood is added in coastal regions to enhance the taste of kimchi and to supplement nutrients such as amino acids. Seafood is decomposed by autolytic enzymes and, during kimchi fermentation, by kimchi microorganisms, giving off a savory taste and becoming a nutrient source for lactic acid bacteria. Seafoods studied so far as minor ingredients for kimchi include sea squirt (Styela clava), kwamegi (semi-dried Clupea pallassii or Cololabis saira), dried pollack (Gadus chalcogrammus) powder, octopus (Enteroctopus dofleini), squid (Decapodiformes), hairtail (Trichiurus lepturus), and croaker (Micropogonias undulatus), as well as other seafoods. The jogi (yellow croaker, Micropogonias undulatus) is a species of fish in the family Sciaenidae that occurs around Korea, China, and Japan. Jogi contains 76.3 g of moisture, 19.0 g of protein, 4.0 g of lipids, and 1.3 g of ash, along with small amounts of minerals, such as sodium, potassium, calcium, and phosphorus, and vitamins, such as vitamin A, thiamine, riboflavin, and niacin, per 100 g of edible part (The Korean Food Composition Table; http://koreanfood.rda.go.kr/eng/fctFoodSrchEng/main ). As such, jogi is a protein-rich food ingredient. In a previous study, we identified the effect of adding jogi to kimchi on microbial communities. Contrary to the expectation that protein-degrading microbial species would predominate due to the high protein content of jogi, the addition of jogi did not significantly affect the change in bacterial communities. Therefore, this study examined the physicochemical properties and quality of jogi-added kimchi over the fermentation period during storage, and sought to determine the correlation between the amino acids produced during aging and the bacterial community.

Kimchi

The kimchi that had been used for microbial community analysis was also used in this study. In brief, baechu-kimchi was home-made kimchi from Sangju, Gyeongsangbuk-do, and was used as the control group. Baechu-kimchi was made by adding carrot, chives, garlic, ginger, jeotgal, leaf mustard, and red pepper to salted baechu; jogi-baechu-kimchi, with jogi added at approximately 5% of the total volume of the control recipe, was used as the experimental group. Kimchi was immediately stored at 10 °C for 20 days, sampled every 5 days, and samples were stored in a deep freezer at −80 °C.
To reduce bias from the sampled portion of kimchi, all samples were ground before use in the experiments.

Color analysis

Kimchi samples for color difference analysis were ground with a blender, and 1 g was taken to measure lightness (L), redness (a), and yellowness (b), the components of the Hunter color system, from which ΔE values were derived. Chromaticity was measured with a colorimeter (Chroma Meter CR-400, Konica Minolta, Tokyo, Japan) calibrated with a standard white plate (L = 93.28, a = −0.58, and b = 3.50). All experiments were conducted in triplicate.

Reduction sugar analysis

For measurement of reducing sugar, the kimchi sample was homogenized and filtered through sterilized cheesecloth, and the filtrate was used. Reducing sugars were measured by a colorimetric method using dinitrosalicylic acid (DNS). Three milliliters of DNS reagent was added to 1 mL of a 50-fold diluted filtrate, stirred well, reacted in boiling water for 5 min, and cooled; absorbance was then measured at 550 nm by UV/VIS spectrophotometry (Optizen POP, Mecasys Co., Ltd, Daejeon, Republic of Korea) and converted into glucose equivalents. All experiments were conducted in triplicate.

Organic acid analysis

Ground samples (5 g) were mixed with 45 mL of sterilized water. The mixture was sonicated for 10 min and then filtered through a 0.2 μm syringe filter (Sartorius Stedim Biotech GmbH, Göttingen, Germany). The filtrate was analyzed by a custom service provided by the National Instrumentation Center for Environmental Management in Korea ( http://nicem.snu.ac.kr/ ). The analyses were performed using the Dionex Ultimate 3000 HPLC system (Thermo Scientific, Waltham, MA, USA). Chromatographic separation was conducted with an Aminex 87H column (300 mm × 10 mm; Bio-Rad, Hercules, CA, USA). Gradient elution was carried out with 0.01 N H2SO4. Injection volumes were 10 μL, and the column temperature and flow rate were 40 °C and 0.5 mL/min, respectively. Detection was performed using a refractive index (RI) detector (RefractoMAX520, ERC Inc., Saitama, Japan) at 210 nm. The organic acid contents of two samples prepared under the same conditions were analyzed in triplicate.

Analysis of free amino acids

The same filtrate that was used for the analysis of organic acids was analyzed using a JASCO LC-2000 Plus Series HPLC system (JASCO, Tokyo, Japan). Chromatographic separation was conducted with an AccQ-Tag Amino Acids C18 column (3.9 mm × 150 mm, 4 μm; Waters Corporation, Milford, MA, USA). Gradient elution was carried out with AccQ-Tag/water (solvent A; 10:90, v/v) and acetonitrile/water (solvent B; 60:40, v/v). The following binary mobile phase linear gradients were used: 100% A at 0 min, 98% A at 5 min, 93% A at 15 min, 90% A at 19 min, 67% A at 32–33 min, 0% A at 34–37 min, and 100% A at 38 min. Injection volumes were 10 μL, and the column temperature and flow rate were 37 °C and 1 mL/min, respectively. Detection was performed using a fluorescence detector. The AccQ-Fluor Reagent Kit (WAT052880, Waters Corporation, Milford, MA, USA) was used as the derivatizing agent, according to the manufacturer's instructions. The photodiode array (PDA) detector (MD-2018 Plus, JASCO, Tokyo, Japan) wavelength was set to 254 nm for determination of the AccQ-Fluor Reagent–derivatized amino acids. The concentrations of individual free amino acids were determined using five-point calibration curves of the amino acid standard (WAT088122, Waters Corporation, Milford, MA, USA). The free amino acid contents of two samples prepared under the same conditions were analyzed twice.
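For reference, the ΔE values mentioned above presumably follow the standard Hunter color-difference formula computed against reference values L0, a0, and b0 (our assumption of the colorimeter's built-in definition):

\[
\Delta E = \sqrt{(L - L_0)^2 + (a - a_0)^2 + (b - b_0)^2}
\]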
Biographical correlation analysis between bacterial communities and free amino acids

Bacterial communities at the species level in the two types of kimchi were analyzed for correlation with the free amino acids using the Pearson correlation coefficient. Correlation analysis was performed using the Paleontological Statistics software package (PAST) version 4.13.

Comparative genomics

Comparative genomic analysis was performed to explain, via genomic insights, the production of the amino acids detected in kimchi. Ten species assumed to be correlated with amino acid production were selected. Genome information was prioritized for type strains registered with complete genomes, then for non-type strains registered with complete genomes, and finally for draft genomes. The genomic information for the 10 species was retrieved from the National Center for Biotechnology Information database (NCBI; http://ncbi.nlm.nih.gov/genomes ): Leuconostoc (Leu.) mesenteroides ATCC 8293 T (GenBank accession no. CP000414), Leu. lactis DMLL10 (GenBank accession no. CP116456), Bacillus (B.) subtilis NCIB 3610 T (GenBank accession no. CP020102), B. velezensis DMB06 (GenBank accession no. CP083763), B. siamensis B28 (GenBank accession no. CP066219), B. amyloliquefaciens DSM 7 T (GenBank accession no. FN597644), B. aerophilus KJ82 (GenBank accession no. CP091093), B. paralicheniformis 14DA11 (GenBank accession no. CP023168), Klebsiella (K.) grimontii SS141 (GenBank accession no. CP044527), and Mammaliicoccus (M.) sciuri NCTC12103 T (GenBank accession no. LS483305). Amino acid biosynthetic pathways were predicted using the Rapid Annotations using Subsystems Technology (RAST) server for SEED-based automated annotation, then confirmed using the iPath (ver. 3) module and CLgenomics ver. 1.55 software.

Statistical analysis

Principal component analysis (PCA) was performed on the contents of the amino acids detected in both kimchi samples to visually differentiate each kimchi sample during fermentation, using SPSS software v.27 (SPSS Inc., Chicago, IL, USA).

Effects of jogi addition on color value

The effect of jogi addition on the color of kimchi was examined. In the case of BK, without jogi, the L, a, and b values were 37.50 ± 0.22, 15.31 ± 0.18, and 22.78 ± 0.24, respectively. In the case of JBK, with jogi, the L, a, and b values were 38.77 ± 0.17, 16.21 ± 0.10, and 22.80 ± 0.19, respectively, similar to BK. During fermentation, the L and b values decreased slightly, while in BK the a value increased slightly; in JBK, however, the L, a, and b values all increased. We initially assumed that this increase in L, a, and b values was due to jogi addition, but in kimchi with Alaska pollack the a and b values decreased, so we suggest this is a case-by-case change rather than a change due to seafood addition. The a/b ratio, the ratio of red to yellow, increased by the 20th day of fermentation from 0.67 to 0.72 in BK and from 0.71 to 0.74 in JBK. The overall change in color (ΔE) tended to increase with storage period, but all groups remained in the numerical range 0–2.62, a change difficult to distinguish with the naked eye.
As the kimchi fermented, all the components, such as kimchi cabbage, seasoning, other ingredients, and jogi, mixed and aged together, and the light yellowish-white color of the jogi itself affected the L, a, and b values of the kimchi, but only weakly, so there was no significant difference in color overall.

Effects of jogi addition on reducing sugar production

In kimchi fermentation, reducing sugar is used as a carbon source by microorganisms and is decomposed into organic acids and CO2; thus, as fermentation proceeds, the reducing sugar content decreases. Because this decomposition of reducing sugar gives kimchi its unique taste and flavor, the degree of kimchi aging, the growth of microorganisms, and changes in flavor are also evaluated by examining the changes in reducing sugar. To confirm the effect of jogi addition on the change in reducing sugar, the reducing sugar was analyzed. In BK, the reducing sugar content increased on the 5th day of fermentation but decreased as fermentation progressed. In JBK, it likewise decreased as fermentation progressed, and the reducing sugar content on the 20th day of fermentation was lower than in BK without jogi. A literature survey to check whether this was a result of jogi addition showed that kimchi with pollack added had a higher reducing sugar content than kimchi without pollack. However, kimchi with octopus or squid was found to have lower reducing sugar levels than kimchi without octopus or squid. Since it is difficult to affirm whether adding seafood increases or decreases reducing sugar, we assume the outcome differs case by case. As fermentation progresses, however, reducing sugar clearly decreases.

Effects of jogi addition on organic acid production

Five organic acids and ethanol were identified in the two types of kimchi during fermentation. Citric acid was a major organic acid in the early stages of fermentation but tended to decrease gradually as fermentation progressed. As fermentation progressed, acetic acid, ethanol, and lactic acid increased significantly, with lactic acid increasing the most. In comparison, the contents of formic acid and fumaric acid were not high during the fermentation period, and formic acid showed no pattern of increase or decrease. Accordingly, we assume that these two organic acids did not have a significant effect on kimchi fermentation. To determine the effect of jogi addition, we compared the changes in organic acids between baechu-kimchi and jogi-baechu-kimchi, but the effect on organic acid production was not significant. Kimchi is a representative lactic acid–fermented food, and our results show that the amount of lactic acid increased over time, indicating that fermentation proceeded well. Interestingly, the sum of the acetic acid and ethanol contents was approximately the same as the lactic acid content. Ethanol and lactic acid are produced together from one molecule of glucose via the heterofermentative (phosphoketolase) pathway under anaerobic conditions. On the other hand, the heterofermentative pathway is known to produce lactic acid and acetic acid from glucose in the presence of oxygen. Kimchi is not a perfectly controlled anaerobic fermentation.
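The textbook stoichiometry behind this argument can be written as follows; these are the standard fermentation equations rather than measurements from this study:

\[
\text{Homolactic: } \mathrm{C_6H_{12}O_6 \longrightarrow 2\,CH_3CH(OH)COOH}
\]
\[
\text{Heterolactic (anaerobic): } \mathrm{C_6H_{12}O_6 \longrightarrow CH_3CH(OH)COOH + C_2H_5OH + CO_2}
\]

Under aerobic conditions, heterofermenters can reoxidize NADH via oxygen and release acetic acid in place of ethanol, which is the pathway logic invoked here.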
Therefore, based on these results, we estimated that heterofermentative LAB produce acetic acid when in contact with oxygen and ethanol in its absence. To confirm this, the kimchi used in this experiment was compared with the bacterial community results reported in previous studies. The bacterial communities appeared to be related to the organic acids: acetic acid, EtOH, and lactic acid increased from day 10 of fermentation, while Leu. mesenteroides, a heterofermentative LAB, increased from day 10 and day 5 of fermentation in the NGS and culture-dependent results, respectively. These results suggest that acetic acid, EtOH, and lactic acid were produced under the influence of Leu. mesenteroides. However, in the culture-independent method, Lactobacillus sakei, a homofermentative LAB, was detected more predominantly than Leu. mesenteroides. Lb. sakei produces only two molecules of lactic acid from one molecule of glucose, so if it dominated, the lactic acid content should have been higher than the sum of acetic acid and ethanol. Based on the organic acid contents, we assumed that Leu. mesenteroides, not Lb. sakei, could have influenced organic acid production during kimchi fermentation. It is known that microbial communities differ depending on whether culture-dependent or culture-independent methods are used, but considering the relationship with the fermentation products, we judged that the culture-dependent results were the better correlated.

Effects of jogi addition on amino acid production

The effect of jogi addition on the microbial community was weak, and there was little effect on reducing sugar, organic acids, and color (Figs and , ). In general, kimchi with seafood, such as oysters, is delicious right after being made, and rapid consumption is recommended; above all, it has been reported that the texture is related to the rich taste immediately after kimchi is made. The amino acids that contribute to umami flavor are glutamic acid and aspartic acid, so we examined the free amino acid profiles with particular attention to these two. In BK, the amino acids that increased during fermentation were alanine, arginine, leucine, lysine, phenylalanine, proline, tyrosine, and valine; the decreasing amino acids were aspartic acid, cysteine, glutamic acid, histidine, isoleucine, methionine, and threonine. JBK, with jogi, showed similar results. Interestingly, in the early stages of fermentation, the contents of glutamic acid and aspartic acid, the umami components, were higher in JBK than in BK. On day 0, aspartic acid in JBK was three times that in BK, while on day 15 of fermentation it was about 1.5 times higher. On day 0, glutamic acid was 1.2 times higher than in BK, while by day 15 of fermentation the levels tended to become similar. In addition, throughout the fermentation, alanine, arginine, glycine, isoleucine, leucine, lysine, methionine, phenylalanine, tyrosine, and valine showed higher levels in JBK than in BK. In contrast, histidine, serine, and threonine were higher in BK, particularly in the early stages of fermentation. This indicates that jogi addition is related to the amino acid content rather than to color, reducing sugar, or organic acids.
However, additional experiments are needed to verify whether effective amino acids are produced by the addition of seafood such as jogi in general, or whether these amino acids were produced only in this particular fermentation. PCA was performed in the SPSS statistical program using the measured amino acid contents. In the PCA factor loading plot, aspartic acid, cysteine, histidine, threonine, proline, and valine were located in the negative direction along PC1, while all other amino acids were located in the positive direction. In the PCA score plot, the kimchi samples separated by fermentation day. On day 0 of fermentation, both JBK and BK were located in the negative direction along PC1. As fermentation progressed, however, both JBK and BK migrated in the positive direction along PC1. In particular, on day 20, JBK was positive along PC1 and lay in the first quadrant (positive along PC2 as well), and its rich amino acid content can be visually inferred from this position. As can be seen in , on day 20 the contents of 11 of the 17 amino acids were high, the exceptions being aspartic acid, cysteine, histidine, threonine, proline, and valine. It was also visually confirmed that the aspartic acid and glutamic acid contents of JBK were higher than those of BK. These results suggest that jogi addition has a greater impact on amino acid content than on color, reducing sugar, organic acids, or the microbial community. In particular, the results clearly show a significant difference in the content of savory components, such as glutamic acid and aspartic acid, in the early stages of fermentation.

Correlation of amino acids and microbial communities

In summary, based on the results so far, jogi addition affects the amino acid content from the beginning of fermentation. In addition, although there was no significant difference in organic acid production, we judged that the bacterial community derived by the culture-dependent method correlated better with organic acid production than the culture-independent results. Therefore, we analyzed the correlation between the culture-dependent bacterial community and the amino acids and visualized it as a heat map. Most species exhibited some correlation with amino acids. However, 6 of the 17 amino acids, namely alanine, glycine, isoleucine, phenylalanine, tyrosine, and valine, were not correlated with any microorganisms. Leu. mesenteroides, which accounted for 53% of the total microbial flora, showed a positive correlation with arginine and proline. B. subtilis, which accounted for 17.8%, showed a positive correlation with cysteine, glutamic acid, histidine, and threonine. Unusually, the genus Weissella showed no correlation with amino acid production. Jogi addition produced differences in aspartic acid and glutamic acid content in the early fermentation period, and the microorganisms positively correlated with these amino acids were B. amyloliquefaciens, B. subtilis, B. velezensis, B. aerophilus, and K. grimontii, all of which are species that dominate at the beginning of fermentation and then decrease. Interestingly, genome analysis showed that the Bacillus species encode the biosynthetic genes for the amino acids with which they are positively correlated, whereas LAB, including Leu. mesenteroides, did not encode these synthetic genes.
This mainly suggests that, during fermentation, amino acids were liberated from proteins by proteolytic enzymes. Bacillus species and K. grimontii, which were positively correlated with a number of amino acids, possessed more protease (protein-degrading) genes than the genomes of Leu. mesenteroides and Leu. lactis. We therefore assume that the effect on amino acid content differs depending on the enzymes produced by each microorganism. Consequently, these results suggest that the amino acid contents were affected by bacterial biosynthesis and that protease activity during fermentation could also have had an effect. We expected that the addition of jogi to kimchi would increase the protein content and the abundance of protein-degrading bacteria, but contrary to our expectations, previous studies showed little difference in microbial flora from the control groups without jogi addition. In the current experiments, examining the physicochemical changes brought about by jogi addition, we did not observe dramatic changes in color, reducing sugar, or organic acid composition (Figs and , ). In the case of amino acids, however, aspartic acid and glutamic acid, which are related to umami, showed a significant difference in the jogi-added group at the beginning of fermentation, while on the 20th day of fermentation most amino acids, including these two, were higher than in the control group. Although this paper could not confirm that seafood addition produces outstanding nutritional or sensory differences in the early stages of fermentation, word of mouth and common knowledge recommend consuming seafood-added kimchi as quickly as possible, as kimchi is delicious immediately after manufacture. This general knowledge is probably due to the significant difference in taste when fish is added at the beginning of fermentation. It is not just the difference in taste that encourages consumption within a short period; there may also be concerns that degradative enzymes from the fish cause rapid fermentation and shorten the shelf life. The results of this experiment are meaningful insofar as they scientifically demonstrate that jogi addition changes the umami composition immediately after kimchi preparation. However, these results are based on homemade kimchi and on experiments conducted within 20 days of production. Therefore, future experiments should explore variations in factors such as the amount of jogi, extended fermentation periods, and fermentation temperatures.

S1 Fig
Relative abundance of the four most prevalent bacterial species in the microbiome of baechu-kimchi and jogi-baechu-kimchi, analyzed by (A) culture-independent and (B) culture-dependent methods. This analysis is based on the results of a previously published paper.
(DOCX)

S2 Fig
Predicted amino acid biosynthetic pathways based on the genomes of the 10 species. The names of enzyme-encoding genes and amino acids are depicted in green and in white on orange, respectively. Black arrows correspond to potential enzymatic reactions catalyzed by gene products. When a strain possesses a gene, the color appropriate to that strain is shown in the box next to the gene name.
(DOCX)

S1 Table
Putative protease genes identified in the genomes of the 10 selected strains.
The gene is denoted by the locus number of the genome. The Enzyme Commission (EC) number is a numerical classification scheme for enzymes, based on the chemical reactions they catalyze. Abbreviation: -, not identified or determined. (XLSX)
Pharmacogenetic allele variant frequencies: An analysis of the VA’s Million Veteran Program (MVP) as a representation of the diversity in US population
288adbd3-2744-4c7e-af5f-24fe30e14daf
9956596
Pharmacology[mh]
Genetic polymorphisms of metabolic pathways and cytochrome P450 (CYP) genes alter the pharmacokinetics and pharmacodynamics of the absorption, distribution, metabolism and excretion (ADME) of drugs and toxic compounds (xenobiotics). A better understanding of the interindividual variation in this genetic makeup is necessary to understand how efficiently a xenobiotic is metabolized. In general, heritable selective pressure is a major determinant of variant frequency among the different ethnic populations, with two or more variants typically identified in most metabolic pathway genes. Common star allelic variants (referred to herein as "variants") prescribe a "normal" metabolic cycle, while others convey a heightened or depressed metabolic cycle. Much of this is well catalogued in a variety of collections, including PharmGKB ( https://www.pharmgkb.org/ ) , with clinically actionable variant-vs-drug combinations presented by the Clinical Pharmacogenetics Implementation Consortium (CPIC, https://cpicpgx.org/ ) and the Dutch Pharmacogenetics Working Group (DPWG, http://upgx.eu/guidelines ) . While these variants are catalogued in a multitude of databases, it is important to recognize that many of them rely heavily upon data derived from unique ethnic populations. Ethnic population data are typically derived from a limited collection of self-identified subjects and the unique variants associated with that ethnicity. Moreover, certain variants designated as normal are unique to a select ethnicity and not represented among others, as is known for CYP2D6 and codeine, or CYP3A5 and tacrolimus . Lastly, while certain populations are considered relatively homogeneous over several generations as a matter of culture, not all data reflect this consideration, which further contributes to the diversity of drug responses. The distribution of inherited xenobiotic-metabolizing alleles differs considerably between populations and appears to be stable in frequency among ethnically stable populations. While there are several large-scale data sets that can provide variant frequencies of pharmacogenomic genes for researchers and clinicians to use, most of these data are not representative of the "melting pot" of genetic ancestries present within the United States (US). While one could rely on self-reported ethnicity to improve variant frequency estimates among unique populations, such data are imperfect . Further complicating possible variant predictions is the fact that nearly everyone carries at least one pharmacogenomic variant allele, with as many as 3% carrying 5 allelic variants . These findings limit the overarching use of ethnically derived variant frequencies in diverse populations such as that of the US, because the available data are limited to a specific self-reported ethnicity and/or do not consider variants that could be present among other ethnicities. This is a particularly important consideration for research on personalized drug therapy, and it potentially changes the healthcare guidelines provided by groups such as CPIC and DPWG, which select important alleles for clinical genotyping based in part on population prevalence.
To address these issues with the use of pharmacogenomic variants, and to further explore potential pharmacogenetic variant markers, we used the Million Veteran Program (MVP), with >800,000 participants, to generate a coherent representation of the allelic frequencies present within a US population. The MVP cohort is mostly male but is very diverse and broadly represents US population ancestry. The genotype data were imputed using the African Genome Resources (AGR) and 1000 Genomes imputation reference panels. Our analysis is based on Release 4 of the MVP data, with 658,582 individuals genotyped with the MVP-1 Axiom array . Participants were assigned ancestry based on the HARE algorithm (Harmonized Ancestry and Race/Ethnicity) . The MVP cohort is diverse, with ~30% of the cohort assigned as non-European (EUR 467k, AFR 125k, HIS 52k, ASN 8k). A small fraction of the cohort was highly admixed, was not assigned to any of the four major ancestries, and is not included in this analysis (<2%). Our aim is to provide an ancestry-specific variant frequency catalog for a significant fraction of pharmacogenomics-relevant variants in a large cohort representative of the US population. We examined single nucleotide variants (SNV), an important pharmacogenetics-relevant copy number variant (CNV), as well as human leukocyte antigen (HLA) 4-digit alleles. We defined our pharmacogenomics gene set by combining information from two publicly available databases, PharmGKB and PharmVar . Overall, we were able to determine SNV frequencies for 273/1339 targeted SNVs, in 148/152 targeted genes. Details on variant selection can be found in Methods. A comprehensive table of all allele frequencies is provided in S1 File. As expected, SNV allele frequencies vary substantially among HARE groups. In addition to HARE allele frequencies, we used Local Ancestry Inference (LAI) to identify the ancestral origin of individual chromosomal segments and to compute allele frequencies based on local ancestry. We "painted" the African American samples (125k individuals) using two-way deconvolution, extracting allele frequencies for the AFR and EUR tracks. For the Hispanic individuals (52k) we used three-way deconvolution to compute allele frequencies for the EUR, AFR and Native American (AMR) tracks. Details on the LAI will be presented elsewhere. We present allele frequencies derived from HARE groups and LAI, as well as from two publicly available databases, 1000 Genomes and gnomAD . The most striking differences are observed for Hispanics, a group that is extremely heterogeneous and not well defined in the genetics literature. Allele frequencies for the three major MVP HARE groups (EUR, AFR, HIS) are in good agreement with gnomAD-derived estimates. However, comparison of gnomAD HIS frequencies with the LAI AMR track of the MVP HIS population reveals significant differences . Here we note that the LAI AMR track provides much better allele frequency ascertainment than the 60 AMR genomes that were used to anchor the local ancestry deconvolution. In the MVP HARE HIS population (52k individuals) the AMR track contributes ~30% of the genome, resulting in an effective population size of ~15k individuals. Furthermore, while the 60 AMR genomes we used to anchor ancestry deconvolution provide sufficient multi-locus information to resolve local ancestry, the AMR track is a much better sampling of the AMR genome as it exists today in the US population.
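To make the notion of track-based frequencies concrete, the following is a minimal sketch, with hypothetical inputs, of how an allele frequency conditioned on local ancestry can be computed from "painted" haplotypes; it is an illustration of the idea, not the MVP production pipeline.

```python
# A minimal sketch, assuming hypothetical inputs, of an allele frequency
# conditioned on local ancestry: each haplotype contributes to the track
# it was "painted" as, regardless of the carrier's HARE group.
from collections import Counter

def track_allele_frequencies(haplotypes):
    """haplotypes: iterable of (ancestry_label, allele) pairs for one site,
    where allele is 0 (reference) or 1 (alternate)."""
    alt = Counter()
    total = Counter()
    for ancestry, allele in haplotypes:
        total[ancestry] += 1
        alt[ancestry] += allele
    return {anc: alt[anc] / total[anc] for anc in total}

# Toy data: three HIS individuals contribute six haplotypes, each segment
# assigned to the EUR, AFR or AMR track by the deconvolution.
site = [("AMR", 1), ("EUR", 0), ("AMR", 1), ("AFR", 0), ("EUR", 0), ("AMR", 0)]
print(track_allele_frequencies(site))
# {'AMR': 0.666..., 'EUR': 0.0, 'AFR': 0.0}
```

Note how the AMR-track frequency can differ sharply from the frequency computed over the HIS group as a whole, which is the effect discussed above for rs2242480 and rs776746.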
Sirolimus is a widely used immunosuppressant, and the variant controlling its metabolism, rs2242480 (allele CYP3A4*1G ) , varies among populations. We observe widely different allele frequencies in the three major groups (EUR, AFR, HIS) for gnomAD (0.09, 0.74, 0.37) and MVP (0.09, 0.73, 0.35). However, the local-ancestry-derived AMR allele frequency (0.67) is almost twice as high as the HIS allele frequency. The same observation applies to tacrolimus, another significant immunosuppressant. The controlling variant, rs776746 ( CYP3A5*3 ), shows large variation in the major MVP groups (0.07, 0.70, 0.21), and there is a significant difference between the HIS and the local-ancestry-derived AMR allele frequency (0.31). Thus, the recent demographic history of individuals, and the fraction of inheritance derived from different major population groups, has a large impact on the allele frequency distribution of pharmacogenomics-relevant variants. CYP2D6 is an important component of cytochrome P450 and is involved in the metabolism of many commonly prescribed medications, including antidepressants, antipsychotics, beta-blockers, opioids, antiemetics, atomoxetine, and tamoxifen . In addition to SNV frequencies for the most significant variants, we present allele frequencies for an important copy number variant, the whole-gene deletion designated as CYP2D6*5 in the pharmacogenomics literature. We called the CYP2D6*5 CNV using UMAP, a machine learning algorithm . Assignments are clearly separated for copy gain and copy loss. Furthermore, we can clearly separate single and double copy loss. The UMAP approach offers a clear advantage over classification based on Principal Components Analysis (PCA). We note that UMAP does not represent a general approach to copy number variation detection: hyper-parameters for the model are tuned for specific, relatively common CNVs, and we achieve optimal performance only when we tune the model separately for individual HARE ancestries. CYP2D6*5 allele frequencies for the three major HARE groups are also presented. The major survey of CYP2D6*5 finds slightly different allele frequencies, e.g., 89% in Beoris et al. vs 78% in MVP for copy number 2 in EUR. Our findings are closer to the frequencies reported by gnomAD (80%; gnomAD SVs v2.1, MCNV_22_1026, broadinstitute.org) . The differences might be due to the different assays (single-site PCR vs SNP genotyping in MVP, or sequencing in gnomAD), or to differences in the ascertainment of ethnic background. We are not able to run the UMAP algorithm on phased chromosomes, but we can use ancestry deconvolution and test CNV status in individuals who are ancestry-homozygous at CYP2D6, e.g., AMR/AMR individuals in the HARE HIS group. The ancestral AMR genome harbors fewer single-copy samples, while the EUR tracks are in close agreement with observations in the EUR HARE cohort. In addition to SNVs and specific CNVs, we derived HLA alleles from SNP genotypes using the HIBAG algorithm (HLA Imputation using attribute BAGging). Although HLA status does not modify pharmacokinetics, there are well-established adverse drug reactions in the presence of specific HLA alleles.
For example, abacavir, a common antiretroviral, causes abacavir hypersensitivity syndrome in the presence of HLA-B*5701 ; allopurinol is typically a safe drug for the treatment of gout, but in the presence of HLA-B*5801 it is associated with an increased risk of allopurinol-induced severe cutaneous adverse drug reaction (SCAR), with the most serious cases developing Stevens-Johnson syndrome and toxic epidermal necrolysis (SJS/TEN) . The HLA 4-digit Class I and Class II allele distribution for the four HARE groups is also presented. As expected, allele frequencies are highly variable in the four groups, including for the three alleles most relevant for pharmacogenomics: HLA-A*3101 , HLA-B*5701 and HLA-B*5801 . Details of the HLA allele imputation will be presented elsewhere. Here, we note that HLA imputation precision was >90% for the HARE EUR, AFR and HIS groups. However, we currently observe lower precision for the ASN predictions due to the lack of an appropriate training set. We present a survey of pharmacogenetics-relevant variants in the MVP, a sample representative of the US population. Using the MVP-1 Axiom array we can resolve a large fraction of known pharmacogenomics alleles, either as direct or as imputed genotypes. In addition, we use the genotypes to derive population allele frequencies for an important common CNV, the whole-gene deletion/duplication of CYP2D6 , as well as the population distribution of HLA alleles, including HLA alleles important for drug delivery decisions. As expected, there is substantial variation in allele frequencies between ancestry groups for a subset of the examined variants. In addition to allele frequencies of individual ancestry groups (HARE EUR, AFR, HIS, ASN), we use an innovative approach, LAI, to derive allele frequencies of the ancestral genomes present in recently admixed US populations: 2-way deconvolution for HARE AFR (EUR, AFR) and 3-way deconvolution for HARE HIS (EUR, AFR, AMR). LAI conveys important information on the allele frequency distribution in under-represented populations. Allele frequency is an important consideration in the formulation of clinical genotyping guidelines provided by groups such as CPIC and DPWG. The AMR track is a much better sampling of the AMR genome as it exists today in the US population compared to the small, and not necessarily representative, number of samples from AMR populations (60 vs 15,000 effective genomes). Sites with significantly different allele frequencies in the AMR and EUR/AFR tracks are sites where self-identification as HIS provides limited power to guess the likelihood of drug sensitivity. Thus, they are sites where groups such as CPIC and DPWG should rely on the LAI minimum allele frequency rather than the ethnic group allele frequency for recommendations. The large sample size we use for our analysis is particularly important for low-frequency variants. For example, single- and double-copy deletions of CYP2D6 are relatively rare. Therefore, it is inappropriate to derive their frequencies from a reference panel, even under the assumption that the reference panel is representative of the general US population. There are limitations to our derived population allele frequencies. While SNVs are phased, neither CNV nor HLA calls are phased genotypes. Furthermore, successful phasing in the overall genome does not guarantee successful phasing in complex genomic regions such as CYP2D6 .
For CYP2D6 in particular, we have been able to resolve whole-gene deletions and duplications, but we are certain that there is additional small-scale copy number variation that cannot be resolved by our UMAP machine learning approach, for example small deletions and complex rearrangements involving the proximal CYP2D7 and CYP2D8 pseudogenes. It is likely that such complex variation makes a minor contribution to the population distribution of CYP2D6 pharmacogenetics. However, resolution of the population-level frequency of such variants will require specialized assays such as long-range sequencing. Improving phasing and imputation will aid the eventual derivation of star alleles in these regions. We think that this comprehensive allele frequency report, in a population representative of US genome diversity, will become a useful reference for future guidelines on the relative importance of alleles worth ascertaining in pharmacogenetics screens. The high variance of allele frequency, not only among ethnic groups but, most importantly, among the ancestral genomes contributing to mixed-ancestry individuals in the US population, further underscores the need for individual typing rather than reliance on self-reported ethnicity in drug delivery decisions in clinical practice. We hope this manuscript promotes the adoption of personalized medicine in under-represented populations.

Ethics statement
The Veterans Affairs (VA) central institutional review board (cIRB) and site-specific IRBs approved the Million Veteran Program study.

MVP genotype data
The MVP Release 4 dataset includes 658,582 individuals and consists of a hard-called dataset of 667,955 variants prepared as described in Hunter-Zinck et al. 2020 , as well as an imputed dataset. Genotype calls passing initial quality control were further prepared for phasing and imputation by removing markers with high missingness (>20%), monomorphic markers, and markers significantly out of Hardy-Weinberg equilibrium (p < 1e-6, adjusted for ancestry). Haplotypes were then statistically phased using SHAPEIT v4.1.3 ( https://odelaneau.github.io/shapeit4/ ) and imputed into the African Genome Resources and 1000 Genomes imputation panels using Minimac4 ( https://genome.sph.umich.edu/wiki/Minimac4 ). Each individual in the cohort was assigned a HARE group (EUR, AFR, HIS, or ASN), a surrogate variable for ancestry and race/ethnicity (Fang et al. 2019). The MVP Release 4 cohort consists of 467,162 EUR, 124,756 AFR, 52,423 HIS, 8,364 ASN, and 5,877 unassigned individuals. All analysis was performed in GRCh37.

Identification of known pharmacogenetics variants
We curated a catalogue of known or high-confidence pharmacogenetics variants by rsID from the PharmGKB and PharmVar databases. From PharmGKB, we downloaded variant summary data ( https://api.pharmgkb.org/v1/download/file/data/variants.zip ) and kept only variants with at least one Level 1 or 2 PharmGKB clinical annotation. From PharmVar, we downloaded the complete database (version 4.2.4) and kept all variants. In total, we identified 1,339 unique variants from 152 genes.

Identification of pharmacogenetics variants in the MVP genotype dataset
Genotyped dataset: We selected the intersection of known pharmacogenetics variants with the catalog of SNPs in the MVP array. We identified pharmacogenetics variants by chromosome location and rsID .
Imputed dataset: Imputation was performed using MINIMAC. We kept only variants with imputation R2 > 0.9 within the ethnic group; a sketch of this selection step is given below.
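As a minimal illustration of the variant selection just described, here is a pandas sketch; the file and column names are assumptions for illustration, not the actual MVP file layout, and "R2 in every group" is one plausible reading of the per-group criterion.

```python
# A sketch of the variant selection just described, using pandas.
# File and column names ("rsid", "r2_EUR", ...) are assumed, not the
# actual MVP file layout.
import pandas as pd

pgx = pd.read_csv("pgx_variants.tsv", sep="\t")        # PharmGKB/PharmVar union
info = pd.read_csv("imputation_info.tsv", sep="\t")    # per-variant imputation R2

# Restrict the imputed variants to the known pharmacogenetics catalogue.
candidates = info[info["rsid"].isin(pgx["rsid"])]

r2_cols = ["r2_EUR", "r2_AFR", "r2_HIS", "r2_ASN"]

# Stricter reading: keep a variant only where R2 > 0.9 in every HARE group.
strict = candidates[(candidates[r2_cols] > 0.9).all(axis=1)]

# Relaxed rule (expands the set, cf. 273 vs 408 sites): R2 > 0.9 in at
# least one of the four groups.
relaxed = candidates[(candidates[r2_cols] > 0.9).any(axis=1)]
print(len(strict), len(relaxed))
```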
We assigned rsIDs to imputed variants by intersecting variant genomic positions with rsID genomic positions in NCBI dbSNP (v154) using bedtools. We then identified pharmacogenetics variants by overlapping imputed variant rsIDs. In total, we find 193 pharmacogenetics variants from 136 genes in the genotyped dataset. Including the imputed variants, we expand the set to 273 variants in 148 genes. If we relax the selection criteria to include all imputed variants that satisfy imputation R2 > 0.9 in any one of the 4 HARE groups (EUR, AFR, HIS, ASN), we expand the set to 408 variants.

Allele frequency analyses
Calculation of minor allele frequencies: HARE group-specific minor allele frequencies (MAFs) were calculated for the MVP hard-called and imputed datasets using PLINK2.
Local Ancestry Inference (LAI) based allele frequencies: Briefly, we performed LAI using rfmix2. We used 3,942 reference samples for EUR, AFR and Native American (AMR) ancestry collected by the 1000 Genomes project and the Human Genome Diversity Project (HGDP). The reference VCF files were curated by the gnomAD team ( https://gnomad.broadinstitute.org/downloads#v3-hgdp-1kg ). We used the local ancestry output to create separate, ancestry-specific VCF output files: two files for the HARE AFR sample (EUR-AFR) and three files for the HARE Hispanic sample (EUR-AFR-NAT). The allele frequency extraction procedure was the same for the LAI and gnomAD samples, described below.
GnomAD allele frequencies: Population-specific frequencies were extracted as follows. LAI and gnomAD (v2.1.1, genomes only, not exomes) frequencies were stored in the INFO fields of VCF files. AFR and HIS LAI frequencies were stored in separate files. gnomAD frequencies were stored in population-specific INFO fields (AF_nfe, AF_afr, AF_amr for non-Finnish Europeans, African/African Americans, and Latino/Admixed Americans, respectively). Using bcftools 1.10, VCF files were first filtered to the relevant SNPs (bcftools view --include 'ID=@<file of rsIDs>' <VCF file>), and frequencies were then extracted from the relevant INFO fields (e.g., bcftools query -f '%ID\t%INFO/AF_nfe\n').
1000 genomes allele frequencies: 1000 Genomes population-specific MAFs were extracted from 1000 Genomes Phase 3 VCFs.

Analysis and visualization
Visualization of MAFs, and calculation and visualization of MAF differences between MVP and 1000 Genomes, was performed using R.

HLA type predictions
4-digit HLA type predictions were generated for HLA-A and HLA-B from hard-called genotype data using HIBAG . We chose the pre-fit Affymetrix Axiom UK Biobank Array 4-digit resolution model ( https://hibag.s3.amazonaws.com/hlares_index.html ), as the MVP genotyping array covers >95% of this model's training variants for both loci. We used the European model for individuals assigned to HARE group EUR and the multi-ethnic model for individuals assigned to HARE groups AFR, HIS, and ASN. Predictions were generated by calling the predict function from HIBAG. Frequencies were calculated for each 4-digit allele by HARE group.

S1 Fig Allele frequency comparisons between gnomAD and MVP HARE groups for three groups (EUR, AFR, HIS). For all three, the correlation with gnomAD allele frequencies is high (R2 > 0.99). In the lower right we compare allele frequencies for gnomAD HIS and Local Ancestry Inference (LAI) derived allele frequencies for the AMR track of the HARE HIS group (R2 = 0.91). We use three-way local ancestry deconvolution (EUR, AFR, AMR). (TIF)
S2 Fig Copy number variation in CYP2D6 using two computational approaches. Results are shown just for the HARE AFR cohort; clusters were derived using (a) Principal Components Analysis (PCA) and (b) UMAP. UMAP significantly reduces assignment ambiguity. (TIF)
S1 File Allele frequency table for HARE groups and LAI tracks. In addition to allele frequencies, we provide imputation quality information per site and ethnicity/LAI track (imputation R2). In the same table we attach the PharmGKB annotation per site, where available. The table includes all sites with imputation R2 > 0.9 in ANY of the four HARE groups (EUR, AFR, HIS, ASN), for a total of 408 sites. Four sites are multiallelic, thus the table has 412 rows. (CSV)
Emma Wilson Mooers (1858–1911): The Neuropathologist at Aloys Alzheimer's Side
0c4738a0-22b3-4fca-8f41-c070d2f5a5b4
7704491
Pathology[mh]
The best-known photograph from his Munich laboratory shows Aloys Alzheimer himself together with the young international elite of brain research, namely Nicolas Achúcarro, Rudolf Allers, Francesco Bonfiglio, Ugo Cerletti, Friedrich Heinrich Lewy, Fritz Lotmar, Gaetano Perusini, Stefan Rosenthal and, at the front left, Adele Grombach (Fig.). But who is the lady in the dark dress, sitting confidently in the middle and surrounded by the gentlemen, who on the photograph is usually labeled with a question mark or as "unknown"? She was the co-director of the anatomical laboratory, Emma Wilson Mooers, born in 1858 in Greendale, Wisconsin. In 1884, Emma W. Mooers completed her medical degree at the University of Michigan in Ann Arbor. She was one of several women physicians whose names were mentioned two years later in a light-hearted report in the New York Tribune and the Ann Arbor Courier: "More than 50 alumni of the University of Michigan attended the dinner of the New York Association last night … A new, but no less pleasant, feature was the participation of female graduates, ten ladies … from Ann Arbor, who contributed to the splendor of the event with the grace of feminine accomplishment and achievement" . This evidently still seemed worth mentioning, although women had been able to study medicine at some US teaching institutions since the middle of the 19th century, i.e., 50 years earlier than at universities in Germany . Thereafter, Dr. Emma W. Mooers is listed as a member in the reports of the American Public Health Association. In 1888 she was living in Arlington, Massachusetts. In 1898 she became an assistant physician at the Northampton Insane Hospital. In 1899 she was admitted to the American Medico-Psychological Association. In 1902, the name Emma Wilson Mooers appeared in connection with a scandal in the Journal of the American Medical Association : an impostor had obtained false documents from the secretary of the University of Michigan by claiming that she was Dr. Emma W. Mooers and that her records had been destroyed in a fire. The false Dr. Mooers practiced first in northern Michigan, then in Chicago and finally in Colorado, where she was apprehended. A colleague had complained about her to the University of Michigan, where it was known, however, that the real Dr. Mooers was working at the time as a pathologist at McLean Hospital, Harvard's psychiatric hospital, in Waverley, Mass. "The whole affair placed a considerable burden on the real Dr. Mooers, whose work was always of the highest quality and whose conduct was thoroughly professional" . In 1903, Emma W. Mooers published a detailed study in the Boston Medical and Surgical Journal on a presumably bacterial meningoencephalomyelitis, with gross pathological and histological illustrations, in which she also cited several German-language papers . In 1904, her paper on the amnestic symptom complex in neurosyphilis appeared. She was the first female researcher at McLean Hospital . The 1904 annual report of the Michigan Alumnus notes that Emma Mooers had gone overseas and could now be reached at the address c/o Brown, Shipley & Co, 123 Pall Mall, London. In the winter semester of 1905/1906 she is listed in the register of guest auditors of the University of Munich; she was occupied with psychiatry and anatomy.
In the detailed 1906/1907 annual report of the Royal Psychiatric Hospital in Munich, Kraepelin writes on the first page: "The ranks of the scientific assistants were joined first by Dr. Plaut, then by Messrs. Rüdin and Isserlin, and finally by Dr. Mooers … Dr. Mooers supported Dr. Alzheimer in directing the anatomical laboratory" . Kraepelin accepted no staff who did not speak sufficient German. Two prominent Canadian psychiatrists visited the Munich clinic in the summer of 1907 and reported on it at length in the American Journal of Insanity . C.K. Clarke, professor of psychiatry at the University of Toronto, mentions: "Drs. Gudden, Moers , Plaut, Weiler and others have already acquired well-deserved fame, and the work of this enthusiastic band has enriched psychiatric science in a remarkable way and thrown light on the most intricate problems with which we have to deal" . Dr. Ryan of Kingston, Ontario, wrote: "Dr. Mooers, a LADY PHYSICIAN ('a lady physician') from America, is a well-known lecturer and a master of technique" . On the first page of the 1908/1909 annual report, Kraepelin mentions that, among the scientific assistants, Mrs. Mooers had left in order to return to America . A comparison of the periods during which the photographed researchers worked in Munich, together with the seating arrangement of the colleagues around Emma W. Mooers, suggests that this may be Mooers' farewell photograph from Alzheimer's laboratory (Fig.). The figure shows Emma W. Mooers in the USA, dignified and even better dressed than shortly before in Munich. On July 20, 1910, she was appointed "Custodian" of the neuropathological collection, making her the second woman at Harvard Medical School, albeit without proper faculty membership . Repeated reminders that she held a doctorate met with no response from the university administration. Her highly respected colleague Elmer Ernest Southard, Bullard Professor of Neuropathology, wrote to the university president that the title "Custodian" did not adequately express the true significance of the position or Mooers' role in research. Mooers herself found the title beneath her dignity. On September 29, 1910, university president Lowell wrote that he was happy to change the title in such a way that Mooers would be satisfied with it, "as long as this did not imply that she, or other women, were entitled to become faculty members" . On Saturday, May 13, 1911, Mooers and Southard injured themselves during the autopsy of a man with a highly virulent streptococcal tonsillitis, which at the time claimed many lives. Emma Mooers died on May 31, 1911, at the age of 52, of streptococcal sepsis and meningitis. News of her death appeared in Science , the British Medical Journal and various daily newspapers: a "martyr of science". Southard recovered after a severe illness and succumbed to pneumonia in 1920. He cited Mooers' contribution to Plaut's serological work . Others mention her work on neurosyphilis and a staining method developed jointly with Minkovski. Mooers' gravestone reads "Assistant in Psychiatric Clinic Munich 1905–1911, Custodian of the Harvard Neuropathological Collection 1910–1911; a devoted and discerning worker in the technic and science of neuropathology" (Countway Library). She had thus maintained her connection to the Munich clinic.
She had come too early and gone too early: too early to be treated successfully with antibiotics, too early to pursue a great career as a woman and to be remembered. Even the memory of her as the central figure of a group portrait, presumably made in her honor, was lost.
Mobile Forensics: Repeatable and Non-Repeatable Technical Assessments
03b932cb-7f5f-46d6-b286-5c0d3450da9f
9505885
Forensic Medicine[mh]
It should be pointed out that an additional aspect distinguishing digital investigations in the world of mobile forensics from their counterparts in computer forensics is the absence of free software tools capable of supporting penetrating and effective analyses that return robust evidence from the legal point of view . Although the methodology we present is not based on novel scientific results, it should be considered innovative and well founded, because it was designed by exploiting the synergies between the technical component and the legal framework. This means that the procedures presented in the following sections have been considered under two distinct priorities: first from the technical point of view, and then for the impact produced on the legal operating framework. In this work, we aim to highlight the differences that characterize a "mobile forensics" process with respect to the analogous process undertaken for fixed digital devices. This process, according to the ISO/IEC 27037:2012 standard, can be divided into four stages: (1) identification; (2) collection; (3) analysis; (4) preservation and presentation. The differences are concentrated in particular on stages (2) and (3). Best practices in this sector therefore exclude the possibility that the analysis of these devices be carried out in the place where the exhibit was discovered. The problem is how to isolate the mobile device from the network at the moment of collection. There are three main methodologies: airplane mode (switching off Wi-Fi and any other communication channel, such as Bluetooth); a Faraday cage; or turning on a jammer. In the case in which it is not possible to isolate the device, the only remaining option is to switch the mobile off, possibly by removing the battery. A further characteristic distinguishing digital investigations in mobile forensics from the analogous ones in computer forensics concerns the acquisition process itself. In digital forensics , the differences between a physical acquisition process and the analogous logical acquisition process are well known: physical acquisition is the only one recognized as valid from the forensic viewpoint because, when the data are not protected by encryption, it allows a clone of the acquired support to be obtained. The clone permits an invasive analysis, reconstructing not only logical contents but also the links among single memory locations that have been allocated but are no longer in use, and thus recovering deleted files and folders. Logical extraction, instead, is an analysis activity requiring a greater stock of IT knowledge, since it necessarily requires connecting the device to a workstation running software dedicated to backing up the telephone, usually developed by the manufacturer of the device itself (for example, "Kies" for Samsung phones using the Android system and "iTunes" for devices manufactured by Apple). This software reconstructs the contents saved on the device (both internal and removable memory), which are represented both at the file system level (files or folders) and at the application level (organization of contacts, messages, audio, video, image files, etc.) .
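By way of illustration, the following is a minimal sketch of such a logical acquisition step, scripted in Python around adb (the Android Debug Bridge) and followed by hashing the output for chain-of-custody notes. It is an illustration under stated assumptions, not a validated forensic procedure: it assumes adb is installed, USB debugging is enabled, and the backup is confirmed on the device screen; the output file name is hypothetical.

```python
# An illustrative sketch, not a validated forensic procedure: a logical
# acquisition from an Android handset via adb, followed by hashing the
# result so that its integrity can be documented and later re-verified.
import hashlib
import subprocess

def adb_logical_backup(out_file: str = "backup.ab") -> str:
    # 'adb backup' writes an Android backup archive of app and shared
    # storage data; it touches user-visible data only, not raw partitions.
    subprocess.run(["adb", "backup", "-f", out_file, "-all", "-shared"],
                   check=True)
    h = hashlib.sha256()
    with open(out_file, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()   # value to record in the acquisition report

if __name__ == "__main__":
    print("SHA-256 of acquired backup:", adb_logical_backup())
```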
The inspiring element of the technical analysis has been Legge n. 48/2008, derived from the "Budapest Convention", and the body of rules defined in the Italian Codice di Procedura Penale (CPP), which describe two methods for acquiring digital evidence: 1. Article 359 of the CPP, entitled "Technical consultants of the public prosecutor", provides: "The public prosecutor, when carrying out investigations, reports, descriptive or photographic surveys and any other technical operation for which specific skills are required, can appoint and make use of consultants, who cannot refuse their services"; 2. Article 360 of the CPP, entitled "Non-repeatable technical investigations", provides: "When the investigations provided for in Article 359 of the CPP concern people, things or places whose status is subject to change, the public prosecutor shall notify, without delay, the person under investigation, the person offended by the crime and the defendants of the day, time and place set for the assignment and of the right to appoint technical consultants". Among the different types of extraction presented in the previous section, one needs to select the one that provides the best compromise among multiple factors of different nature, taking into account: the non-specialist skills of the judicial police officer who analyses the exhibit; the legal rules governing the operation; and the type of technical assessment ordered by the Judicial Authority (i.e., if the assessment is conducted under the constraint of non-repeatable technical assessments, then Article 360 of the CPP applies; otherwise, for repeatable technical assessments, Article 359 of the CPP applies). In both cases, the goal is to keep the device unaltered during the investigation .

3.1. Protection Measures of Access to Memory in Android and Apple Devices
From a forensic viewpoint, a physical extraction process is clearly preferable to a logical process. Unfortunately, unlike in computer forensics, this activity in mobile forensics is particularly complex due to the numerous restrictions imposed by the proprietary and non-proprietary operating systems installed on smartphones. We can start by focusing our attention on the two most widespread operating systems in the mobile phone market: Android and iOS. Android does not allow full access to the partitions present in the internal memory of the telephone: read and write access is possible for only one of them, the user data partition. Access to the other partitions is forbidden even for read-only operations, since Android is based on a Linux-type kernel which, as with all the countless distributions of operating systems of this type, is designed to be multi-user: it allows resources to be shared so that multiple users can share the same machine, each having access only to the set of resources allocated to them, with no possibility of accessing resources owned by other users. Returning to the Android system, the considerations made about the permissions granted to individual users remain perfectly valid: to have full access to the system memory, in order to replicate it in bit-to-bit mode, one needs root privileges.
In the Linux world, it is not difficult to be granted these privileges: if the terminal is used by a single user, it is possible to type specific commands that first configure a password for the system administrator account and then start a session with those credentials. Unfortunately, in the world of Android systems this operation is much more complex. Indeed, the only way to implement the so-called "rooting" of the system consists of modifying some of the memory partitions listed above, for example the bootloader and recovery partitions. Even though rooting is strongly discouraged, since a small error during the procedure, or the selection of a custom ROM not suitable for the specific device, can lead to very serious consequences with permanent damage to the system, it also has some advantages: for example, a device can become more accessible when it is obsolete and the manufacturer no longer releases updates, and there is the possibility of removing all those applications that have been preinstalled by the producer but are rarely used by the user, freeing up additional areas of previously occupied memory. The disadvantages, however, are considerable: manufacturers do not look favorably on the acquisition of root permissions, judging it highly dangerous and detrimental to the normal functioning of the device, and the user, after rooting the phone, automatically loses the rights associated with the warranty. Some applications are designed so that they do not work when they detect a manipulation of the bootloader or recovery partition on the device. In general, rooting makes the device less secure, since full access to all system files provides fertile ground for malware that, once installed, can communicate sensitive and confidential data stored on the device to third parties. Having discussed the challenges of carrying out a forensic acquisition from Android devices, we can now discuss the same issues in the context of mobile devices released by Apple and equipped with the iOS operating system. These devices have an even stronger protection system than the Android OS used by many competitors, since they are equipped with an encryption technology that, within a few seconds, can make all content in the phone memory inaccessible, unlocking it only when the user enters the unlock code. These measures make it impossible to analyze the device without knowing its unlock passwords and, at present, there are no tools in mobile forensics that can overcome these restrictions. Apart from encryption, iOS also adopts restrictions that prevent access to the system partition, and here too one needs to use techniques that remove these restrictions. In the iOS world, there are two principal techniques: (a) jailbreaking; (b) use of Device Firmware Update (DFU) mode, i.e., uploading a RAM disk onto the device that allows full access to the memory while using tools to attack the passcode set on the device with brute-forcing techniques. It should be pointed out that, in recent years, Apple has invested heavily in techniques enabling rapid encryption of the whole phone: for example, the iPhone 3 model had a hardware component for AES encryption, while the iPhone 4 provided full encryption of the system and data partitions.
This technique is implemented through a particular organization of the internal memory of the phone: the memory is divided into blocks and, in particular, the first block, called PLOG, is used to store encryption keys and all the data necessary for wiping the device quickly if it is subjected to repeated failed password attempts. There are three encryption keys in the PLOG block, called BAGI, Dkey, and EMF!, respectively. The EMF! key is used to encrypt the file system, and each time the device is wiped the key is discarded and regenerated. If this key is not available, it is clearly unthinkable to retrieve the contents of the file system. In addition, within the file system, each file is associated with its own encryption key: every time a file is deleted, its encryption key is also deleted, so even if, from the analysis of a physical clone, we can isolate the bit sequence belonging to the deleted file, it yields no result without the key needed for decoding. Each encryption key is in turn encrypted with a master key, the complexity of which depends on the security class. iOS provides several classes of protection, each identified by a master key. When the user types the passcode, it actually acts on the two master keys associated with the two main security classes, thus unlocking the files whose keys are encrypted with those master keys. There are also files within the file system whose encryption keys do not correspond to any master key or security class; the encryption keys of these files are stored in the Dkey block of the internal memory. This is why it is necessary to boot the device with a RAM disk: doing so enables the analyst to access the keys stored in these special memory blocks, especially the EMF! block and the Dkey block. Until the iPhone 5, the passcode unlocked data such as e-mail messages, internal files containing the passwords of Wi-Fi networks to which the device had connected and for which an automatic connection was set, and data files from third-party applications. These data, especially in the context of a digital forensics examination, are indispensable, but without knowledge of the passcode they are unfortunately inaccessible. This explains why, as far as i-devices are concerned, it is necessary to try to circumvent the security measures present in the system in order to conduct an effective forensic analysis. Booting the device with a RAM disk carefully assembled for this purpose allows, on the one hand, access to the memory blocks where the encryption keys are stored and, on the other hand, a brute-force attack on the passcode without the risk of triggering the wiping of the phone after repeated entry of a wrong unlocking code. The second technique that can be used for forensic analysis is jailbreaking. This is the procedure that removes the software restrictions imposed by Apple on iOS and tvOS devices. It allows the installation of third-party software and packages not signed or authorized by Apple, as an alternative to those of the App Store. After jailbreaking the device, many applications can be installed and the software modified through alternative stores such as Cydia, Icy, Rock and Installer, on the understanding that a "jailbroken" device can still launch and update applications bought from the official store.
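To make the key hierarchy described above concrete, here is a conceptual Python sketch, emphatically not Apple's actual implementation (the class names and flow are assumptions): per-file keys are wrapped by a class master key, so discarding a key makes the ciphertext permanently undecodable. It uses the third-party "cryptography" package.

```python
# A conceptual sketch of a per-file key hierarchy, NOT Apple's actual code:
# each file has its own key, file keys are wrapped by a class master key,
# and discarding a key renders the ciphertext unrecoverable.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ProtectionClass:
    """Models one security class: a master key that wraps per-file keys."""
    def __init__(self):
        self.master_key = AESGCM.generate_key(bit_length=256)

    def wrap(self, file_key):
        nonce = os.urandom(12)
        return nonce, AESGCM(self.master_key).encrypt(nonce, file_key, None)

    def unwrap(self, nonce, wrapped):
        return AESGCM(self.master_key).decrypt(nonce, wrapped, None)

def encrypt_file(data, pclass):
    file_key = AESGCM.generate_key(bit_length=256)   # one key per file
    nonce = os.urandom(12)
    ciphertext = AESGCM(file_key).encrypt(nonce, data, None)
    return ciphertext, nonce, pclass.wrap(file_key)  # wrapped key kept aside

pclass = ProtectionClass()
ct, nonce, wrapped = encrypt_file(b"chat history", pclass)

# Normal access: unwrap the file key with the class master key, decrypt.
fk = pclass.unwrap(*wrapped)
assert AESGCM(fk).decrypt(nonce, ct, None) == b"chat history"

# "Fast wipe" = discard the (wrapped) key. The ciphertext bits may still be
# carved out of a physical image, but they can no longer be decoded, which
# is why a bit-stream clone alone yields nothing for wiped or deleted files.
del wrapped, fk
```

This mirrors the point made in the text: deleting a key, rather than overwriting the data, is what makes both the fast wipe and per-file deletion effective against physical carving.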
3.2. The Constraint of Repeatability
The previous section has allowed us to understand that, for both Apple and Android family devices, although physical extraction is the one that most closely adheres to the legal constraints of any forensic analysis activity, it encounters many difficulties, unlike extraction conducted in computer forensics. While full access to memory is indispensable for a comprehensive and legally effective analysis, it is difficult to achieve for smartphones. The considerations made so far tend to weaken the hypothesis that the process of acquiring digital evidence from smartphones can be considered an act of a repeatable nature : not only is there a need to work on the phone but, more importantly, internal firmware alterations must be made that compromise the state of the device as it was before it was seized from the owner. Here we will try to put forward a different position, more in line with the needs of the judicial police, as to whether or not the acquisition process can be repeated . Let us start by saying that the reflections made so far about how to achieve a complete acquisition of the phone concern only techniques of common use, i.e., those available to someone who, although not equipped with specific instrumentation, intends to make a "full disk" acquisition of the internal memory chip of a phone. These limitations do not concern the work of the judicial police, which has at its disposal expensive instruments, certified by international standards and designed for this type of technical inspection. Certainly, we have repeatedly suggested caution, reiterating that the belief that the judicial police possess "secret" tools and techniques that can overcome any obstacle and protection measure in the computer field is wrong (a cyber investigator, for example, can only take note of the existence of an encrypted volume, and possesses no techniques other than brute-forcing to try to discover the password). With regard to technical inspections carried out on devices of this nature, it cannot be assumed that a physical acquisition is an act of an unrepeatable nature for every device existing on the market. For some devices, in particular older ones, it is possible to carry out acquisition operations starting from a phone-off condition. Some instruments are able to physically acquire certain Samsung smartphone models from a switched-off state in which the phone is started, through an external impulse, in a particular operating condition, "download mode" (analogous to computer forensics techniques in which the system is booted from a live Linux distribution). In any case, regardless of whether or not it is possible to carry out a physical acquisition on any device capable of returning a technical assessment and digital evidence in a perfectly repeatable manner, we consider it appropriate in this section to set out some considerations on whether or not to carry out such an operation on devices of this nature. In our opinion, the protocols and "best practices" followed in the world of computer testing should not obey legal constraints alone. Rather, the inspection must be guided more by technical constraints, which are directly linked to the characteristics of the device under investigation.
It is obvious that the majority of legal disputes and criminal proceedings will depend on the results of expert reports or inspections carried out on IT devices, and that the techniques used will be constantly evolving and will always have different characteristics, with the result that it is impossible to define methods and processes for such investigations that guarantee that the digital evidence collected is always, and in any event, returned in a perfectly repeatable manner. Without moving far from the present day, imagine that in a criminal case a suspect confesses their guilt, stating that a cloud account in their possession holds numerous items of digital evidence that would facilitate the work of the judicial police in the search for sources of evidence, in exchange for a possible reduction in sentence. In such a scenario it is clearly impossible to attempt a physical acquisition process, not only because the server hosting the data may be in a foreign country that does not accept international requests, but also because the server cannot be shut down to carry out a bit-stream acquisition of its internal memory (assuming that its size and characteristics would allow the process to be concluded in a reasonable time). That is why the only way to obtain such evidence is to download it; in that case, further difficulties falling within the scope of network forensics would have to be addressed, together with the fact that the Internet, being a "best-effort" infrastructure, offers no guarantee as to the order in which the bits assembling the data are downloaded. Most likely, the files downloaded from time to time will always have the same hash, but it cannot be established with certainty that the same download, repeated countless times, always returns a file with the same hash code. This simple example shows that the physical extraction process, although more adherent to legal constraints, is an operating methodology successfully adopted exclusively in the computer forensics world, and the chances that it will be successfully adopted in every future scenario are extremely low. Moreover, returning to technical investigations conducted on mobile devices, a total acquisition of the internal memory is an inefficient operation, because it is carried out with considerable effort (and with a high risk of "bricking" the phone) to collect a quantity of data of which only a small part is actually of interest for the investigation. The data of interest to the investigator coincide, in almost all investigative scenarios, exactly with those produced by the user, as the only ones able to reliably reproduce details relating to frequencies, social relations, messaging exchanges, and so on. The only constraint the investigator has to respect in order to turn these data into valid digital evidence is not to alter them, either during extraction or during custody; such alteration clearly does not occur in a logical acquisition, and it is also excluded in a physical acquisition that requires a preliminary alteration of the device firmware, or switching the phone on and off during the procedure, since only system files, and not user data, would be touched.
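Returning to the cloud-download example above, the only practical way to show that a downloaded exhibit is stable is to hash it on every retrieval and compare the results. The following minimal sketch illustrates the idea; the URL is a placeholder and this is an illustration of the point, not a forensic protocol.

```python
# A small sketch of the point above: hash a remote exhibit on each
# retrieval and compare the digests to check stability across downloads.
import hashlib
import urllib.request

def sha256_of_url(url: str) -> str:
    h = hashlib.sha256()
    with urllib.request.urlopen(url) as resp:
        for chunk in iter(lambda: resp.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

url = "https://example.org/evidence.bin"   # hypothetical exhibit
first, second = sha256_of_url(url), sha256_of_url(url)
print("stable across downloads:", first == second)
```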
Moreover, in the case of devices capable of interacting with the Internet, it should be noted that Legislative Decree No. 109/2008 has stipulated precise timelines for data retention by providers of electronic and telematic communications services. This rule is a great advantage for digital investigators since, especially in the case of telephones, analyses can be conducted at a more superficial level: the "traces" that the telephone has left in the network (the cells to which it connected, telephone calls, identifiers of the handsets in which a certain SIM has been inserted) represent information that can easily be obtained through a request made by the Judicial Authority to the telephone operator used by the suspect, so that a physical extraction would return an excessive amount of data, of which only a small, perfectly repeatable part would actually be useful for the investigation. Of course, all the considerations made so far have one common factor: it is not possible to conduct a physical extraction that is perfectly repeatable, meaning by this the possibility of obtaining, time after time, a clone of the memory from which the same hash is derived. The extraction of the digital evidence that is actually of interest for investigative purposes (the data within the "user" data partition) is a perfectly repeatable operation, since there is, with absolute certainty, independence between the internal state of the analyzed device and the data contained in the user partition . Our inferences lack legal confirmation, i.e., an "imprimatur" on the part of the legislature that would also endorse our conclusions from the standpoint of case law. Indeed, as can be seen with regard to Law No. 48/2008, the legislature did not discuss technical matters but merely outlined aspects relating to the data collected and presented at trial, without specifying any technicalities: this is due not only to the sensitivity of the subject being addressed, but also to the fact that the computer world is constantly evolving. In reality, what would benefit the digital investigator is not the introduction by lawyers of well-defined best practices governing the way in which a computer search is carried out on any telephone (which is obviously impossible), but rather the adoption of a position according to which computer data, intended to become digital evidence, must not be assessed on the basis of the techniques used for their collection, but on the basis of their ability to represent evidence and of the possibility that they are not subject to unintended alterations caused by the device on which they reside. In other words, if the data always have the same evidential content and are of such a nature that they are not altered, not even accidentally, so that a unique hash code can always and in any event be associated with them, then the principle would prevail that the evaluation of digital evidence should not be made from a purely formal point of view (valid only if it comes from a physical extraction process that enjoys full repeatability) but, rather, from a strictly substantive point of view (the data keep the same hash code and the same evidential content), so that the acquisition techniques adopted would become of secondary importance, to the advantage of an evaluation made exclusively on the data of interest . Before discussing the study of the repeatability of the extraction process, we thought it appropriate first to describe in detail the techniques and tools used to conduct the acquisitions .
The most important points to be resolved are: 1. one must be able to compute the hash value of each file obtained from an acquisition process; 2. a methodology must be developed that makes it possible to compare, for each file extracted from the same phone with several different acquisition processes, the hash value it presents each time. Both issues clearly require automated solutions, as the number of files expected from each extraction is certainly not small . As regards the tools used for extraction, the forensic acquisition software UFED4PC v.7.12.0.14, provided to the "Computer Crimes Team" at the Public Prosecutor's Office of the Republic in Milan, was used. That software has essentially the same interface as UFED Touch, except that in this case the acquisition can be carried out by directly connecting the phone to the personal computer on which UFED4PC is installed. Once the acquisition process is complete, UFED4PC produces a set of files, depending on which extraction technique is performed. The types are as follows: an image file in the case of a physical acquisition process; compressed files (.zip) in the case of a file system acquisition process; media files and backup files in the case of a logical acquisition process. Each acquisition process, whatever its type, also produces a file in a proprietary format, with the extension .ufd or .ufdx. This file can only be read through UFED Physical Analyzer. Once the above files have been opened through the Physical Analyzer, it is possible to "decode" them, i.e., to process the file obtained downstream of the acquisition process in order to extrapolate from it the files found on the phone. These files are also subdivided by type and stored in folders (for example, the "Image" folder will contain the extracted images, the "Audio" folder the audio files, etc.). Together with the extrapolated files, an executable file named "UfedReader.exe" is also generated which, once launched, gives access to a rich body of information, with the possibility of generating an extensive report in numerous formats (.xls, .pdf, .html, etc.). Once the files extracted from the phone under investigation have been obtained, the hash of each individual file must be computed. UFED4PC is also useful here, as the reports it generates can include, for certain types of files, two hash values, produced with the MD5 algorithm and the SHA-256 algorithm. In addition, the report organizes, in an easily understood interface, data of other kinds as well, generating, for example, the list of incoming and outgoing phone calls, the list of text messages exchanged, the messaging conversations, and more. The main limitation of this solution lies precisely in the fact that hashes are computed only and exclusively for files that may be of interest for investigative purposes, leaving out files of any other nature. To remedy this problem, it was decided to use the commands available under Linux through the shell. Specifically, we used two commands: the first is md5deep with the recursive option; the second is sha1deep with the same option.
In other words, through the Linux shell, after selecting the folder within which all the files obtained by the acquisition process are nested, including the reporting files (these clearly not of interest for our purposes), we launched the two commands described above which, thanks to the recursive option, computed, starting from a root node (folder), the hashes of all the files contained in all the leaf nodes (folders). The computed hashes were collected, using the output management options, into separate text files, following a format that shows, on each line, the path of the file and its hash code. The data thus collected were subsequently transferred to Excel sheets specifically created for these experiments and were cross-referenced using automated tools. One of the phones tested is a Samsung smartphone, a Galaxy S4-I9515 with Android operating system version 5.0.1 and an internal memory capacity of 16 GB. Before starting the tests, the phone was properly configured so that the analysis activities could be carried out without hindrance, and all protection measures, consisting of unlock codes or pattern locks, were removed. The operations conducted on the phone are exactly those suggested by the UFED4PC software, namely: disabling the automatic shutdown system; entering airplane mode; setting the Media Transfer Protocol (MTP) as the communication protocol for interfacing the phone with the workstation via a USB connection; and activating developer mode. After these configurations were made, the device was attached via a USB cable to the workstation where UFED4PC was installed. The software automatically recognized the device, so the analysis could start immediately. The first operation was a physical acquisition activity. Following the suggestions provided by the acquisition software, the phone was first switched off, then the yellow plug "T-133", supplied with UFED4PC, was attached to it. The phone then started in download mode, after which it was connected via USB cable to the terminal used for the acquisition. After a preliminary reading of information, the acquisition software required the device to be disconnected, switched on in normal mode, switched off, and reconnected to the aforementioned yellow plug, after which the phone started again in download mode. At this point the phone, still with the yellow plug attached, was connected back to the acquisition computer and the extraction process started regularly. The task lasted a total of one hour and 46 min and two files were generated at its end: Samsung GSM_GT-I9515 Galaxy S4.ufd, size 1 KB; Dumpdata.bin, the image file corresponding to the physical clone of the phone under consideration, size 15,388,672 KB. The two files were saved in a folder named "Physics_1". At this point, through the aforementioned .ufd file, we generated the report related to the acquisition just completed. The files assembled from the image produced were all saved in a folder called "Report", also contained in the aforementioned "Physics_1" folder. After this first acquisition, we performed a second data extraction from the phone, again choosing a physical acquisition activity. The extraction lasted a total of one hour and 44 min, at the end of which we obtained two files with names and sizes in KB perfectly coinciding with the analogous files obtained in the previous case. These data were saved in a folder called "Physics_2".
Again, using the .ufd file, we generated the reports and the files obtained from the extraction process: these files were saved in a folder called “Report”, which is also contained within the aforementioned “Physics_2” directory. We then moved on to file system extraction. For this activity the phone was kept on, always remaining in airplane mode, and no settings were changed with respect to those already set for the physical acquisitions. The file system extraction lasted 11 min and 38 s and produced the following files: Samsung GSM_GT-I9515 Galaxy S4.ufd, size 1 KB; Samsung GSM_GT-I9515 Galaxy S4.zip, containing the extracted data, of size 42,697 KB. As with the previous acquisitions, these data were saved in a folder called “File_system_1”, in which we also saved the report obtained through the .ufd file. Following this acquisition, we conducted another identical one, lasting 11 min and 28 s, which produced the same files as in the previous case. They were saved in the folder marked “File_system_2”, where we also downloaded the files obtained from the report. Finally, two logical acquisitions were carried out. Again, no phone configurations different from those of the previous acquisitions were required. The first logical extraction lasted 8 min and 21 s. This procedure, being rather similar to a normal backup, mostly returns multimedia files produced directly by the user. In addition, “container” files are generated, in which content such as the call log, all incoming and outgoing text messages stored on the telephone, all conversations via messaging apps, the set of contacts in the address book, etc., is saved. In this case, too, reporting files are produced in HTML format, containing the list of the various types of data found on the device; these are not to be confused with the similar report generated through the .ufd file using the UFED Physical Analyzer program. The set of files obtained from the logical acquisition was saved in a directory named “Logica_1”, and the reporting files were subsequently saved in a folder inside the latter, in a manner entirely analogous to the previous cases. Later, a second logical acquisition was made, of the same duration as the previous one, whose files were saved in a folder called “Logica_2”, in which we also downloaded the reporting files. Once we had obtained the data of the individual acquisition processes, we computed for each file its hash code, both with the MD5 algorithm and with SHA-1, and then moved on to the subsequent comparisons using the Excel macros described above. Let us first point out that, in practice, it is unusual for a single phone to undergo repeated acquisition processes: the police officer will assess whether only a single acquisition can be made on that device, clearly choosing the one that can produce the information that best meets the investigative needs. To obtain a complete study on the repeatability (or otherwise) of the acquisition process, however, we chose to perform multiple extractions and compare the results obtained.
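The cross-referencing itself was done by the authors with Excel macros; as an illustrative alternative with the same logic, the sketch below parses two md5deep-style listings (the listing file names are hypothetical) and reports the files whose digests differ. It keys entries on paths relative to each extraction's root folder, since the two dumps live under different directories ("Physics_1" and "Physics_2").

```python
from pathlib import Path

def load_hashes(listing: str, root_prefix: str) -> dict[str, str]:
    """Parse an md5deep/sha1deep listing (digest first, then the file path).

    Entries are keyed on the path relative to root_prefix so two extractions
    stored under different folders can be compared. Assumes the digest is the
    first whitespace-delimited field, as in md5deep's default output.
    """
    mapping = {}
    for line in Path(listing).read_text().splitlines():
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        digest, path = parts
        rel = path.strip()
        if rel.startswith(root_prefix):
            rel = rel[len(root_prefix):].lstrip("/")
        mapping[rel] = digest
    return mapping

def diff_hashes(first: dict[str, str], second: dict[str, str]) -> list[str]:
    """Relative paths present in both extractions whose digests differ."""
    common = first.keys() & second.keys()
    return sorted(p for p in common if first[p] != second[p])

if __name__ == "__main__":
    # Hypothetical listings produced by `md5deep -r Physics_1 > physics_1.txt` etc.
    run1 = load_hashes("physics_1.txt", "Physics_1")
    run2 = load_hashes("physics_2.txt", "Physics_2")
    changed = diff_hashes(run1, run2)
    total = len(run1.keys() & run2.keys())
    print(f"{len(changed)} of {total} common files have different hashes "
          f"({100 * len(changed) / max(total, 1):.2f}%)")
    for path in changed:
        print("  " + path)
```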
In any case, the comparison of results between operations of different types is not easy, as the data obtained from each type of extraction differ, both in quantity and in type; since equal extraction types produce the same number of files, while different types produce different amounts of files, we decided to adopt the following criteria for comparing the data: for the first type of comparison, we used the hashes obtained directly from the commands typed in the Linux shell, while for the second type we used the hashes contained in the reporting files automatically generated by the acquisition software. Independently of the acquisition process adopted (physical, logical, or file system), the files we found fall into one of these categories: (a) database files; (b) image files; (c) text files.

5.1. Comparing Physical Acquisition vs. Physical Acquisition

Let us start by comparing the results obtained in the two physical acquisition processes. Each of the two acquisition processes generated two files, one of which, the Dumpdata.bin file, is the physical image obtained from the internal memory. Before comparing the hashes of the individual files, we found it interesting to also compare the hashes of the two physical images produced. As shown in the figure, the hashes of the two images do not coincide, but this is neither a surprise nor an anomaly: as already specified, the phone was restarted between the two physical acquisitions, so its internal state (and therefore the physical image obtained) inevitably changed. What we should consider instead is whether the hash values of the single files changed and, if so, which files actually underwent changes due to the regular execution of the internal processes of the operating system. With the help of the acquisition software, it was possible to extrapolate from the two physical images a total of 9112 files. Only a very small percentage (less than 0.9%) of these files, listed in , produce a different hash code. It was also noted that the files that underwent a change belong to a few types: text files traceable to the applications installed on the phone, database files, log files, and XML files. We can now analyze how these files relate to the data that are a possible source of evidence and for which the complete repeatability of the extraction process has to be demonstrated. Let us begin with the nature of the first three files in the previous table, which, as can be seen from their paths, are related to the messaging application WhatsApp, from which numerous sources of evidence clearly emerge. These text files contain the sequence of messages exchanged with a given contact in the address book using that application. The observation of a different hash for these files could raise alarm, because it would lead us to believe that the extraction of evidence related to the messaging produced with this application is not repeatable. In fact, a more in-depth analysis showed that the aforementioned files do not belong to WhatsApp, in the sense that they are not files saved within the phone memory, but rather are automatically generated by the extraction software when the report is created.
The purpose of these files is to present the content of chat conversations in a more intelligible format, but in reality—we insist—these files do not come from the internal memory of the phone. They have the same content in terms of the address book contacts with which messages were exchanged and, having been produced at different times, they naturally have different hashes. In order to demonstrate the repeatability of the acquisition of digital evidence belonging to WhatsApp, it must therefore be shown that the files internal to the phone belonging to that application have the same hashes. Let us start by making clear that there is no single standard by which this application organizes its files within the phone’s memory: the organization depends on many factors, such as the operating system installed and whether or not a removable memory is present. In the case of this phone, which has no removable memory, the application saves messaging data in the following database files: msgstore.db, which contains the encryption key with which messages sent from the phone are encrypted; and wa.db, in which the messages encrypted with the above key are saved. The study of repeatability must therefore focus exclusively on these files. By comparing the hash codes, we found them to be perfectly identical; therefore, we have sufficient reason to believe that the extraction of digital evidence attributable to the WhatsApp application is a perfectly repeatable operation within a physical acquisition process. Let us now study the nature of the other files whose hashes were found to be altered. A particularly numerous type of file for which a hash variation was found is that with the .eml extension. These files are a “local” version of an e-mail message, so they can be traced to mail client applications. In the case of the phone examined, the only such application present is the well-known Gmail, which gives access to Google domain mailboxes. For reasons of constitutional law, the acquisition of e-mail is a rather thorny issue: not only because a different approach is needed depending on where the e-mail is located (e.g., saved locally because it is managed by a mail client program, or saved in a webmail and therefore to be acquired with Internet forensics techniques), but above all because it touches on the constitutional right to secrecy of correspondence. Returning to our analysis, those messages are clearly linked to an e-mail client app, since the telephone, at the time of acquisition, had no connection to the Internet; yet this type of message shows numerous hash alterations. Again, such findings should not cause alarm, as closer examination revealed some rather interesting details. In fact, some .eml files do not actually contain a proper e-mail message, but only a description of one, i.e., metadata-related information such as the type of encoding used, the subject of the mail message in an encrypted format, the MIME extension for encoding multimedia attachments, etc. The cause of the hash variation is probably a different value presented, from one extraction to the other, in the date field: we note, first of all, the total absence of textual content assimilable to a real e-mail message, but above all we found a variation exclusively in the said field.
As we have already stressed several times, during the various extraction activities the phone was kept in airplane mode, so it was not possible to produce new e-mails or alter those already existing. Among the files with the .eml extension, some were found to reproduce the content of an e-mail, which, for obvious reasons, represents data that can validly be considered computer evidence. In this case, we did not identify, among these files obtained in the two extractions, any difference in content between the two versions that would justify a variation of the respective hash codes. As a result, we decided to analyze the metadata of these files, with particular reference to the values of the “creation date” and “last modified date” fields accessible from the properties window of the Windows environment. This analysis clarified our doubts, since the creation date of the file did not coincide with the moment when the e-mail was sent or received, but rather with the moment when the mail was extrapolated from the binary file through the creation of the report. In other words, the change in the hash is not due to an alteration of the data, but to the algorithm with which UFED Physical Analyzer produces, from the acquired image, the representation of the contents: for some of them it simply reconstructs the file as extracted from the phone; in other cases, exploiting files traceable to certain applications (such as, in the case under consideration, those of e-mail), it processes the contents of those files and generates new ad hoc files when the report is produced. Indeed, it should not be forgotten that this instrument was designed to be non-invasive and not to alter, even accidentally, the analyzed devices, but its guaranteed properties certainly do not include an obligation to ensure the complete repeatability of all extracted data. The files obtained each time have the same content, without altering the “substance of the evidence” represented by the content of the mail itself, so we have sufficient information to infer, again, the repeatability of the acquisition process. Another type of file for which many variations were found is the file with the .xml extension. These files contain settings for applications, and it is the applications that change the content encapsulated in the various tags. Again, we cannot identify exactly which processes are responsible for managing these files, but we can be sure that the inevitable alterations to these files do not result in any pollution of the data that represent the sources of evidence. Just to give an example, we analyzed the contents of the phone-account-registrar-state.xml file: this document records the phone’s status in relation to the signal provided by a telephone operator, the same operator that owns the SIM inserted in the device. This file does not retain the same hash in the two acquisitions because the second version obtained from the extraction does not record the telephone operator’s acronym in the <label> and <short_description> tags. Finally, the last type of file for which variations were observed is the database type. More precisely, .db files are managed by the phone’s internal routines to store a set of information in a structured manner.
Suffice it to say that, without root permissions, access to these files is inhibited, as they may contain strictly confidential information that is essential to the integrity of the phone. Using an SQLite browser, one obtains the structure of the data saved in the offline.db file: a series of tables that handle information related to the functioning of the phone, for example, the active processes (all identified by a set of text fields such as account_id and storage_id), the mapping of resources, etc. In this case, too, there is complete independence between the data managed by these files and the internal phone data that can be used as evidence; this independence is in fact the common thread of all the considerations made in this section. In other words, we have been able to show that physical extraction processes, although they suffer from inevitable variations due to the normal operation of the phone—which, as stated above, must be kept on and restarted to allow the correct execution of the extraction—are nevertheless able to realize, in a perfectly repeatable manner, a forensic acquisition of the data that can be used, as digital evidence, for investigative purposes.

5.2. Comparing File System Acquisition vs. File System Acquisition

We can now analyze the results obtained with the two file system extraction processes conducted on the phone. Both processes returned 374 files, of which only 30 showed a change in their hash codes (in percentage terms, only 8.02% of the total). This analysis shows essentially no results different from those obtained for the physical extraction processes. The files that vary between the two acquisition processes are of the same types as those highlighted in the previous section, namely .db files (18 out of the total of 30) and .xml files (7 out of the total of 30). Of all these files, we decided to investigate in particular the nature of one database file for which a hash alteration was recorded: the calendar.db file. This database file is used by applications that function as a calendar or electronic diary. In the case of the device under consideration, the application that manages that file is Google Calendar. Confirmation of this link also comes from analyzing the file with an SQLite browser: within the tables contained in the file, we observe numerous records whose “accounts” field holds the Gmail address associated with the phone and used to access the Play Store. Analyzing the database file, we were unable to isolate any discrepancy between the contents of the two versions of the file obtained from the two extraction processes under comparison. It is crucial to understand whether a change in the calendar.db file would somehow result in a pollution of the evidence sources extractable from the calendar application: the latter can be a valuable source of evidence, as it is useful for checking, where necessary, which appointments and memoranda the suspect has recorded. We therefore decided to take advantage of the report generated by UFED Physical Analyzer, which has a section dedicated to the memos registered in the calendar applications on the phone. The comparison of the generated reports confirmed the presence in the calendar application of 57 reminders, which were extracted in both acquisitions and listed in the related reports in the same order.
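As an illustration of this kind of inspection, the following minimal sketch uses Python's standard sqlite3 module to list the tables of an extracted database and dump a sample of rows. The file name calendar.db matches the file discussed above, while the row limit is an arbitrary choice for the example.

```python
import sqlite3

def inspect_database(db_path: str, sample_rows: int = 5) -> None:
    """List every table in an extracted SQLite file and print a few rows of each."""
    # Open read-only so the inspection itself cannot alter the evidence file.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        cur = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        )
        for (table,) in cur.fetchall():
            print(f"-- {table}")
            # Table names come from sqlite_master, so quoting them is safe here.
            query = f'SELECT * FROM "{table}" LIMIT ?'
            for row in conn.execute(query, (sample_rows,)):
                print("  ", row)
    finally:
        conn.close()

if __name__ == "__main__":
    # Substitute the path of the database extracted from the acquisition.
    inspect_database("calendar.db")
```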
For these reasons, we cannot precisely identify the internal causes and processes of the phone that touched the aforementioned database file between one extraction and another; we have shown, however, that the extraction of such data is in fact a repeatable process since, in line with the established case law of the Supreme Court of Cassation, no alterations are observed that would result in a loss or alteration of the evidential content embodied in the extracted data. As regards the remaining files, we can draw the same conclusions as for the previous acquisition, i.e., no files are altered other than system files managed by the applications and by the internal processes of the operating system, which is why we have no reason to doubt that the acquisition of the files that can constitute valid digital evidence is, for this type of acquisition process as well, a repeatable activity.

5.3. Comparing Logical Acquisition vs. Logical Acquisition

Finally, in this section we make the last comparison between acquisitions of the same type, analyzing the results obtained in the two logical extractions. The two extractions returned 388 files, of which only 18 (listed in ) show a different hash value (4.63% of the total). As we can see, the variations involve files of the same types as in the previous acquisitions, with a wide overlap with the files that already showed a different hash code between one activity and another in the previous processes. Again, there are no modifications to the files that can be taken as evidence and that are managed directly by the user. It is therefore entirely reasonable to consider the acquisition of these files, in the case of a logical extraction as well, a perfectly repeatable process. At the end of this paper, some reflections on the results presented are in order. In the case of mobile devices such as, in this context, smartphones, the issue at stake, as we have seen, changes completely: not only do these devices have protective measures that prevent full access to the memory, but above all they do not all have the characteristics that allow a post-mortem analysis. The best practices successfully tested in the world of computer forensics, and Law no. 48/2008, which regulates the complex theme of cybercrime in Italy, have not found full realization in the field of mobile forensics, given the impossibility of carrying out an acquisition that clones the internal memory of the phone in its entirety through a perfectly repeatable process. Consequently, pending a common understanding on whether a forensic analysis of a device under investigation should be carried out under the regime of repeatable or unrepeatable tests, the Judicial Authority has frequently chosen the most prudent solution, that of the unrepeatable tests. The question to be addressed is what, among the countless computer data that can be extrapolated from a phone, can validly constitute digital evidence. The results achieved are an additional element in favor of an orientation that tends to restrict to the data managed directly by the user the context in which to search for valid digital evidence, and in relation to which the repeatability of the related acquisition process must be satisfied. It remains to clarify the nature of those files whose hash code changed between acquisitions.
In this regard, we have identified files that present different hash codes across two physical acquisitions, and it is precisely because of these files that such operations do not constitute perfectly repeatable acts. Regrettably, with regard to the Apple world, we cannot provide detailed reflections, as these devices do not allow a physical acquisition, even with the tools provided to the judicial police. In the Android world, however, the system files with hash variations can essentially be grouped into the following types: database files with a .db extension; text files, mostly with the .xml extension; files of various kinds, such as images and texts reproducing e-mails, created at the moment they were extrapolated from the compressed file produced by UFED downstream of the acquisition, which is why, while exhibiting the same content, they differed from one extraction to the other; and log files, which contain records of system events that occurred on the device. It would be quite unusual for technical tests of an unrepeatable nature to be carried out on the latter type of file. It is true that the contents of a log file differ depending on the moment in which they are analyzed (hence the unrepeatability of the inspection), but if the examination of such a file is considered essential for investigative purposes, then it should be carried out following the path of urgent inspections under Article 354 of the CPP, and not that of the unrepeatable technical inspections under Article 360 of the CPP, since in these circumstances there is a need to secure a source of evidence that would otherwise be lost. Finally, it should not be forgotten that the national legal scenario seems to be in line with our considerations, as the Legislator appears to intend to assess the repeatability of the inspection in relation to the “computer data” and not, as in the past, to the entire “computer support” (the storage medium as a whole).
Development of educational videos about bathing in bed newborns admitted to a neonatal unit
d8b9229c-b50d-41e1-98e3-43e958dde659
10642007
Patient Education as Topic[mh]
The way daily hygiene care is performed for newborns (NB) hospitalized in neonatal units depends on gestational age (GA), weight, clinical condition and the presence of devices. Ocular, oral and intimate hygiene are part of this care. For other parts of the body, an interval of 96 hours or more is considered, in order to avoid skin infections, temperature variation and stress ( , ) . An immersion bath is indicated for NB who have a GA over 36 weeks, clinical stability and no devices such as peripheral or central venous access, drains, oral or nasotracheal tubes and non-invasive ventilation ( ) . When NB do not fit these criteria, the bath should be performed in the bed ( ) . This procedure differs among hospital institutions, mainly with regard to intervals and the necessary care ( ) , such as winding (swaddling), handling in pairs, a heat source turned on, adequate room temperature and shorter execution time ( ) . Bearing in mind the importance of paying greater attention to NB’s care needs, especially in neonatal units and at times of excessive handling such as bathing, and of basing nursing care on robust scientific findings capable of directing the best practices that promote the safety of this clientele, it becomes necessary to adopt attractive training strategies aimed at knowledge acquisition and behavioral change in students and nursing professionals ( ) . Despite the lack of irrefutable scientific evidence regarding the effectiveness of pedagogical videos for teaching and learning in nursing, especially in the context of neonatal care, this technology has been indicated for the development of clinical skills and for increasing self-confidence and self-efficacy, given its contemporaneity, practicality and ability to motivate learners ( ) . It is also believed that educational videos can improve the understanding of the care provided by nursing and align the assistance provided in this context, from academic training to work practice, with access now facilitated via cell phone, which allows the deliberate visualization of procedures before a simulation or real care practice ( , ) . Videos can also help minimize situations that compromise safety, such as exposure to cold, stress and excessive handling, and support the preservation of devices ( ) . Given the importance of the theme and the absence of studies producing audiovisual learning objects based on the best evidence for bathing NB in bed in an intensive care unit, the following questions arose: Are educational videos about bathing NB in bed valid in terms of content and appearance for use as an educational health technology for nursing team professionals and nursing students? Is the inclusion of Brazilian Sign Language (LIBRAS) in educational videos in line with current regulations? The objective was to develop and analyze evidence of content validity of educational videos about bathing NB in bed in a neonatal unit.

Analysis of results, and statistics

Data were stored in a database in Excel® format extracted from Google Forms. They were then imported into the Statistical Package for the Social Sciences (SPSS) program, version 21.0, and submitted to descriptive statistics for frequency and percentage analysis, position measurements (mean and median) and variability (standard deviation).
Agreement among judges was analyzed using the Content Validity Index (CVI), considering the weights “totally agree” and “agree”, grouped as agreement, and “totally disagree” and “disagree”, grouped as disagreement. The formula used in the calculation was CVI = agreement/total responses, with items with agreement above 0.80 considered valid ( ) . Script, storyboard and edited video reliability was analyzed with Cronbach’s alpha, which checks the internal consistency of a single multi-item construct. Values above 0.80 were considered highly reliable ( ) . In the analysis of the nursing students’ assessment, in addition to descriptive statistics, the Wilcoxon test was applied, with a 95% confidence interval for the proportion of maximum scores (equal to 5) computed using a binomial distribution. “Strongly disagree” was the minimum score (1 point), and “strongly agree” was the highest score (5 points) ( ) . Suggestions for adjustments were incorporated and the instrument was forwarded to the PhD nurses for further analysis, following the precepts of the Delphi technique.

Ethical aspects

The study was conducted in accordance with national and international ethics guidelines and was approved by the Research Ethics Committee of the Universidade Federal do Triângulo Mineiro, whose opinion is attached to this submission. The online Informed Consent Form was obtained from all individuals involved in the study.

Study design, period, and place

This is applied research involving methodological production and technological validity assessment, guided by the SQUIRE framework for quality improvement studies of the EQUATOR network, developed from December 2020 to February 2022 in three phases, pre-production, production and post-production ( ) , visualized in below.

Population or sample; inclusion and exclusion criteria

Participants in the script and storyboard validity assessment were 16 PhD nurses, selected according to the inclusion criteria proposed by Fehring (1987) ( ) , adapted and verified through the Brazilian National Council for Scientific and Technological Development ( Conselho Nacional de Desenvolvimento Científico e Tecnológico ) platform, considering: master’s degree in nursing (4 points); master’s degree in nursing with a dissertation in the area of interest of the study (1 point); doctoral thesis in the study area (2 points); clinical experience of at least one year in the area of interest (1 point); certificate of clinical practice (specialization) in the area of interest of the study (2 points); publication relevant to the area of interest (2 points); and publication of an article on the subject in a reference journal (2 points). To be selected, nurses had to hold a PhD and obtain a minimum of five points. The 16 PhD nurses from the first stage, three specialists in the area of social communication, 43 nursing team members and 23 nursing students participated as evaluators in the validity assessment of the edited video. To be selected as specialists in social communication, professionals had to have a degree in social communication, experience with technical support, programming or networking, and experience with video editing. To be selected, nursing team members had to have been working in the maternal and child area for more than five years. To be selected as evaluators, nursing students had to be enrolled in a higher education nursing course and have taken courses with content on pediatric nursing, gynecology and obstetrics nursing, and women’s, adolescent and child health nursing.
Selection and recruitment were carried out using the snowball technique, in which participants successively nominate further participants ( ) . LIBRAS validity was assessed by three specialists in LIBRAS, selected and recruited through the snowball technique according to the following criteria: acting as a LIBRAS professor or working as a LIBRAS interpreter for more than two years.

Study protocol

For a better description and understanding of the development process of the proposed pedagogical tools: in the first phase, pre-production, an integrative literature review was first carried out to list the stages of the NB bed bath to be included in the script, drawing on the following sources of information: Medical Literature Analysis and Retrieval System Online (MEDLINE), through the US National Library of Medicine National Institutes of Health (PubMed) search engine; Latin American and Caribbean Literature in Health Sciences (LILACS), through the Virtual Health Library (VHL); Cumulative Index to Nursing and Allied Health Literature (CINAHL); and Web of Science. Descriptors in Health Sciences (DeCS) and Medical Subject Headings (MeSH) (baths, Infant, Newborn, Infant, Premature, Intensive Care Units, Neonatal) were used, associated by the Boolean AND operator, with their respective synonyms combined by the OR operator. The strategy was standardized in MEDLINE/PubMed and reproduced in the other data sources according to the specific criteria of each one: “Infant, Newborn”[Mesh] OR (Infants, Newborn) OR (Newborn Infant) OR (Newborn Infants) OR (Newborns) OR (Newborn) OR (Neonate) OR (Neonates)) AND (“Intensive Care Units, Neonatal”[Mesh] OR (Newborn Intensive Care Unit) OR (Neonatal Intensive Care Unit) OR (Newborn Intensive Care Units (NICU)) OR (Neonatal ICU Newborn ICU) OR (ICU, Newborn) OR (ICUs, Newborn) OR (Newborn ICUs) OR (Newborn Intensive Care Units) OR (Neonatal Intensive Care Units) OR (ICU, Neonatal) OR (ICUs, Neonatal) OR (Neonatal ICUs). A total of 15 studies published between 2015 and 2022 that addressed care in bathing NB in bed were included. Data were exported to an Excel® spreadsheet and refined for the construction of the script scenes/steps’ content. The scripts of the two videos, constructed from the bibliographic survey, comprised six scenes presenting the technique of bathing the NB in bed in a neonatal unit, namely: 1 - Approaching parents and/or family members; 2 - Environment and material organization; 3 - Preparing newborns; 4 - Bath; 5 - After-bath care; and 6 - Environment and nursing note organization. The storyboard for the recording described, through freehand drawings made by the main researcher, the filming plans, while the editing plan covered the texts, formatting, narration and background music. After the first versions of the script and storyboard were prepared, the content validity process with specialists began, followed by the second phase, video recording: production. Participants were contacted via e-mail and, at each stage of the validity process, an instrument was developed in HyperText Markup Language (HTML) on Google Forms and filled out via the web, in three parts: the participant’s personal and professional identification; the script, storyboard or edited video; and a general analysis based on the aforementioned instruments. The response option for the items was a Likert scale with five weights (“totally agree” and “agree”, grouped as agreement, and “totally disagree” and “disagree”, grouped as disagreement).
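To illustrate the two agreement statistics described earlier, here is a minimal Python sketch assuming hypothetical Likert ratings coded 1-5 (the numbers are invented for the example, not taken from the study). It computes the CVI as the proportion of "agree"/"totally agree" responses (scores of 4 or 5) and Cronbach's alpha from a judges-by-items matrix.

```python
import numpy as np

def content_validity_index(ratings: list[int]) -> float:
    """CVI for one item: share of judges rating 4 ('agree') or 5 ('totally agree')."""
    ratings = np.asarray(ratings)
    return float((ratings >= 4).mean())

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a judges-by-items score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_variances.sum() / total_variance)

if __name__ == "__main__":
    # Hypothetical ratings from 16 judges for one script item (1-5 Likert scale).
    item = [5, 4, 5, 5, 4, 5, 4, 5, 5, 5, 4, 5, 5, 4, 5, 5]
    print(f"CVI = {content_validity_index(item):.2f}")  # valid if above 0.80

    # Hypothetical 5-judge x 4-item matrix for the reliability check.
    matrix = np.array([[5, 4, 5, 5],
                       [4, 4, 5, 4],
                       [5, 5, 5, 5],
                       [4, 5, 4, 5],
                       [5, 5, 5, 4]])
    print(f"Cronbach's alpha = {cronbach_alpha(matrix):.2f}")  # high if above 0.80
```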
After recording, LIBRAS was included by an interpreter hired by the researchers, following the mandatory elements, in accordance with the ABNT NBR 15290 norms ( ) . The scenario in the videos simulated Neonatal Intensive Care Unit (NICU) beds: one with an incubator and another with a heated crib, each including a bedside table, a multiparameter monitor and an armchair. Before the official recording, rehearsals were carried out to go over the script’s and storyboard’s content with those involved and to verify the equipment and actor positioning. Adjustments were made to achieve good quality in the technique, in addition to features such as 4K resolution and shot variations (medium, pin and close) for the same scene. The audio was recorded by one of the researchers in a studio with acoustic insulation. The recording equipment consisted of two Sony A6500 cameras with 35 mm, 70-200 mm and 16 mm lenses, a video tripod, an LED light and an H6 Zoom recorder with a lapel microphone. After recording, the scenes were edited in the post-production phase. The editing program was Final Cut Pro X, and the intro animation and moving texts were created in Adobe After Effects®. Recording and editing were conducted by the researchers together with professionals experienced in neonatology and audiovisual technicians. The soundtrack accompanying the narration was the instrumental “Carefree” by public domain artist Kevin MacLeod. The edited video was then subjected to validity and appearance assessment by specialists and nursing students. Two videos were developed representing a bed bath in the neonatal unit, “Good practices: bathing newborns in the heated crib” ( ) and “Good practices: bathing newborns in the incubator” ( ) , lasting seven minutes each. The videos covered nursing interventions and specific care from preparation to bathing, such as donning, checking the water temperature with a thermometer, proper handling of the NB, winding, handling in pairs and encouraging the participation of a family member: the mother. The video “Good practices: bathing newborns in the heated crib” hypothetically presents a preterm NB of 35 weeks’ GA, male, 11 days old, current weight 2,100 g, using an orogastric tube and a peripherally inserted central catheter in the upper right limb. The video “Good practices: bathing newborns in the incubator” presents a preterm NB of 31 weeks’ GA, female, 20 days old, current weight 1,400 g, using an orogastric tube, both with an indication of bed bathing by GA. Emphasis was placed on protecting the venous device with transparent plastic film and on encouraging the post-bath kangaroo position for better thermal regulation. The final version of the videos after editing included an introduction with the title, institution and funding body logos, nine nursing interventions, the six scenes and the credits ( ). In terms of validity, among the 16 PhD nurse judges, 15 (93.7%) were female and one (6.3%) was male. In addition, 11 (68.7%) were from the state of Minas Gerais, two (12.4%) from São Paulo, one (6.3%) from Sergipe, one (6.3%) from Maranhão and one (6.3%) from Santa Catarina. Of these, ten (63.4%) worked in an undergraduate nursing course, three (18%) in Maternal and Child Units, two (12.4%) in teaching and research at a teaching hospital and one (6.2%) in a technical nursing course. Training time ranged from six to 35 years, with an average of 16.3 years. The three specialists in social communication were male and from the state of Minas Gerais.
Of the 43 nursing team members, all were female and from the state of Minas Gerais; 33 (76.7%) were nurses and ten (23.3%) were nursing technicians. All worked in Maternal and Child Units, 30 (69.7%) in rooming-in and 13 (30.3%) in intensive care. Working time ranged from five to 20 years, with an average of 11.4 years. The three specialists in LIBRAS were female and from the state of Minas Gerais. Two (66.6%) worked with video recording and editing in an educational institution, and one (33.4%) was a professor with a PhD. In the video assessment, carried out with 23 nursing students, 22 (95.7%) were female, with an average age of 24 years (minimum 22, maximum 31). All were studying nursing at a public institution; of these, 16 were in the tenth period, four in the ninth, one in the eighth and two in the seventh. In the first round of script and storyboard content and appearance validity assessment by the PhD nurses, the indices were above 0.93, supporting the inclusion of all items for agreement and reliability, with minor reformulations according to the suggestions. These were: including parents in the scene for humanized assistance; completing donning; not throwing water directly onto the NB’s skin, because it is a strong stimulus; and changing the bed after bathing the NB when skin-to-skin contact is not possible. The suggestions were accepted and, after the modifications, the script and storyboard were forwarded to the PhD nurses for a second round, with no new suggestions. The assessed items indicated that the script and storyboard presented coherent objectives, content clear enough for understanding the theme and relevant to care practice, and an environment and language appropriate to the context and target audience. The CVI and Cronbach’s alpha of the first and second rounds of this stage are presented in . The indices of the first round of video appearance validity assessment by the PhD nurses were above 0.98, attesting to the agreement and reliability of the included scenes. Relevant suggestions were incorporated to improve audience understanding, such as adjusting the on-screen text time to facilitate reading, and inserting the list of necessary materials and the sentence “change gloves to perform oral hygiene”. After editing, the video was forwarded to the PhD nurses for a second round. There were no new suggestions, and the second version was sent to the social communication specialists, specialist nurses and nursing students, who raised no further considerations. The assessment indicated that the video constitutes a teaching-learning tool about the NB bed bath that is easy to use, with a duration adequate for the number of scenes, good lighting and clear narration. The CVI and Cronbach’s alpha of the rounds of this stage are shown in . The indices of the LIBRAS validity items were above the recommended level (CVI=1.00), indicating that the language was adequate to the content and to the recommended norms. The items assessed by the three specialists in LIBRAS were: LIBRAS agrees with the audio narration; the interpreter is properly positioned on the screen; the interpreter window is well positioned and in focus; and it is possible to identify all the interpreter’s movements and gestures. After the validity stage with specialists, the edited video was assessed by the 23 nursing students using the same criteria in a single round. All domains had a mean score of 4.86±0.45, with a minimum of 3 and a maximum of 5 points, standard error of 0.95, median of 5.00 and p<0.001, indicating that the video was well assessed by the students.
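As a companion to the student-assessment statistics reported above, the following sketch uses SciPy to compute an exact 95% binomial confidence interval for the proportion of maximum (5-point) scores, as described in the analysis plan. The counts are hypothetical, since the paper reports only summary statistics.

```python
from scipy.stats import binomtest

# Hypothetical example: 20 of 23 students gave an item the maximum score of 5.
result = binomtest(k=20, n=23)
ci = result.proportion_ci(confidence_level=0.95, method="exact")
print(f"Proportion of maximum scores: {20 / 23:.2f}")
print(f"95% CI: [{ci.low:.2f}, {ci.high:.2f}]")
```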
Knowing that routine bathing in a neonatal care unit can cause changes in the NB’s clinical stability due to excessive manipulation, exposure to low temperatures, variation in oxygen saturation and other factors capable of negatively interfering with the safety of these highly vulnerable patients, it is important to invest in innovative teaching objects and tools that make nursing learning in this scenario attractive ( , ) . In the video produced, the bath is performed by two people with cotton soaked in warm water and with the participation of the mother. For a premature baby, touch can be a harmful stimulus, so water should not be poured directly onto the NB’s skin ( ) . Handling in pairs provides better postural organization for the NB because, while one professional performs the care, the other maintains the alignment and containment of the arms and legs, reducing stress due to agitation, excessive handling and procedure time. In this context, specific care was included in the video, such as a firm touch with the hands held still on the NB’s chest when lowering the level of the incubator or heated crib, and changing the diaper with the NB on its side ( ) . The winding of the NB shown in the video has a positive effect on the vital parameters mentioned, on crying time and on the level of stress, pain and agitation, and is indicated in any type of bath ( ) . In the daily life of a NICU, the family members present must be included in NB care to build a bond whenever possible ( , ) . Physical contact with the NB through touch and the kangaroo position, and being able to be present and participate in care, even just by observing, provide feelings of closeness and trust and strengthen the bond among family members. Involvement in care contributes to the development of parenting during hospitalization, breaking down the barriers generated by the NICU’s complex and technological environment. Thus, nurses should encourage the presence and participation of family members in care ( ) . A study that developed and assessed an educational video on the relief of acute pain in babies pointed out that the presence of the family contributes to NB development and to the construction of bonds and affection among members, and that using videos before procedures can bring those involved closer to the content ( ) . The video developed in this study contemplates these precautions and visually demonstrates how to perform them. This tool facilitates the teaching-learning process and contributes to the acquisition of knowledge, as long as the methodological path of its elaboration is respected, namely a survey of the best scientific evidence on a given care practice and a validity process with specialists, which allows assessing whether the content is adequate for what is intended. In this regard, nursing has engaged in the production and validity assessment of educational videos on various topics to be applied in continuing education, training and professional qualification actions ( ) . To engage viewers, a video must be dynamic, have attractive images consistent with reality and be short. Realism provokes feelings and emotions and brings the video closer to everyday practice, and the step-by-step details contribute to skill development ( ) . Scene duration and dynamics influence viewers’ interest; videos longer than eight to 12 minutes disperse attention ( ) . The videos developed in this study therefore fit this recommendation.
In the videos developed in this study, the scenes were filmed in several planes to attract attention, and the scenes viewed from above convey the feeling of bathing, bringing viewers closer to the scene. A study that assessed videos for clinical teaching pointed out that quality influences viewers’ understanding ( ) . Incorporating videos into the teaching practices of students and professionals makes it possible to explore various low- and high-complexity topics and disseminate them on free online platforms that maximize reach without territorial limits ( ) . The inclusion of LIBRAS is a distinctive resource of the video, providing accessibility for teaching deaf people (a limitation cited by other studies), and it may improve communication between professionals and deaf parents/relatives about this procedure. It is difficult to find health professionals and professors trained to work with this public in teaching and care practice, creating barriers in the transmission of knowledge, which is why audiovisual technologies with translation can help ( ) .

Study limitations

The limitation of this study can be attributed to the representation of a specific culture of technique and humanization in care regarding bathing in a neonatal unit, which may differ in other countries. Moreover, the material and human resources presented in the videos may be restricted in some units.

Contributions to nursing and health

The videos developed are presented as contemporary learning tools, subsidizing neonatal safety through standardization of the technique, including specific care to prevent hypothermia, reduce stress and excessive handling, and encourage the presence and participation of parents. In nursing, they can mediate students’ and professionals’ teaching-learning process. As they are attractive and dynamic, they arouse the interest of those who watch them, favoring learning. We highlight the inclusion of LIBRAS to expand content accessibility. Also, by presenting the material and human resources necessary for a safe, scientifically based bath, the videos can help managers in planning the resources available and needed to actually provide assistance based on scientific evidence.

This study allowed the development and validity assessment of educational videos about the NB bed bath in neonatal units by PhD nurses, specialists in social communication, nursing staff and nursing students. Two videos were developed that can contribute to academic training, permanent education and professional training in the field of neonatology. The scenes were divided into six care steps, LIBRAS was included, and all assessed items had CVI and Cronbach’s alpha above 0.90. It is recommended that future research assess the effectiveness and applicability of the videos during these training activities. The videos can be watched online and contribute to health education processes about bathing NB in a neonatal unit, and the development path described can provide subsidies for the creation of new videos. Therefore, the videos’ ability to translate scientific evidence into practice in an applicable, playful and accessible way can create means for the translation of scientific knowledge with potential for real improvement of nursing care practice.
A comparative analysis of soil physicochemical properties and microbial community structure among four shelterbelt species in the northeast China plain
34f17fd4-29dc-45a4-afb2-529760271867
10986494
Microbiology[mh]
Mollisols are the most fertile soils in the world, and their productivity is much higher than that of loess and red soils . The Mollisol region located in the northeastern plains of China is one of the four Mollisol regions in the world, and it is also an essential commercial grain base for China. However, as human demand on nature far exceeds the protection of the environment, the problem of soil degradation is becoming increasingly severe, seriously threatening the region’s agricultural production and ecological environment . To protect food and ecological security, China has issued a series of policy documents providing solid legal measures for Mollisol conservation, including the Mollisol conservation and utilization tasks proposed during the 14th Five-Year Plan and the “Law of the People’s Republic of China on Mollisols Conservation” published in 2022 . Globally, there are two mainstream approaches to soil conservation and utilization, i.e., conservation tillage and fallow crop rotation systems . Among them, the farmland shelterbelt system (FSS), a form of conservation tillage, is widely used in China. The establishment of an FSS can improve soil properties and increase species diversity in the area, thus effectively restoring the region’s ecological quality . The structural characteristics of shelterbelts are mainly determined by the tree species but change as the vegetation grows. Poplar is the dominant protective forest species in northeast China due to its fast growth, and several studies have reported that poplar significantly improves soil and water conservation and ecosystems in agricultural fields . However, with the development of poplar monoculture, soil problems such as sloughing and acidification have occurred ; thus, studying the effect of other protective forests on soil improvement is urgent. In the context of China’s Three North Protective Forest System construction project, 36 major afforestation species were released for the northeast region, including Juglans mandshurica (Jm), Fraxinus mandschurica (Fm), Acer mono (Am), and Betula platyphylla (Bp), examined in the current study. Broadleaf forests have been reported to have higher soil quality than coniferous and mixed coniferous-broadleaf forests, with Jm possessing higher soil fertility . In addition, Bp has also been shown to affect soil nutrients and enzyme activities positively . Interestingly, compared to Jm and Am, Fm is more favorable to soil properties due to its high-quality fine roots . Therefore, studying the characteristics of the four shelterbelt soils under the same field conditions is crucial for FSS in the northeast China plain. With the development of sequencing technologies, the driving role of the microbiome in soil function has been gradually recognized, and research approaches have shifted from characterizing material cycles to examining microbiota responses . For example, microbes can respond to soil degradation by regulating the abundance of taxa involved in the carbon cycle , and mixed microbial communities can strongly degrade organic compounds in contaminated soil . In addition, many nitrogen-fixing microorganisms and mycorrhizal fungi promote plant growth by increasing the nutrient uptake capacity of the root system . Vegetation type has been found to play an essential role in soil microbial community assembly at small scales , and the resulting community structure has a lasting impact on subsequent farming .
Differences in plant apoplastic nutrients and root secretions lead to different microbial communities, shaped by the energy provided by plants . Therefore, studying the response of microbial communities to different tree species helps us understand soil microecology. However, microbial communities are highly sensitive and intensely dynamic biological assemblages influenced by geo-environmental factors such as temperature, moisture, and elevation . In the experimental design of this study, four tree species were planted in the same farmland at the same time, effectively eliminating the influence of geography, planting density, forest age, and other factors; such controlled experiments under field conditions are rare. Accordingly, this study selected four species of shelterbelts planted for 3 years in the northeastern Mollisols area to study the composition of and differences among soil microorganisms using sequencing. Meanwhile, soil chemical properties were determined to elucidate the effects of environmental factors on microbial diversity. Finally, enzyme activities were used to characterize the ameliorating effect of the tree species on Mollisols. In light of the preceding discussion, we developed three hypotheses arising from differences among the tree species: (i) the soil nutrient content varies; (ii) the composition of the soil core microbiome diverges; and (iii) the predominant taxa perform different functions, resulting in differing impacts on soil improvement. This study enriches the study of soil microorganisms in the Mollisols area under controlled conditions and provides technical guidance for selecting tree species for constructing farmland shelterbelts in northeast China.

Site description and soil sampling

The research site, situated at Keshan Farm in the Songnen Plain (48°17′N, 125°22′E, 156 m above sea level), exhibits a topography marked by hills and scattered land, a cold-temperate subhumid monsoon climate, and the presence of typical Mollisols in the cultivated layer. The average annual precipitation and temperature recorded at this location are approximately 500 mm and 1.9°C, respectively. In April 2019, four sample plots measuring 25 m × 100 m were established on a one-hectare cropland area. Each plot was planted with a single species (Jm, Fm, Am, or Bp) at a spacing of 3 m. Subsequently, in October 2022, each sample plot was randomly divided into three smaller sample squares measuring 5 m × 5 m. A five-point sampling method was employed in each sample plot to collect the top layer of bulk soil (0–10 cm) after removing surface litter and debris. A subset of the samples was then sieved and immediately frozen for DNA extraction, while the remaining soil samples were air-dried and analyzed for chemical properties and enzymatic activity. In addition, tree height and diameter at breast height (DBH) were measured as indicators of plant growth.

Determination of soil physicochemical and enzymatic activity

Soil bulk density was determined using the ring knife method. Soil pH was measured in a 1:2.5 soil-to-water suspension using a pH meter. Total carbon (TC) was determined with a TOC analyzer (Model multi N/C 2100S, Analytik Jena). Soil organic matter (SOM) was measured using the potassium dichromate external-heating method. Total nitrogen (TN) was determined using the Kjeldahl method, and available nitrogen (AN) was determined using the alkaline-dissolution diffusion method.
Total phosphorus (TP) was measured using the concentrated sulfuric acid and perchloric acid digestion-molybdenum antimony colorimetry method. Available phosphorus (AP) was measured using the hydrochloric acid and ammonium fluoride extraction-molybdenum antimony colorimetry method. Total potassium (TK) was determined using the concentrated sulfuric acid and perchloric acid digestion-flame photometric method. Available potassium (AK) was determined by the ammonium acetate leaching-flame photometric method. Additionally, five soil enzymes associated with organic carbon, nitrogen, and phosphorus were quantified using the Soil-Urease kit, Soil Acid Phosphatase kit, Soil Dehydrogenase kit, Soil-Cellulase (S-CL) kit, and Soil-Sucrase kit. The kits were procured from Nanjing Jiancheng Bioengineering Institute in China. Enzyme activity was assessed at wavelengths of 578, 405, 540, 540, and 485 nm using a microplate reader (SpectraMax iD3, Molecular Devices).

DNA extraction, high-throughput sequencing, and data processing

Genomic DNA extraction was performed utilizing a commercially available kit (ALFA-SEQ Advanced Soil DNA) following the manufacturer’s instructions. DNA concentration and purity were determined using Qubit 3.0 and Nanodrop One instruments (Thermo Fisher Scientific, Waltham, USA). The V3–V4 region of the bacterial 16S rRNA gene and the ITS1 region of the fungal ITS gene were amplified using the TaKaRa Premix Taq Version 2.0 kit (TaKaRa Biotechnology Co., Dalian, China) with genomic DNA as the template and the specific primer pairs 338F-806R and ITS1F-2043R, respectively. The length and concentration of the PCR products were checked by 1% agarose gel electrophoresis. Samples displaying a prominent band within the targeted regions were selected for further experimentation. The PCR products were combined in equal density ratios based on GeneTools Analysis Software (Version 4.0, SynGene). Next, the PCR product mixture was purified using the EZNA Gel Extraction Kit (Omega, USA). Sequencing libraries were generated with the NEBNext Ultra II DNA Library Prep Kit for Illumina (New England Biolabs, MA, USA), following the manufacturer’s guidelines and incorporating index codes. The library’s quality was evaluated using the Qubit@ 2.0 Fluorometer (Thermo Fisher Scientific, MA, USA). Finally, the library was sequenced on an Illumina Nova6000 platform, generating 250 bp paired-end reads (Guangdong Magigene Biotechnology Co., Ltd., Guangzhou, China). Quality control of the raw reads was performed using Fastp (version 0.14.1) with a sliding window (-W 4 -M 20). Primer removal was carried out with Cutadapt, taking into account the primer information located at the beginning and end of the sequences; this process yielded the paired-end clean reads. Considering the overlapping relationship between the paired-end reads, the clean reads were merged using usearch -fastq_mergepairs (v10): when there was an overlap of at least 16 bp, the reads generated from opposite ends of the same DNA fragment were merged, with a maximum of 5 bp of mismatches allowed in the overlap region, and the spliced sequences were called raw tags. The raw tags were again passed through Fastp quality control to obtain the paired-end clean tags . The raw sequencing data generated in this study have been submitted to the NCBI SRA database ( https://www.ncbi.nlm.nih.gov/bioproject/ ) under accession number PRJNA1047323 .
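For illustration, the read-processing steps just described can be scripted. The following minimal Python sketch wraps the same command-line tools (fastp, cutadapt, usearch) via subprocess; the file names are placeholders, the 338F/806R primer sequences shown are the commonly used ones (an assumption, not taken from the paper), and the exact fastp cutting flag may vary slightly across versions.

```python
import subprocess

# Placeholder file names; a real run would loop over all samples.
R1, R2 = "sample_R1.fastq", "sample_R2.fastq"

# Commonly used 338F/806R primer sequences (assumed, not stated in the paper).
FWD, REV = "ACTCCTACGGGAGGCAGCAG", "GGACTACHVGGGTWTCTAAT"

def run(cmd: list[str]) -> None:
    """Print and execute one pipeline command, stopping on any error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Sliding-window quality trimming with fastp (-W 4 -M 20, as in the paper).
#    A cutting mode must be enabled for -W/-M to apply; --cut_right is one option.
run(["fastp", "-i", R1, "-I", R2,
     "-o", "qc_R1.fastq", "-O", "qc_R2.fastq",
     "--cut_right", "-W", "4", "-M", "20"])

# 2. Primer removal with cutadapt (-g/-G trim the 5' primers from R1/R2).
run(["cutadapt", "-g", FWD, "-G", REV,
     "-o", "clean_R1.fastq", "-p", "clean_R2.fastq",
     "qc_R1.fastq", "qc_R2.fastq"])

# 3. Merge read pairs with usearch (>=16 bp overlap, <=5 mismatches, as described).
run(["usearch", "-fastq_mergepairs", "clean_R1.fastq",
     "-reverse", "clean_R2.fastq",
     "-fastq_minovlen", "16", "-fastq_maxdiffs", "5",
     "-fastqout", "raw_tags.fastq"])
```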
We employed UPARSE, a clustering method with a 97% identity threshold, and DADA2, a denoising method, to process the 16S rRNA and ITS gene amplicon data sets . This allowed us to generate operational taxonomic units (OTUs) and amplicon sequence variants (ASVs) for each data set. Taxonomic information for each representative sequence was annotated against the Silva database (for 16S rRNA) and the Unite database (for ITS) using the usearch -sintax tool with the default confidence threshold (≥0.8). The species annotation taxonomy was categorized into seven levels: kingdom (L1), phylum (L2), class (L3), order (L4), family (L5), genus (L6), and species (L7). During clustering, usearch was employed to eliminate chimeric sequences and singleton OTUs simultaneously. Furthermore, OTUs annotated as chloroplasts or mitochondria (16S rRNA amplicons) and those that could not be assigned at the kingdom level were excluded. Subsequently, the OTU and ASV taxonomy synthesis information table was obtained for downstream analysis.

Statistical analysis

Microbial diversity was assessed using the Simpson index and nonmetric multidimensional scaling (NMDS) in conjunction with the ANOSIM test (Bray–Curtis) . The relative abundances at the phylum and genus levels were employed to depict the composition of the microbial community. To examine differences in biomarker abundance between multiple groups, linear discriminant analysis effect size (LEfSe) was employed with a significance threshold of LDA > 3, using the Kruskal‒Wallis rank sum test . Additionally, redundancy analysis (RDA) was conducted to elucidate the impact of soil physicochemical properties on the microbial community , with significance determined through Monte Carlo permutation (permu = 999); this approach further explored the interrelationship between environmental factors and microbial taxa. Spearman correlation coefficients between environmental factors and species abundance were computed and visually represented in a heat map . All statistical analyses were performed using SPSS version 22.0, and the significance of differences was analyzed by one-way ANOVA followed by LSD post hoc multiple comparison tests ( P < 0.05).
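As a concrete illustration of the diversity and ordination workflow just described, a minimal sketch in Python (using scikit-bio and scikit-learn in place of the original SPSS and R tooling; the count table and group labels are fabricated) might look like this:

```python
import numpy as np
from skbio.diversity import beta_diversity
from skbio.diversity.alpha import simpson
from skbio.stats.distance import anosim
from sklearn.manifold import MDS

# Hypothetical OTU table: 12 samples (3 per shelterbelt) x 200 OTUs.
rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(12, 200))
ids = [f"S{i}" for i in range(12)]
groups = ["Jm"] * 3 + ["Fm"] * 3 + ["Am"] * 3 + ["Bp"] * 3

# Alpha diversity: Simpson index for each sample.
alpha = [simpson(row) for row in counts]

# Beta diversity: Bray-Curtis dissimilarities between samples.
dm = beta_diversity("braycurtis", counts, ids)

# ANOSIM (999 permutations) tests whether community composition
# differs among the four shelterbelt groups.
print(anosim(dm, grouping=groups, permutations=999))

# NMDS ordination of the precomputed dissimilarity matrix.
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0)
coords = nmds.fit_transform(dm.data)
print("NMDS stress:", round(nmds.stress_, 3))
```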
Correlation analysis of microbial and environmental factors

After characterizing environmental factors and microbial community structure, we used RDA and correlation analyses to uncover the intrinsic links between the two. RDA showed that environmental factors were significantly associated with differences in community structure; the first and second ordination axes explained 80.9% and 10.5%, respectively, of the variation in bacterial community structure . The bacterial communities of Fm and Bp clustered together along the first RDA axis, while those of Fm and Am clustered along the second. pH, TK, and AK had a stronger effect on bacterial community differences than the other environmental factors. Environmental factors explained 64.7% of the cumulative structural variation of fungal communities across the two axes, markedly less than for bacteria, and the four groups of samples were also more dispersed in the ordination than the bacterial ones. In addition to TK and AK, TC was the main environmental factor influencing the differentiation of fungal communities .

The correlation results showed that environmental factors were more strongly associated with bacterial genera than with fungal ones. Specifically, K showed highly significant positive correlations with Bradyrhizobium, Gemmatimonas, and Sphingomonas (P < 0.01) and highly significant negative correlations with Candidatus_Udaeobacter and RB41 (P < 0.01), while pH showed a pattern of correlation with these five bacterial genera opposite to that of K. In addition, Acidothermus showed a significant positive correlation with environmental factors, and Massilia a significant negative one. Interestingly, only a limited number of fungi exhibited substantial correlations with environmental factors. K displayed significant negative correlations with the fungal genera Ramophialophora, Plectosphaerella, Cadophora, and Fusarium (P < 0.05). Conversely, Plectosphaerella, Cadophora, and Fusarium demonstrated significant positive correlations with pH, TC, SOM, and TN (P < 0.05) . Ultimately, it can be concluded that alterations in soil chemical properties had a more significant impact on the bacterial community than on the fungal one.

Shelterbelt and soil biogeochemical parameters

The four shelterbelts in this study were planted on the same cultivated area (48°17′N, 125°22′E), effectively controlling for differences in soil properties brought about by geographical location and environmental factors, which prompted us to focus on the effects of stand differences on the soil . The results for DBH and tree height showed that Jm had the highest stand volume, whereas Am possessed the lowest stand volume at the same stand age, and there were significant differences among the four stands (P < 0.05). The physicochemical properties of the samples collected from the different shelterbelts also differed significantly (P < 0.05). Regarding physical structure, there were no significant differences among the groups, apart from the slightly lower bulk density of Bp (1.27 g/cm³). Soil pH, another important indicator, was slightly acidic in all plots (6.07–6.39). Regarding soil chemical properties, nutrient contents in Fm were significantly higher than in the other groups, except for the K-related measures. The Jm group showed the best above-ground growth and had the highest K content in the understory soil (TK = 1.97 g/kg, AK = 26.49 mg/kg). In addition, Bp, which had the weakest above-ground growth, had the lowest soil N, P, and K contents, as well as a significantly lower C content (TC = 44.29 g/kg, SOM = 52 g/kg) than the other groups. This study further analyzed the soil's biological activity and nutrient transformation by measuring the five enzyme activities. Fm was significantly higher than the other groups in all five enzymes, and its most prominent cellulase content (523.99 µg/d/g) was about 10 times that of Bp (48.28 µg/d/g) and about 100 times that of Jm (5.28 µg/d/g) . Bp had the highest acid phosphatase and dehydrogenase activities among the four shelterbelts, at 779.25 nmol/h/g and 38.89 µg/d/g, respectively, but the lowest sucrase (100.52 mg/d/g). Jm had the lowest urease and dehydrogenase activities, at 1174.80 and 11.67 µg/d/g, respectively. The soil enzyme activity of Am, like its above-ground growth, was intermediate among the four shelterbelts, with only its acid phosphatase being the lowest (589.27 nmol/h/g) .
To summarize, Fm exhibited the most prominent soil enzyme activity and chemical nutrient levels among the shelterbelt species.

Comprehensive characterization of microbial community composition

To comprehensively characterize the soil microbiota, this study examined microbial diversity in 24 samples collected from the four soil habitats (Jm, Fm, Am, and Bp) (Table S1). After quality control, 16S rRNA and ITS gene sequencing yielded 1,385,941 and 945,934 clean tags, respectively. A total of 9,986 bacterial OTUs and 2,352 fungal ASVs were annotated. Venn diagrams calculated from OTU abundance showed that 31.96% (3,192/9,986) of bacterial OTUs and 9.52% (224/2,352) of fungal ASVs were shared among the four shelterbelt soils . Among the bacterial communities, Jm exhibited the highest number of unique OTUs (1,401), whereas Bp displayed the lowest (751). For fungi, Fm-specific ASVs were the most abundant (488), while Am-specific ASVs were the least abundant (354). Alpha diversity reflects the diversity and richness of the microbial community. The results showed that the bacterial Simpson index of Bp was significantly higher than that of the other three species , whereas for fungi there was no significant difference among the four species . NMDS analysis of microbial community structure showed that microbial composition differed significantly among the soils of the four shelterbelts (P < 0.01), with a larger difference for bacteria (R = 0.605) than for fungi (R = 0.433). Additionally, the relative abundance of microbial taxa was analyzed at both the phylum and genus levels. The dominant phyla among the bacterial communities in all soil samples were Acidobacteria (25.68%), Proteobacteria (21.67%), Actinobacteria (13.50%), Verrucomicrobia (10.86%), Bacteroidetes (8.64%), Chloroflexi (6.02%), and Gemmatimonadetes (5.60%), collectively accounting for approximately 92% of all bacterial sequences (Table S2). Other taxa with an abundance greater than 1% included Rokubacteria (2.30%), Planctomycetes (1.10%), and Nitrospirae (1.04%). Acidobacteria (29.92%), Verrucomicrobia (17.04%), Gemmatimonadetes (6.80%), and Rokubacteria (3.27%) were significantly more abundant in Bp than in the other groups, while Proteobacteria (27.35%), Actinobacteria (16.21%), and Chloroflexi (7.92%) were significantly more abundant in Jm. At the genus level, RB41 (9.95%), Candidatus_Udaeobacter (8.31%), Candidatus_Solibacter (3.32%), Bryobacter (1.52%), Nitrospira (1.04%), Gemmatimonas (1.06%), Ellin6067 (1.05%), Achromobacter (1.00%), Haliangium (1.00%), and Acidothermus (0.92%) were the top 10 bacterial taxa, collectively representing approximately 29% of all sequences . Furthermore, notable variations in the relative abundance of the top 10 genera were observed among the four shelterbelt soils (Table S3). In the fungal community, most sequences at the phylum level were identified as Ascomycota (64.43%); together with Basidiomycota (18.72%) and Mortierellomycota (13.09%), the top three taxa represented approximately 96.24% of all fungal sequences (Table S2).
Interestingly, the abundance of Ascomycota was significantly higher in Am (66.77%) and Bp (73.07%) than in Jm (55.50%) and Fm (62.37%), while the abundance of Mortierellomycota was significantly lower in Am (9.31%) and Bp (10.16%) than in Jm (16.55%) and Fm (16.35%). At the genus level, the top 10 fungal taxa were Unassigned (17.97%), Mortierella (13.09%), Schizothecium (7.50%), Plectosphaerella (7.76%), Mrakia (7.12%), Didymella (5.84%), Fusarium (5.15%), Laetinaevia (4.53%), Waitea (3.32%), and Lasiosphaeris (2.94%) , accounting for approximately 75.14% of all sequences. Notably, Schizothecium exhibited the greatest variation in abundance among the four samples, ranging from 0.96% to 22.58% (Table S3). A total of 46 bacterial biomarkers and 24 fungal biomarkers (LDA > 3, P < 0.05) were screened by LEfSe analysis from the phylum to the genus level . Among them, no bacterial biomarker was identified in Fm. Proteobacteria was the sole biomarker bacterial phylum observed in Jm, with Pseudomonas, Pseudarthrobacter, Acidibacter, Niastella, and Leptolyngbya_EcFYyyy identified as biomarkers at the genus level. Similarly, Verrucomicrobia was identified as the only biomarker bacterial phylum in Bp, with Candidatus_Udaeobacter and Waddlia as biomarkers at the genus level. In contrast, no phylum-level biomarker was identified in Am, but six genus-level biomarkers were recognized: Bryobacter, Ellin6067, Gemmatimonas, Massilia, Sphingomonas, and MND1 (Table S4). In addition, no fungal phylum-level biomarker was identified in any of the four samples. At the genus level, however, Volutella and Entoloma were identified as biomarkers for Jm; Metarhizium, Vishniacozyma, Tylospora, Phialocephala, Clitopilus, and Amanita for Fm; and Plectosphaerella, Ramophialophora, Leucoagaricus, and Mallocybe for Bp. Lastly, Am had only one fungal biomarker, Schizothecium (Table S5). In short, Jm had the most numerous and diverse biomarkers.

The growth process of shelterbelts affects the soil in many ways, including physically, chemically, and biologically. With the advancement of sequencing technology, a growing number of reports show that environmental microorganisms are central to soil function. Therefore, this study focused on elucidating the composition of, and differences among, understory microbial communities under different tree species under controlled conditions. In addition, soil nutrients and enzyme activities were determined, and these changes were linked to the microbiota by correlation analysis to uncover how different shelterbelts may mediate Mollisols recovery through their biomarker taxa.

Response of soil properties to tree species

Soil is an essential foundation of ecosystems, and above-ground vegetation types are closely related to below-ground soil fertility . Soil bulk density is a crucial indicator of soil compactness, and soil porosity is influenced by microbial activity and root growth . Bp has a fibrous root system and a rapid growth rate, and its abundant roots contribute to enhanced soil porosity . Consequently, the bulk density of Bp soil is marginally lower than that of the other forest stands.
However, given the relatively juvenile state of the shelterbelts and the constrained root development of the plantation's subterranean component, the variation in soil bulk density among the other three tree species was not statistically significant. We postulate that this disparity may become more pronounced as the plantations mature and will remain an observable phenomenon for future investigations. Plants alter soil properties mainly through root secretions and litter , while soil nutrients are the basis for maintaining the productivity of natural ecosystems and can directly affect plant growth and development . The four tree species selected in this study are all common deciduous trees in the northeast, which mitigates differences in the effects of litter on soil properties . Different tree species significantly affected soil C/N/P content, with the most pronounced changes in P . Studies have shown that phosphorus accelerates cell division and promotes faster growth of the root system and above-ground parts , and Jm showed the best growth among the four tree species. We suggest that the P content under Jm was significantly lower than in the other plots probably because of the dominance of the above-ground parts in the nutrient cycling process, which caused more P to flow from below ground to above ground. Enzyme activity is a sensitive indicator of the metabolic activity of soil microorganisms, which not only participate substantially in the cycling of soil and plant matter (carbon, nitrogen, phosphorus, and sulfur) but also promote growth by altering nutrient status and increasing the effectiveness of nutrient uptake by plants . As with the soil nutrient profile, soil enzyme activities were significantly higher in Fm than in the other species. This finding further confirms that soil enzyme activities are closely related to factors such as the form and content of soil nutrients and are an essential indicator for evaluating soil quality . For arable land, the core of soil degradation is the reduction of organic matter, and the glucose released by S-CL catalysis is the primary carbon source in the C cycle . Interestingly, S-CL activity in Fm was ten to a hundred times higher than in the other sample plots , and such highly significant differences are intriguing. We propose that Fm plays a vital role in repairing the Mollisols carbon cycle, possibly owing to its unique root secretions and microbiome, whereas the other four enzyme activities varied far less than S-CL . In summary, this study found significant differences in the effects of different tree species on understory soil physicochemical properties and enzyme activities, a finding that supports hypothesis (i).

Response of soil microbial community to tree species

The biodiversity of soil microorganisms is essential for maintaining soil quality, productivity, and the ecological balance of agroecosystems . Microbial diversity analysis revealed that the bacterial richness and diversity of Bp were significantly higher than those of the other species and that the soil microbial structures of the four tree species differed significantly . This result supports hypothesis (ii), that the richness and diversity of understory microbial communities differ among tree species . Soil fungi have been reported to have a more stable topological network than bacteria , which explains the smaller variation in fungal community structure across tree species . Acidobacteria has been reported to be the most abundant taxon in Bp , and the results of this study are consistent with that report.
However, the content of Acidobacteria in Bp was significantly higher than in the other samples, which may be because the unique root secretions of Bp alter the root environment and thereby selectively recruit Acidobacteria . In addition, RB41 and Candidatus_Udaeobacter, which are abundant in Bp, have been shown to play a positive role in carbon cycling and soil remediation . Interestingly, among the four species, Jm contained the most Proteobacteria and Chloroflexi and the least Acidobacteria, Verrucomicrobia, and Gemmatimonadetes, while the opposite was true for Bp (Table S2). In addition to bacteria, fungi are essential in soil geochemical cycles . As in most studies, Ascomycota, Mortierellomycota, and Basidiomycota were the most dominant fungal phyla in the soil . The structure of the fungal community varied under different tree species , and this difference was more evident at the genus level . The vast majority of dominant fungal genera belonged to Ascomycota and Basidiomycota and were mostly saprophytic, with strong mutualistic relationships with plants . In addition, Mortierella, the only dominant genus under Mortierellomycota, includes species that promote plant growth in agricultural soils . Screening of biomarkers in the four plots using LEfSe analysis showed that bacteria were more sensitive to changes in tree species than fungi . It is noteworthy that Fm, which has the best nutrient status among the four tree species, possessed the fewest biomarkers, with none at all among bacteria. This result is consistent with our previous study in the Mollisols region, where high-quality soils possessed fewer, but beneficial, microbiome members . Key taxa play unique and critical roles in microbial communities and are central drivers of soil metabolic activities and nutrient cycling . The abundance of microbial taxa in Jm also hints at the complexity of its soil functions. For example, Pseudomonas species can act as pathogens but can also promote plant growth ; Sphingomonadaceae can remediate soils by degrading Bisphenol A ; and the photosynthetic autotrophic capacity of Chloroflexia and Leptolyngbya_EcFYyyy contributes to material cycling in the soil microenvironment . In addition, the actinomycetes enriched in Jm can promote plant growth by increasing nutrient utilization and improving the efficiency of material cycling in the ecosystem . Candidatus_Udaeobacter and Methylomicrobium, Bp's biomarkers, function predominantly in the carbon cycling pathway . Am's biomarkers belonged to the α/β/γ-proteobacteria, Gram-negative bacteria that not only participate in soil nitrification/denitrification but also fix molecular nitrogen from the air as a nitrogen source for their growth, which is essential for agriculture . Compared with fungi, bacterial communities are more widely studied and applied in soil remediation. However, fungi also play a significant role in soil protection and plant nutrient cycling. Schizothecium, a coprophilous fungus found in diverse types of fecal matter, is the sole biomarker in Am and is widely recognized as a biocontrol agent against soil-borne pathogens . The intricate and varied nature of microbial networks often makes it challenging to categorize the role of soil microbial communities as solely beneficial or harmful. For instance, Volutella, a biomarker for Jm, exhibits distinct functions in different environments.
In the context of soil, Volutella buxi demonstrates significant pathogenic capacity, causing extensive destruction . Conversely, Volutella ciliata is a saprophyte and decomposer, contributing to the accumulation of organic matter in the soil . Furthermore, a separate investigation has documented the pronounced nematode-predatory activity of Volutella citrinella . A similar scenario unfolds in the cases of Bp and Fm. Fusarium and Plectosphaerella aggregate in Bp and encompass numerous plant pathogens, including Fusarium verticillioides and Plectosphaerella citrulli, which inflict substantial harm on a wide array of crops through their pathogenicity and virulence factors . Moreover, certain fungal genera, such as Leucoagaricus, have been recognized for their beneficial ecological functions: they not only facilitate lignin degradation through cellulase production but also contribute to the detoxification of secondary metabolites in plant tissues via oxidoreductase production . Consequently, they play a crucial role in nutrient cycling within ecosystems and in carbon turnover. In this study, it was also observed that, under identical screening conditions, the biomarkers of Fm consisted only of fungi, a significant proportion of which were probiotics with well-defined functions. For instance, the fungi Clitopilus and Amanita enhance nitrogen uptake in plants through ectomycorrhizal symbioses . Additionally, Tylospora facilitates atmospheric nitrogen cycling by generating N₂O, while the yeast Vishniacozyma has a positive impact on land improvement . Furthermore, Metarhizium, enriched in Fm, is recognized as a biocontrol fungus owing to its potent efficacy against pests and its environmentally friendly nature. In conclusion, our findings indicate that the biomarkers present in Fm primarily consist of fungi that form symbiotic relationships with plants and soil, thereby contributing significantly to nutrient cycling and soil biocontrol mechanisms.

Response pattern of soil microbiome to tree species

The importance of environmental factors for microbial communities cannot be ignored. It has been shown that environmental factors such as pH, AK, and AP determine the composition and structure of soil microbial communities in different ecosystems . A nutrient-rich substrate favors microbial growth, while microbial diversity is suppressed in adverse environments such as drought and salinity . In this study, the analysis of soil physicochemical properties showed significant differences in soil properties between the different understories, and the soil P and K contents changed significantly among the four plots, affecting the uptake of nutrients required for plant and microbial growth. In addition, RDA and heatmap analyses showed that the conditional effect of environmental factors on the structure of bacterial communities was substantially stronger than on that of fungi . Researchers have found that soil fertility is the most critical soil property regulating microbial abundance; for example, rich organic matter can provide more nutrients, which favors bacterial enrichment . The positive relationships of Bradyrhizobium, Gemmatimonas, and Sphingomonas with C/N/P cycling were further confirmed in this study . Furthermore, alterations in the composition of fungal communities are influenced by environmental factors.
Most of the fungal genera that showed nutrient-related significance in this investigation serve as biomarkers for Bp and have been demonstrated to be pathogenic to plants . We propose that the substantial increases in the concentrations of Fusarium, Cadophora, and Plectosphaerella contribute to the vulnerability of plants to disease, as these fungi compete for potassium within the soil. Nevertheless, microbial functions tend to be intricate and varied. While no study has established a direct association between these functions and soil fertility, these genera exhibited a notable positive correlation with TC, SOM, and TN content . This suggests a reciprocal relationship between the enrichment of Ramophialophora and Plectosphaerella and the levels of soil carbon and nitrogen. Based on the preceding discussion of the roles played by soil core microorganisms in diverse ecosystems, we present a synthesis of how microorganisms facilitate soil remediation under the various tree species. Notably, Jm might eliminate detrimental substances from the soil via its abundant Sphingomonadaceae while simultaneously harnessing the autotrophic potential of Leptolyngbya_EcFYyyy to sequester carbon in the soil. Consequently, the cultivation of Jm warrants consideration in instances of nutrient deficiency and of ecological rehabilitation following heavy metal contamination. The primary microorganisms found in Fm facilitate the cycling and utilization of nutrients between vegetation and soil via mycorrhizal symbiosis with plants. Am, on the other hand, could enhance the carbon and nitrogen levels in the soil through the presence of nitrogen-fixing microorganisms such as Gemmatimonas and Sphingomonas . However, it is essential to note that not all tree species contribute positively to soil fertility restoration. A significant abundance of plant-pathogen biomarkers enriched in Bp exhibited a robust inverse relationship with potassium levels, suggesting that their microbial activities extensively depleted soil potassium and thereby hindered its effective accumulation in the soil. While all four tree species investigated in this study possess their respective core microorganisms with potentially beneficial functions, it is vital to keep in view the objective of this research, which is to identify shelterbelt tree species that enhance soil fertility in the Mollisols region. Based on the comprehensive synthesis of the soil nutrient content and enzyme activity findings, it is strongly recommended that Fm be widely employed as a shelterbelt species to restore soil fertility in the northeast plain.

Conclusion

Determining the response of soil microbes to different shelterbelt tree species can help us understand plant-driven soil functioning at the micro-scale and provide information on the role of vegetation restoration in regulating soil nutrient cycling. This study analyzed the differences in soil physicochemical properties, enzyme activities, and microbial communities, and the linkages among them, across four shelterbelts through a controlled experiment. We clarified that the mechanisms of microbially mediated soil restoration differ among tree species. The results showed significant differences in understory soil properties among the tree species, with Fm being the most effective in ameliorating soil nutrients. The microbial community structure also changed with tree species, with bacteria being more sensitive than fungi.
In addition, the microbial communities recruited and colonizing the soil under each stand differed significantly among tree species. Based on the findings above, we propose Fm as the first choice of tree species for establishing protection forests in northeast China. This recommendation is justified by the fact that Fm exhibits the most comprehensive improvement in soil nutrient levels; at the same time, its core microbiota predominantly comprises probiotic taxa that are less susceptible to environmental influences, aligning with the principles of natural ecological restoration. Furthermore, Fm demonstrates superior timber growth and possesses specific economic value, yielding both ecological and economic benefits. However, the applicability of this conclusion to long-term experiments remains to be demonstrated, and time-scale factors will be critical in future research. In conclusion, this study refines our understanding of the effects of tree species on the soil microbiota and provides technical guidance for tree species selection and Mollisols restoration strategies in northeast shelterbelts.
Random mechanisms govern bacterial succession in bioinoculated beet plants
86da9e32-598b-4ab0-ae54-35dd139c5d2b
11953351
Microbiology[mh]
Bioinoculation, defined as the introduction of beneficial endophytic microbes into plant tissues, effectively enhances nutrient access and increases resistance to pathogens, leading to higher crop yields, especially in plants grown under environmental stress – . For instance, the bioinoculation of Arabidopsis thaliana with a synthetic microbial community has been shown to restore growth under insufficient light conditions . Our previous research revealed that beet genotype influences both endophytic and rhizosphere bacterial communities, and that lyophilized beet roots can serve as a source of viable microorganisms for inoculation . From an ecological standpoint, inoculation influences the structure of endophytic microbial communities, potentially altering the process of community assembly , . Research on the dynamics of plant microbial colonization has demonstrated that this process occurs rapidly , and is driven by selection exerted by the host plant , as well as by microbe‒microbe interactions . Consequently, the host genotype is one of the key factors influencing endophytic community structure , meaning that different varieties of the same plant species may respond differently to a given bioinoculant. Although soil has been identified as the primary source of endophytes, seeds have been found to be more important in certain cases , , and the phyllosphere can also serve as an additional source . Therefore, the success or failure of a bioinoculant may also be influenced by soil conditions. Endophytic communities evolve over time – and respond to changes in host developmental stage , and in environmental conditions . Primary succession is a special case of such evolution. Microbial primary succession has shown similarities to plant succession in different environments and follows similar phases . In fact, in the case of axenic plants, their colonization might be considered an instance of primary microbial succession. However, plant colonization differs from other instances of succession due to the additional layer of complexity introduced by plants, including disturbances and plant development, which to some extent govern this process. Many ecological communities have been found to be nested; that is, species-poor communities are proper subsets of species-rich communities. Nestedness analysis is a common tool for disentangling richness and structural effects on changes in community composition . However, nestedness alone does not convey information on the processes governing assembly , and other tools, such as βNTI or βNRI coupled with Raup–Crick dissimilarity based on the Bray–Curtis index , are needed to paint a full picture of community assembly. In studies on bioinoculation, tracing the entire lifespan of a plant is often impractical, particularly for large or perennial species. Moreover, the application and assessment of bioinoculants require determining the optimal timing of their use . Consequently, questions arise as to whether, and if so when, endophytic communities reach compositional stability, and when to examine the influence of inoculants on host plants. Analysing the earliest stages, such as the seedling stage, may be convenient for practical reasons, but this might not be the best approach if colonization requires more time. Additionally, it is interesting to investigate how bioinoculants influence rhizosphere and endophytic communities and whether this influence depends on soil type and plant genotype.
Such data could be valuable in engineering novel bioinoculants. Common beet ( Beta vulgaris ssp. vulgaris ) is an important crop and an exceptional plant, one of those whose undomesticated ancestor still grows in the wild. The ancestor, sea beet ( B. vulgaris ssp. maritima ), is genetically very similar to domesticated beets but differs strongly in ecology , . Beets are cultivated for their high sucrose content (sugar beet), as a root (red beet) or leafy (chard) vegetable, or as fodder. According to recent FAO data, approximately 30% of the world's sucrose production comes from sugar beet . The beet taproot forms early in plant development, and its growth can be divided into three physiologically and biochemically distinct phases: pre-storage, transition/secondary growth, and sucrose accumulation . These phases presumably also differ in the quantity and quality of root exudates , . The sugar beet microbiome has been extensively studied (reviewed in ), and many bioinoculants have been proposed (e.g., , ), while data on the wild beet microbiome are scarce. Previously, the beet microbiome was studied either at very early stages of plant development or, if a study involved the whole plant lifespan, it was based on very short reads (V3 region, 2 × 150 nt reads) and a limited number of replicates . To bridge these gaps, we decided to include sea beet in our study and address three questions: (i) to determine the time required for the establishment of stable endophytic communities in beet plants (given that the analysed growth period encompassed all three phases of root development, we hypothesized that stability would be achieved within six weeks after planting in all genotypes); (ii) to assess whether community assembly and the degree of nestedness vary over time and among genotypes; and (iii) to examine the extent and manner in which inoculation with lyophilized wild beet roots influences bacterial communities and their predicted metabolic potential in the soils, as well as in bacteria-free sugar beet and sea beet plants. To answer these questions, we analysed bacterial communities through 16S rRNA gene fragment sequencing at five time points in the rhizosphere, roots, and leaves of three beet genotypes cultivated in two soils with contrasting edaphic properties, with or without inoculation with lyophilized sea beet roots.

Soils

We selected two soils obtained from commercial sugar beet plantations. Beets had been cultivated in the sampled fields for at least three successive years prior to sampling, according to standard agricultural practices recommended by sugar-producing companies. Soils (0–40 cm depth) were sampled during the fall of 2017, before harvest. The samples were stored in sealed plastic bags at ~15 °C until use. The specific physicochemical characteristics of the soils are provided in Table SR1. The soils also differed in their microbiomes (Fig. SR8 and further details in Supplementary Results). To minimize the impact of their native microbiomes on the plants, the soils underwent pasteurization: two rounds of autoclaving, each lasting one hour at 121 °C, within one week.

Plants

In our study, we utilized two varieties of sugar beet, namely B. vulgaris ssp. vulgaris cv. 'Bravo' and cv. 'Casino', along with sea beet ( B. vulgaris ssp. maritima ).
The plant material was obtained from two different sources: the sugar beet plants were obtained from commercial seeds (KHBC, Kutno, Poland), while the seeds of sea beet were obtained from the National Germplasm Resources Laboratory in Beltsville, MD. Prior to use, the seeds (with the coating removed when necessary) were surface-sterilized. The efficiency of sterilization was evaluated through plating, as detailed in Supplementary Methods SM1. After germination, the seedlings were carefully dissected and subjected to micropropagation in the presence of cefotaxime and vancomycin. The resulting plantlets were further subjected to three rounds of micropropagation on antibiotic-free media. Subsequently, the explants were rooted and acclimated to ex vitro conditions (for further details, refer to Supplementary Method SM2) before being planted in their respective soils. The process of generating the plant material is depicted schematically in Fig. A.

Inoculant preparation

Lyophilized roots of sea beet growing in the wild were used as the inoculant in this study. In August 2017, sea beet plants growing along the northern Adriatic coast of Croatia were collected for our research purposes. The plants were identified by Dr. Jaroslaw Tyburski, and a photographic voucher specimen was deposited in the iNaturalist database (ID: Bvm_20170806/1; https://www.inaturalist.org/observations/239367873 ). Since the collection was conducted in non-protected areas on public grounds and solely for scientific investigation, specific permission was not required, in compliance with the Nagoya Protocol and applicable EU, Croatian, and Polish laws. All methods were carried out in accordance with the relevant guidelines. The collected plants were promptly refrigerated in styrofoam boxes containing cooling pads that had been frozen at −80 °C prior to the sampling campaign; this ensured that the temperature remained close to 0 °C until the plants were processed. Upon arrival at the laboratory, the roots were separated from the above-ground parts and surface-sterilized according to the method described in Supplementary Method SM3. Subsequently, they were homogenized using a Waring blender, combined with trehalose (at a concentration of 1 mg/g), and lyophilized following a previously described method . The resulting inoculant, in the form of a fine powder, was stored at −80 °C.

Experimental design

We cultivated three beet varieties in two different soils, and each combination of variety (genotype) and soil was either inoculated with the native microbiome of B. vulgaris ssp. maritima sourced from the wild or left non-inoculated. Our analysis included the examination of soils, roots, and leaves at five time points: T0, four weeks after planting in the soil and immediately prior to inoculation; T1, 29 days after planting (1 day post-inoculation); T2, 35 days after planting (7 days post-inoculation); T3, 56 days after planting (28 days post-inoculation); and T4, 86 days after planting (56 days post-inoculation). At the beginning of the experiment, twenty-five plants were grown in a 14 cm (h) × 15 cm (w) × 37 cm (l) pot filled with approximately 7 L of soil. At each sampling step, we carefully removed five plants from each pot (Fig. B). A single plant was considered a technical replicate, and the five plants from the same pot were considered a biological replicate. We employed five biological replicates for each soil × genotype × inoculation variant.
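To make the factorial layout easier to grasp, the following purely illustrative Python snippet enumerates the design as described; the labels are ours, and note that at T0 the inoculated/non-inoculated split is only nominal, since T0 precedes inoculation.

```python
from itertools import product

genotypes = ["cv. Bravo", "cv. Casino", "ssp. maritima"]  # three beet genotypes
soils = ["S1", "S2"]                         # two soils with contrasting properties
inoculation = ["inoculated", "non-inoculated"]
timepoints = ["T0", "T1", "T2", "T3", "T4"]  # 28-86 days after planting
n_biological = 5                             # biological replicates (pots) per variant

variants = list(product(genotypes, soils, inoculation))
print(len(variants), "soil x genotype x inoculation variants")   # -> 12

# One biological replicate = the five plants (technical replicates)
# taken from one pot at a given time point.
samples = [(g, s, i, t, pot)
           for (g, s, i) in variants
           for t in timepoints
           for pot in range(1, n_biological + 1)]
print(len(samples), "biological replicates in total")            # -> 300
```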
A schematic representation of the experimental design is depicted in Fig. C.

Culturable bacteria density assessment

Bacterial density in the inoculant was assessed on Luria–Bertani (LB) agar (BD Difco, Poland) supplemented with 100 µg/ml nystatin (Sigma Aldrich, Poland) to inhibit fungal growth. The inoculant was suspended in sterile 0.9% NaCl, and dilutions were prepared in the same medium. Agar plates were incubated for 7 days at 26 °C.

Plant growth conditions

The plants were cultivated in a controlled environment in an artificially lit growth chamber. To ensure a clean, sterile growth environment, the ventilation outlets were equipped with HEPA filters, and the air in the chamber was continuously sterilized with UV radiation emitted by flow lamps (ULTRAVIOL, Poland). Throughout the experiment, the temperature in the growth chamber was maintained at a constant 20 °C. The photoperiod followed a cycle of 16 h of light and 8 h of darkness, simulating a day/night cycle. LED panels emitting white light were configured to provide a photosynthetically active radiation (PAR) intensity of 100 µmol·m⁻²·s⁻¹ at the soil level. The plants were watered according to their specific requirements with sterile deionized water.

Inoculation

For plant inoculation, 100 mg of bioinoculant was used, equivalent to ~1 g of fresh roots. The bioinoculant was thoroughly mixed with the surface soil in the immediate vicinity of the plants, covering an approximate radius of 1 cm. To facilitate the inoculation process, the plants were watered twice on the same day, once before and once after inoculation, each time with half of the typical amount of water.

Sampling

All samples were collected using sterile tools in a UV-sterilized room equipped with a continuous-flow UV lamp to maintain aseptic conditions. Plants were gently uprooted, and soil samples were collected with a spatula from the holes left by the removed plants, placed in Falcon tubes, and promptly snap-frozen in liquid nitrogen. The plants were dissected with a sterile scalpel blade, and the roots were surface-sterilized, whereas the leaves were not. Plant samples were packaged in sterile aluminium foil bags and frozen in liquid nitrogen. All samples were stored at −80 °C until use. Additional information regarding the surface sterilization procedure can be found in Supplementary Methods SM3.

DNA extraction

DNA extraction was performed on all sample types using a combination of bead beating and flocculation, followed by purification on silica columns. The detailed protocols are provided in Supplementary Methods SM4.

Library preparation and sequencing

To generate V3–V4 16S rRNA gene fragment libraries, we followed the established protocol with the modifications necessary to reduce host rRNA amplification. The libraries were sequenced using a paired-end strategy with a 600-cycle v3 kit on a MiSeq instrument (Illumina, CA) and custom sequencing primers at CMIT NCU, as previously described . Further details of the modifications made to decrease host rRNA amplification can be found in Supplementary Methods SM5.

qPCR

Real-time PCR was conducted using a LightCycler 480 instrument (Roche, Switzerland) with the LightCycler 480 SYBR I Master kit (Roche, Switzerland) and Roche consumables. The reactions were carried out in a 10 µl volume, with 1 ng of template DNA and 5 pmol of each primer per reaction.
The specific primer sequences and cycling conditions can be found in Supplementary Methods SM6 and Table S1. Each reaction was performed in four technical replicates, and purified amplicons were used to generate standard curves. The Ct values, determined using the second-derivative algorithm of the LightCycler software, were exported to CSV files and further analysed in R as described in the SM.

Bioinformatics and statistics

We denoised and merged the sequencing reads and removed chimeras with DADA2 , and used the resulting amplicon sequence variants (ASVs) for downstream analyses. The sequences were classified with the assignTaxonomy function of DADA2 using SILVA v.132 as a reference database. A Relaxed Neighbor-Joining tree was constructed using clearcut , based on an alignment computed with Mothur v.1.44.3 and SILVA v.132 as a reference . Alpha diversity was assessed as Shannon's H', species richness (S) as the observed number of ASVs, and evenness as Shannon's E (E = H'/ln(S)). For beta diversity analysis, we computed generalized UniFrac distance matrices from the tree and the ASV table using the GUniFrac package . To ensure the comparability of alpha and beta diversity indices across samples with varying sequencing depths, we subsampled the ASV table 100 times to 900 sequences per sample; the averaged values, rounded to the nearest integer, were used in downstream analyses. Primers specific for particular ASVs were designed using the DECIPHER package . We reconstructed the metabolic potential of the bacterial communities using PICRUSt2 with default parameters. Core microbiomes were identified in R as sets of ASVs whose prevalence (i.e., abundance expressed as a percentage of all sequences) exceeded a threshold of 0.1 and whose detection rate (i.e., the percentage of samples in which an ASV exceeded the prevalence threshold) was greater than 90. Inoculant influence was calculated as the fraction (percentage) of sequences coming from the inoculant, as assessed with sourcetracker2 run on the non-rarefied dataset. We calculated 'reduced values' for inoculated samples by subtracting the mean estimated percentage for the respective non-inoculated samples (i.e., samples from the same experimental variant (material × soil × genotype × status)); this was necessary because of the non-zero 'influence' estimated for non-inoculated samples. The relevant R and Mothur scripts can be found in Supplementary Methods SM7. We compared sample means with the Kruskal–Wallis or Wilcoxon test using standard R functions, or with robust ANOVA implemented in the Rfit package; where applicable, the Benjamini–Hochberg correction (FDR) was used. The weighted NODF (Nestedness by Overlap and Decreasing Fill) metric was used to assess the degree of nestedness, and the analysis was carried out using the NODF software run on Windows 7. We used the vegan R package for ordinations, variance partitioning, and testing the significance of groupings using PERMANOVA or permutational tests of dbRDA models. Pairwise PERMANOVA, as implemented in the pairwiseAdonis package, was used to check which groups actually differed when the overall PERMANOVA model was significant. The shares of community assembly processes were assessed with the βNTI and Raup–Crick indices based on Bray–Curtis dissimilarity using the iCAMP package . Differentially abundant ASVs, taxa, and PICRUSt2-predicted traits were identified with DESeq2 and, for ASVs only, with ALDEx2 , while signature ASVs were identified with the biosigner package .
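As an outside illustration of three of the computations just described (the 100-fold subsampling to 900 reads with averaging, Shannon's evenness E = H'/ln(S), and the prevalence/detection core-microbiome filter, with the 0.1 threshold read here as 0.1%), a minimal Python sketch on a fabricated ASV table might look as follows; it is not the authors' code, which is referenced below.

```python
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(3, size=(20, 500))   # fabricated ASV table: samples x ASVs

def rarefy(row, depth=900):
    """Subsample one sample's counts to a fixed depth without replacement."""
    pool = np.repeat(np.arange(row.size), row)   # one entry per observed read
    picked = rng.choice(pool, size=depth, replace=False)
    return np.bincount(picked, minlength=row.size)

# Subsample each sample 100 times, average, and round to the nearest integer.
rarefied = np.rint(np.mean(
    [[rarefy(row) for row in counts] for _ in range(100)], axis=0))

def shannon_h(row):
    p = row[row > 0] / row.sum()
    return float(-(p * np.log(p)).sum())

H = np.apply_along_axis(shannon_h, 1, rarefied)   # Shannon's H' per sample
S = (rarefied > 0).sum(axis=1)                    # observed ASV richness
E = H / np.log(S)                                 # evenness E = H'/ln(S)

# Core microbiome filter: ASVs above 0.1% relative abundance (prevalence
# threshold) in more than 90% of samples (detection rate).
rel = rarefied / rarefied.sum(axis=1, keepdims=True) * 100
detection = (rel > 0.1).mean(axis=0) * 100
core_asvs = np.where(detection > 90)[0]
print(len(core_asvs), "core ASVs out of", counts.shape[1])
```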
The code for performing the computations can be found in Supplementary Methods SM7.

Beet plants generated by micropropagation of seedlings emerging from surface-sterilized seeds are nearly axenic

The beet seeds were virtually devoid of bacteria, regardless of genotype. Surface sterilization decreased bacterial 16S rRNA gene counts below the detection limit (Fig. SR2A). Seedlings emerging from sterilized seeds and propagated once on Murashige and Skoog media supplemented with cefotaxime and vancomycin (see Supplementary Method SM2) proved to be axenic (Fig. SR2B).

The communities in the soils, roots, and leaves differ significantly

The bacterial communities in the roots and leaves of axenic beet plants grown in pasteurized soils differed significantly in structure (9.57% of variance explained, Fig. A), alpha diversity, where the diversity and richness of ASVs followed the pattern soils > roots > leaves (Shannon's H': robust ANOVA F(2,1495) = 583.60, p < 0.001; Shannon's E: F(2,1495) = 39.97, p < 0.001; richness: F(2,1495) = 583.60, p < 0.001), and bacterial load, which was lower in the leaves than in the roots (Wilcoxon test, W = 259,411, p < 0.001) (Fig. B). Globally, ASV80, classified as Achromobacter, and ASV23, classified as Chryseobacterium, were found to be characteristic of (i.e., significantly more abundant in) plant tissues, while ASV3, ASV21 and ASV27 ( Pseudoxanthomonas, Sphingopyxis and Pedobacter, respectively) were characteristic of soils. ASV7 ( Cellvibrio ) was typical of roots, and ASV13 ( Sphingobacterium ) was typical of leaves. Generally, differentially abundant ASVs affiliated with Gammaproteobacteria were characteristic of roots and soils, while Bacteroidia-associated ASVs were typical of leaves (Fig. E, Table SR3). At the genus level, Bacillus, Brevundimonas, Pedobacter, Pseudoxanthomonas and Stenotrophomonas were characteristic of soils, while Cellvibrio and Flavobacterium were more abundant in roots, and Sphingobacterium was more abundant in leaves (Fig. G, Supplementary Results F1). The core microbiome in each of the three compartments was limited to a few ASVs (8 in soils, 3 in roots and 1 in leaves; Table ), mainly members of Alphaproteobacteria, Gammaproteobacteria and Bacteroidia. The differences, albeit smaller, were also visible at the level of PICRUSt2-predicted metabolic capabilities, which likewise grouped according to material (Fig. C, 6.70% of variance explained), and the alpha diversity of PICRUSt2-predicted functions followed the pattern observed for ASVs (Fig. D; Shannon's H': robust ANOVA F(2,1495) = 104.44, p < 0.001; Shannon's E: F(2,1495) = 173.20, p < 0.001; richness: F(2,1495) = 851.62, p < 0.001). Predicted functions related to competition between microorganisms (antibiotic resistance and biosynthesis, quorum sensing) appeared to be characteristic of soils and roots, while carbohydrate metabolism-related functions were predicted to be more frequent in the genomes of soil- and leaf-dwelling bacteria (Fig. F; Table SR4).
The first weeks of axenic beet growth in soil can be divided into two stages differing in community structure, diversity, bacterial load, predicted metabolic capabilities and nestedness

As material explained a far greater fraction of the variance than any other variable (Table SR5), further analyses aimed at determining the influence of other variables were carried out on the data divided into soil, root, and leaf sets. The time point was the second most important grouping variable, regardless of material (5.82% of the variance explained in the whole dataset). The three 'early' time points (T0, T1 and T2) clustered together and were significantly different from the 'late' time points (T3 and T4). The percentage of variance explained by this grouping was 15.64% in the leaves, 3.05% in the roots and 9.67% in the soils (Fig. B). Henceforth, a sample's membership in the 'early' or 'late' cluster will be called its 'status'. Differences between the early and late clusters were observed regardless of material, soil and genotype (Figs. SR9-11), and were also visible in the alpha diversity measurements, which were greater in the late samples. Moreover, the bacterial load in plant tissues was greater in the early samples. The effect on alpha diversity was most pronounced in leaves and least pronounced in soils, while the decrease in the number of bacterial 16S rRNA gene sequences was greater in roots than in leaves (Fig. A). Different organisms were characteristic of early and late samples in soils, roots, and leaves. In soils, Pedobacter and Pseudoxanthomonas were characteristic of early samples, while Chryseobacterium, Flavobacterium, and Sphingobacterium were more abundant in late ones. In roots, only genera characteristic of late samples were found: Brevundimonas, Caulobacter, Cellvibrio, Chryseobacterium, Delftia, Flavobacterium, Novosphingobium, Pseudomonas, Rhizobium, and Stenotrophomonas. In leaves, Caulobacter, Chryseobacterium, Delftia, Flavobacterium, Pseudomonas, and Rhizobium were more abundant in early samples, while Pseudoxanthomonas was characteristic of late ones (Fig. H, Supplementary Results F1). The organisms characteristic of soils and roots were mostly of low abundance (Fig. SR12A, C, E; Supplementary Results F1). Similarly, the core microbiomes of early and late samples were different and limited to only a handful of ASVs (Table ). The traits characteristic of the genomes of organisms thriving in late and early samples differed among the soils, roots, and leaves (Fig. C; Supplementary Results F2). The differences were significant regardless of material, soil and genotype, but the variance explained by status was lower than in the case of ASVs (Figs. SR13-15). Early soils harboured organisms whose genomes were predicted to be enriched in genes involved in a diverse array of functions, among which genes related to methane metabolism, protein and nucleotide rescue from glyoxal glycation, and heavy metal resistance were the most prominent. In contrast, the metabolism of aromatic compounds was characteristic of the genomes of organisms dwelling in late soil samples (Fig. SR12F). Genes involved in root biofilm formation, exopolysaccharide synthesis and the regulation of the amino acid pool were characteristic of early root samples, and the metabolism of aromatic compounds was characteristic of late ones (Fig. SR12D). Toxin/antitoxin systems were characteristic of leaves in general; polysaccharide (chitin, pectin) utilization was more abundant in early leaf samples, while aromatic compound metabolism was typical of late leaf samples (Fig. SR12B). The predicted functional diversity was significantly greater in the late leaf and soil samples than in the root samples (Fig. D).
The degree of nestedness calculated for the soil–root–leaf matrices (for each plant (technical replicate) separately) was very low, essentially did not deviate from the expected values derived from a null model (Supplementary File 3), and decreased with time. The difference between the early and late samples was significant (Fig. F; Table SR6). Dispersal limitation (DL) dominated the mechanisms governing the entry of bacteria into roots and their transfer to leaves in early samples, while drift (an umbrella term covering all stochastic processes except DL) was more pronounced in late samples. In these samples, the share of DL was greater for soil → root transfer than for root → leaf transfer. Interestingly, the levels of selection, albeit generally low, were greater in the early samples than in the late samples (Fig. G). A similar pattern was observed when the maintenance of the soil, root, and leaf communities was assessed (i.e., when samples of the same material from the same biological replicate were compared); however, in the case of leaf communities, DL was replaced with homogenizing dispersal (Fig. E).

Inoculation with lyophilized wild beet roots influences bacterial communities in soils and plants

Inoculant characterization

Reads affiliated with Pseudomonadota (formerly Proteobacteria ), Bacteroidota (formerly Bacteroidetes ) and Bacillota (formerly Firmicutes ) were found in libraries prepared from DNA isolated from inoculant samples. The most abundant genera were Pseudoxanthomonas and Brevundimonas (> 5% each), while Pedobacter, Devosia, Caulobacter, Flavobacterium, Rhizobium, Sphingobacterium, Pseudomonas, Cellvibrio, Thermomonas, and Dyadobacter were less abundant (~2–5%; Fig. ). A total of 55% of the reads belonged to rare genera (< 2% abundance). The diversity, measured as Shannon's H', was 4.40 ± 0.75, the evenness was 0.93 ± 0.003, and 144 ± 125 ASVs were detected in the inoculant samples (rarefied data, n = 3), while 437 ASVs were detected in the non-rarefied dataset. The culturable bacterial density was 4.0 ± 0.09 × 10⁵ cfu/g (n = 6), while the bacterial 16S rRNA count was 2.0 ± 0.5 × 10⁴ copies/ng of DNA, which translates to 1.4 ± 0.587 × 10⁸ copies/g of inoculant (n = 8).

Inoculation had no influence on bacterial alpha diversity (Fig. A), and its effect on bacterial community structure was small but significant in each of the studied compartments and greater in soils than in roots or leaves (0.48, 0.17 and 0.09% of explained variance, respectively; Fig. B). The bacterial load in the plant samples did not differ between the inoculated and non-inoculated plants, regardless of the material and status of the samples or the soil × genotype variant (Fig. A and Fig. SR35D). The reduced inoculation influence calculated using sourcetracker2 was consistently very low, and there were no differences among material × soil × genotype variants (Fig. E). Further analyses showed that inoculation significantly impacted the bacterial community structure (i.e., explained a significant fraction of the variance) in all soil samples, regardless of their status, but only in certain early samples in the case of roots and leaves. Although not significant, the variance explained by inoculation in late plant samples was generally greater than in early ones (Figs. SR36-38, Table SR7). The mean d05 generalized UniFrac distance between the inoculated samples and the inoculant was, surprisingly, slightly greater (0.3810 ± 0.0547) than that between the inoculant and the non-inoculated samples (0.3699 ± 0.0554), and the difference was significant (Wilcoxon test, W = 3.3482e+10, p < 0.001).
The effect of inoculation was even smaller for the predicted functional potential, both for alpha diversity (Fig. D) and beta diversity (Fig. C), and in the plant samples it was not significant. Inoculation significantly influenced only soil samples: early ones of genotypes C and M, and late ones of genotype B (Figs. SR36-38 and Table SR7). The nestedness level did not change in response to inoculation (Fig. G). Differences in the proportions of community assembly processes were visible only in the case of late leaves, where the percentage of homogenizing dispersal was lower in inoculated samples (Fig. F and H). Taxonomic composition at the genus level was similar in inoculated and non-inoculated samples; only three non-rare genera were differentially abundant in soils, while one was differentially abundant in roots (Fig. I, SupplementaryResultsF1). The sets of ASVs and predicted KO functions characteristic of the inoculated and non-inoculated samples were different in the early and late samples as well as in each material × soil × genotype variant (Figs. SR44 and SR46-51, SR45 and SR52-57, respectively, as well as SupplementaryResultsF1 and SupplementaryResultsF2), and the same applied to core microbiomes (Table ). Of the 437 ASVs detected in the inoculant, 268 were found exclusively in the inoculated samples, although they were rare (i.e., of low abundance). However, when rarefied data were used, only fifteen such ASVs were found (Table SR9). No ASV present in the inoculant was found only in the non-inoculated samples. Globally, 29 ASVs were identified with biosigner as signatures of the inoculated and non-inoculated samples. These ASVs were classified mainly as Alpha- and Gammaproteobacteria as well as Flavobacteria and Chitinophaga (Table SR8). Characteristic ASVs were found mainly in soils; in the case of plant samples, they were detected only in certain soil × genotype variants. The organisms that differentiated inoculated late samples from non-inoculated ones were different for each combination of material, soil and genotype. The influence of inoculation was most visible in soil (the greatest number of ASVs differentiating between the inoculated and non-inoculated samples), while in the roots and leaves, there were only single ASVs in certain soil × genotype variants. These bacteria belonged mainly to Proteobacteria and Firmicutes. Differences in predicted functional potential comprised diverse functions; genes involved in antibiotic biosynthesis and resistance were frequently more highly represented in inoculated samples than in non-inoculated samples, potentially indicating an increased level of competition.

The study addressed three questions. How much time is needed to establish stable endophytic communities in axenic plants of different genotypes grown in various soils? Do community assembly processes and nestedness differ over time and for various genotypes? How does inoculation with the wild sea beet root community influence bacterial communities in axenic beets?

Inoculation influenced beet-associated bacterial communities only slightly but significantly. Because we used lyophilized roots as an inoculant, its effect may stem not only from the bacteria introduced to the soil but also from organic matter (particularly nitrogen compounds and carbohydrates). Typically, beet roots contain 2–3% nitrogen in dry weight; therefore, we expected that the influence of the inoculant would be greater in the less N- and OM-rich soil (S1) due to a fertilization effect.
Conversely, the influence was smaller in S1 than in S2, and we concluded that this difference was mainly caused by the organisms added to the soil. As we expected the influence of inoculation to be small, we grew axenic plants in pasteurized soils. Indeed, the influence proved to be only slight. Nevertheless, the presence of a handful of ASVs from the inoculant in inoculated plant samples and their absence from non-inoculated ones suggested that organisms thriving in the inoculant entered the plants. However, it is possible that they were present in non-inoculated samples below the detection threshold; as always, it is impossible to prove that an entity is not present in a given sample. As we used an abundance-based metric (d05 generalized UniFrac), the low abundance of inoculant-derived endophytes showed that they do not increase community dissimilarities directly and suggested that microbe‒microbe interactions and/or fertilization effects were responsible for the majority of the differences in bacterial community structure observed between inoculated and non-inoculated samples. Soils were consistently more influenced by inoculation, which suggests that the selective pressure in soils is weaker than that in plants. We observed a greater inoculation influence in late samples, although in the case of roots and leaves the explained variance was not significant. This may be explained by the low number of late plant samples, which resulted from their elimination due to a large number of mitochondrial and plastidic sequences. In the case of soils, decaying DNA from dead bacteria present in the pasteurized soil might have masked the inoculation effect in early samples. We expected that inoculation would cause homogenization of communities (i.e., that it would make the mean distance between inoculated communities smaller than that between non-inoculated communities), but this did not prove to be true. It seems that the inoculant increases the number of organisms that may be recruited by plants, causing a decrease in similarity. Notably, inoculation did not change time- or genotype-driven beta diversity patterns, which agrees with the low values of variance explained by this variable. The fact that the bacterial load was lower in the late samples suggests that the appropriate inoculation time might be crucial for the successful application of biofertilizers, at least in beet. However, a definite answer to the question of the right time to inoculate would require a carefully designed experiment. It is also plausible that bacteria introduced to the soil during the late phase of colonization might have a better chance of entering plants due to the lower level of selection and dispersal limitation in this phase. Changes in endophytic communities over time due to developmental stage and seasonality have been demonstrated in various plants, both perennial and annual, and might be considered an instance of succession consisting of stages at which different community assembly mechanisms are important. In the case of plant colonization by microbes, succession should be modulated by plant development. Conditions in the plant interior depend on the developmental stage, which has been demonstrated, e.g., for beet roots, and could be influenced by host genotype. Indeed, both soil type and genotype influenced both rhizosphere and endophytic community structure but not alpha diversity.
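The inoculant-derived ASV bookkeeping referred to above (ASVs present in the inoculant and in inoculated samples but never in non-inoculated ones) reduces to simple set operations on a presence/absence table. The sketch below illustrates the logic; the table layout, column names and helper function are illustrative assumptions, not the pipeline actually used in this study.

```python
# Sketch: identify inoculant-derived ASVs that occur only in inoculated samples,
# mirroring the presence/absence bookkeeping described above. `counts` is an
# ASV x sample abundance table; all names here are assumptions for illustration.
import pandas as pd

def exclusive_asvs(counts: pd.DataFrame, inoculant_asvs: set,
                   inoculated_cols: list, non_inoculated_cols: list) -> set:
    present_inoc = set(counts.index[(counts[inoculated_cols] > 0).any(axis=1)])
    present_non = set(counts.index[(counts[non_inoculated_cols] > 0).any(axis=1)])
    # inoculant ASVs detected in inoculated samples but never in non-inoculated ones
    return (inoculant_asvs & present_inoc) - present_non

# toy table: rows = ASVs, columns = samples
counts = pd.DataFrame(
    {"inoc_1": [5, 0, 2], "inoc_2": [3, 0, 0],
     "ctrl_1": [0, 0, 2], "ctrl_2": [0, 1, 0]},
    index=["ASV_a", "ASV_b", "ASV_c"],
)
print(exclusive_asvs(counts, {"ASV_a", "ASV_b"},
                     ["inoc_1", "inoc_2"], ["ctrl_1", "ctrl_2"]))
# -> {'ASV_a'}
```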
We expected that sea beet, as a wild plant, would recruit more diverse bacterial communities than sugar beet cultivars, as was the case for other plants, e.g., wheat, both at the level of taxonomy and of predicted function. As wild plants need to cope with a broader spectrum of environmental conditions, we assumed that they would need greater microbiome functional potential. However, there was no clear trend in alpha diversity, suggesting that, under the conditions used here, there was no such need. Revealing the greater plasticity of wild beet compared with cultivars would probably require more adverse conditions, such as drought, salinity or infertile soil. A relatively small genotype effect was also found in other studies, e.g., on the cotton rhizosphere or on willow rhizosphere and root communities. Despite differences in community composition caused by soil and beet genotype, identical patterns were found in each experimental variant: samples collected up to the 35th day after planting in soil (early ones) were similar in alpha and beta diversity as well as bacterial load, and the same was true for the late samples (collected after day 35). As far as diversity is concerned, such a situation seems to be common in microbial succession (e.g., ) and is similar to the classical plant succession model of Cowles. Interestingly, the evolution of the rhizosphere communities followed the pattern described above. On the one hand, this similarity might be interpreted as a result of the rhizosphere community being driven by plant developmental stage, possibly via changes in root exudation (reviewed, e.g., in ). On the other hand, it has been found that time is a stronger driver of rhizosphere bacterial community structure than plant development. With regard to bacterial load, the lower load in late samples could be due to several factors: greater selection leading to the elimination of certain organisms, dilution caused by the increase in plant tissue volume and weight, and bacterial cell division arrest, which may result from plant- or microbe-derived compounds, e.g., those responsible for quorum sensing. Alternatively, bacteriostatic compounds might be produced in the late phase by plants or bacteria. Interestingly, an increase in bacterial load over time was found in soybean roots. This effect might have been caused by rhizobial growth in the root nodules of the legume, and the lack of nodules in beet may explain the difference. The succession phases observed here seem to be in line with beet taproot growth phases, with our early phase corresponding to the transition/onset of secondary growth and the late phase corresponding to the beginning of the sucrose storage phase. Therefore, it is plausible that under stable environmental conditions, the late phase could last at least until flowering (in the case of the perennial sea beet) or overwintering (in the case of the biennial sugar beet), which are the next major events in beet life. In this sense, the beet endophytic and rhizosphere communities became stable three weeks after the axenic plants were planted in soil. The fact that alpha diversity followed the pattern soils > roots > leaves prompted us to perform nestedness analyses on soil, root and leaf triplets, which allowed us to speculate on possible community assembly mechanisms.
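Nestedness in these analyses is quantified with the NODF index and judged against a null model. As a rough illustration of that observed-versus-null comparison, here is a simplified Python sketch; it uses an equiprobable cell-shuffling null, whereas published analyses typically rely on more constrained null models (e.g., fixed row/column totals, as in vegan's commsim).

```python
# Sketch: NODF nestedness with a simple permutation null model, illustrating the
# "observed vs. expected by chance" comparison discussed in the text. Simplified
# for illustration; not the exact null model used in the study.
import numpy as np

def nodf(m: np.ndarray) -> float:
    """NODF of a binary matrix: mean paired overlap over row pairs and column pairs."""
    def paired(mat):
        fills = mat.sum(axis=1)
        scores = []
        for i in range(len(mat)):
            for j in range(i + 1, len(mat)):
                hi, lo = (i, j) if fills[i] > fills[j] else (j, i)
                if fills[hi] > fills[lo] and fills[lo] > 0:
                    overlap = np.logical_and(mat[hi], mat[lo]).sum() / fills[lo]
                    scores.append(100 * overlap)
                else:
                    scores.append(0.0)  # equal fills (or an empty row) score zero
        return scores
    return float(np.mean(paired(m) + paired(m.T)))

def null_p_value(m: np.ndarray, n_perm: int = 999, seed: int = 0):
    rng = np.random.default_rng(seed)
    obs = nodf(m)
    flat = m.flatten()
    null = [nodf(rng.permutation(flat).reshape(m.shape)) for _ in range(n_perm)]
    p = (1 + sum(x >= obs for x in null)) / (n_perm + 1)
    return obs, p

# perfectly nested toy matrix -> NODF = 100, well above the shuffled null
mat = np.array([[1, 1, 1, 1], [1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0]])
print(null_p_value(mat, n_perm=199))
```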
The NODF index was consistently low in all variants and, in most cases, did not deviate from that expected by chance, which, together with the little overlap between the soil, root and leaf communities, points to random colonization as the prevailing community assembly mechanism. Nestedness can be influenced both by random processes (e.g., random sampling (colonization) or incidental death) and by deterministic ones (environmental filtering, selection or extinction). The significant difference in the degree of nestedness between the early and late phases and between the sea beet and sugar beet varieties suggested that the proportions of stochastic and deterministic processes might also differ over time. To corroborate these results, we predicted the shares of assembly processes using the iCAMP package. As in other systems (e.g., a glacier forefront or a field after nudation), stochastic community assembly processes dominated in both the early and late phases, explaining the low level of nestedness. The high share of stochastic community assembly mechanisms may be caused by the fact that the soil bacterial communities from which endophytic organisms are recruited are highly functionally redundant (i.e., there are many organisms with a given trait or set of traits). This view is supported by the predicted gene content sets being closer to each other (less separated, smaller spread) than the sets of ASVs (more separated, greater spread). However, on the one hand, high similarity of marker sequences does not necessarily mean highly similar gene content (e.g., due to horizontal gene transfer), and on the other hand, even genomes dissimilar in terms of a marker sequence encode a set of core functions and may share non-core traits, as indicated by the pangenome concept. Therefore, taking into account the high NSTI values (see Fig. SR7 in Supplementary Results), we treat the PICRUSt2 predictions only as a hint at what the functional potential of the examined communities might look like. The high level of randomness observed in our system may also be attributed to relatively low coverage, as the abundance of some organisms might have fallen below the detection level, thus decreasing community similarity. This problem might have been particularly pronounced for soil samples, as they tended to be undersampled (see Supplementary Results, Figs. SR3-5). Dispersal limitation has commonly been thought to be the most important mechanism shaping communities at early stages of succession, which was also observed in other systems (reviewed in ). However, there are also studies reporting selection as the main microbial community assembly mechanism during plant development. All this said, our predictions might be inaccurate due to inherent limitations of the iCAMP methodology and low sequencing depth (as discussed earlier). In particular, dispersal limitation may be considered both a deterministic process that is difficult to distinguish from selection and a random process. Currently, there is no means of partitioning DL into deterministic and random components. It seems plausible that dispersal limitation and selection act in concert during bacterial colonization of axenic plants and that the proportions of these two mechanisms might depend on environmental conditions. The low level of selection detected in our study suggests that under optimal growth conditions, plants do not exert high selective pressure on bacteria; however, the apparently low selection might stem from high functional redundancy in the soil bacterial community.
The low selection level also points to dilution caused by plant growth as the mechanism behind the decrease in bacterial load in late plant samples.

Regardless of the soil type and genotype, the colonization of axenic beet plants occurred in at least two phases: an early phase lasting up to the 35th day of growth in soil and a late phase lasting until the end of the experiment, akin to microbial primary succession in other environments. Plants govern bacterial succession in rhizosphere soil, as it follows the same temporal pattern as succession in the endosphere. Bacterial communities are compositionally stable within each phase, and the bacterial load and nestedness are much lower in the late phase than in the early phase. Colonization is largely random at the taxonomic level: various strains (ASVs) predicted to encode similar functions are recruited from a pool of functionally redundant soil bacteria. Regardless of the soil type and genotype, inoculation influenced the bacterial communities slightly but significantly, and few inoculant organisms were recruited by the plants. The scarcity of inoculant-derived strains in the endosphere suggests that inoculation acts mostly indirectly, probably via microbe‒microbe interactions. As both bacterial entry into the endosphere and bacterial division seem to be arrested in the late phase, early application of bioinoculants seems to be the right choice.

Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work the authors used the Curie service ( https://www.aje.com/curie/ ) in order to improve language. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

Supplementary Information
Effect of Global Longitudinal Strain at Discharge Period on Predicting Cardiac Defibrillator Implantation in STEMI Patients with Impaired Left Ventricle Systolic Functions
098983e7-1de5-4f46-93ba-03eb6b2506c6
11943911
Pathologic Processes[mh]
Nearly half of patients with ischemic heart failure following STEMI (ST-Elevation Myocardial Infarction) succumb within the first five years after the index event, primarily due to progressive pump failure or arrhythmic complications. Contemporary clinical studies on patients with ischemic heart failure have demonstrated that low ejection fraction is one of the most significant predictors of sudden cardiac death. Therefore, current treatment guidelines recommend the use of implantable cardioverter defibrillator (ICD) therapy for the primary prevention of sudden death in selected patient groups who develop heart failure due to ischemic heart disease. According to evidence-based guidelines that inform our current treatment practices, ICD implantation for the primary prevention of sudden death is recommended in ischemic heart failure patients based on assessments conducted in the third month post-discharge. Specifically, patients with an ejection fraction (EF) ≤ 35% and a functional capacity of class 2 or 3 according to the New York Heart Association (NYHA) classification, as well as those with an EF ≤ 30% and NYHA class 1 functional capacity, are advised to receive ICD therapy for primary prevention. In fact, the guidelines also recommend the use of a wearable cardioverter defibrillator (WCD) with a Class 2B indication in selected patients after myocardial infarction. The guidelines recommend measuring EF values at an optimal time of at least 40 days post-STEMI and 3 months following revascularization through echocardiographic evaluation. This timing is suggested primarily because, in some patients with an early EF measurement of ≤35%, optimal medical treatment after discharge can reverse adverse left ventricular remodeling and improve EF values, thereby eliminating the indication for ICD implantation. Although guidelines emphasize an EF threshold of ≤35% for the prevention of sudden death in heart failure patients, studies report that 70–80% of patients experiencing out-of-hospital cardiac arrest have an EF > 35%. Therefore, relying solely on the EF criterion and waiting 90 days may not be sufficient to prevent sudden death in patients who develop ischemic heart failure post-STEMI. This highlights the need for additional diagnostic methods to improve risk assessment in these patients. In the literature, evidence suggests that global longitudinal strain (GLS) measurement is superior to EF in assessing left ventricular systolic function, as well as in predicting mortality and arrhythmias following STEMI. We believe that left ventricular GLS, as an accessible and repeatable echocardiographic tool, may be valuable in identifying patients at high risk of sudden cardiac death, in addition to EF, when determining the need for ICD implantation. This study aims to investigate whether the left ventricular GLS value measured at discharge in STEMI patients with impaired left ventricular systolic function (EF ≤ 35%) who were discharged on optimal medical therapy can predict a low EF at the 3-month follow-up.

2. Materials and Methods

This study was designed as a prospective cohort study.

2.1. Study Population

The study was conducted on patients admitted to the coronary intensive care unit of the Trakya University hospital with a diagnosis of STEMI who underwent primary percutaneous coronary intervention between 1 March 2021 and 1 March 2022.
The study was conducted in accordance with the Declaration of Helsinki, and ethical approval was obtained prior to initiation (TUTF-BAEK-2021/96). The study initially included 113 patients who were diagnosed with STEMI, underwent primary percutaneous intervention, consented to participate in the research, and had echocardiographic images suitable for strain analysis. A total of 44 patients were excluded from the analysis: 3 patients with a history of previous MI and an EF value below 50% according to hospital records, and 41 patients whose echocardiographic evaluation at discharge revealed an EF of ≥50%. The remaining 69 patients were classified as having impaired left ventricular systolic function in line with current guidelines and were divided into two groups based on EF values. Group 1 included patients with an EF ≤ 35%, while Group 2 comprised those with an EF of 36–49%. Sociodemographic, clinical, angiographic, and echocardiographic data were recorded for both groups. EF values were obtained from images taken at discharge and at the 3-month follow-up, while left ventricular GLS values were recorded from the images taken at discharge.

2.2. Study Flowcharts

The study's sample size, inclusion and exclusion criteria, methodology, and results are shown in the flow chart below ( ).

2.3. Echocardiographic Evaluation

2.3.1. Conventional Echocardiographic Measurements

Echocardiographic images were recorded using Vivid S70 systems (GE Healthcare, Horten, Norway) and transferred to EchoPAC software (application software version: 6.1.3). All measurements were performed in accordance with current echocardiography guidelines. Apical 4-chamber, apical 3-chamber, and apical 2-chamber images were obtained. In addition to these images, tissue Doppler measurements, M-mode measurements, and pulsed-wave (PW) Doppler measurements were conducted. Left ventricular ejection fraction was calculated using the biplane modified Simpson method. In the parasternal long-axis view, M-mode was employed along the same axis to measure end-systolic and end-diastolic diameters, wall thicknesses, and left atrial diameter. In the apical 4-chamber view, right atrial and right ventricular diameters were measured. Using PW Doppler in the apical 4-chamber view, the early diastolic filling velocity (E wave), the peak filling velocity during atrial systole (A wave), and E/A ratios were calculated. E′ values were measured using tissue Doppler and PW Doppler. The average of the lateral and septal E′ values was taken, and E/e′ ratios were recorded.

2.3.2. Two-Dimensional Strain Echocardiographic Analyses

To perform strain analysis measurements, all images were transferred to the EchoPAC analysis software. Strain analyses were calculated over three cardiac cycles using apical 4-chamber, apical 3-chamber, and apical 2-chamber images at a frame rate of 50–80 frames per second. All strain analyses were conducted according to current measurement guidelines. In the measurements, one point on the apex and a point at each corner of the mitral annulus were identified. The software automatically traced the myocardial borders. Where necessary, manual adjustments were made to the point tracing to optimize the accuracy of the measurements. After manual adjustments, the strain measurements were automatically calculated by the software. In the apical 3-chamber view, the point of aortic valve closure was identified as the end of systole. After processing the images from the three different views, a 17-segment bullseye model was created in the software ( ). The GLS values were automatically calculated by the software as percentage values. All regional longitudinal strains (RLSs) were computed based on the average peak strain values of all segments, categorized according to the perfusion territories of the three major coronary arteries, using the 17-segment model for all layers.

2.4. Statistical Analysis

Statistical analyses were performed using SPSS version 20.0 (IBM Co., Armonk, NY, USA). Data are expressed as mean ± standard deviation (SD), median (interquartile range), or count (%). The normality assumption was tested using the Shapiro–Wilk test. Categorical variables were compared between groups using the Pearson chi-square test. Differences between the patient group and the control group were assessed with an independent-samples t-test for parametric variables and the Mann–Whitney U test for non-parametric variables. For patients with EF ≤ 35%, stepwise univariate and multivariate logistic regression analyses were employed. Area under the curve values and cutoff points were obtained using ROC analysis. A p-value of <0.05 was considered statistically significant.

3. Results

A total of 69 patients with impaired left ventricular systolic function were divided into two groups based on ejection fraction (EF) values: Group 1 (patients with EF ≤ 35%) and Group 2 (patients with EF 36–49%). Group 1 consisted of 29 patients (20.6% female), while Group 2 included 40 patients (63.3% female). The average age of Group 1 was 60.2 ± 9.5 years, while the average age of Group 2 was 54.7 ± 11 years (p = 0.03). A higher number of patients with a history of hyperlipidemia was observed in Group 1 (p = 0.001). In Group 1 patients, the glomerular filtration rate (GFR) was significantly lower (p = 0.03), peak troponin levels were significantly higher (p = 0.05), and the number of affected vessels was greater (p = 0.001). The higher average age, greater number of affected vessels, and higher peak troponin levels in Group 1 compared with Group 2 may explain the lower EF observed in this group. There were no statistically significant differences between the two groups regarding the localization of myocardial infarction on ECG, door-to-balloon time, the coronary artery associated with the infarction, or other parameters. A comparison of socio-demographic characteristics, cardiovascular risk factors, and baseline hemodynamic and laboratory data of both groups is presented in , while a comparison of angiographic findings, discharge medications, and three-month follow-up outcomes is provided in . There was no significant difference between the groups in terms of recurrent hospitalizations, recurrent MI, or mortality during the three-month follow-up period. However, due to a proportionally higher number of diseased vessels, the number of repeat interventions was significantly higher (p < 0.001). When comparing echocardiographic data between the two groups, a statistically significant difference was found in EF values at discharge and at the 3-month follow-up (p < 0.001). At discharge, the average LV GLS value for Group 1 patients was −8.9%, while that for Group 2 patients was −15.1%; this difference was statistically significant (p < 0.001). The left ventricular end-diastolic and end-systolic diameters of Group 1 patients were significantly larger than those of Group 2 (p < 0.001). Additionally, the left atrial diameter and E/e′ ratio in Group 1 patients were significantly higher than those in Group 2 (p < 0.001) ( ).
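The group comparisons reported above follow the branching workflow described in the statistical analysis subsection (a Shapiro–Wilk normality check, then an independent-samples t-test or a Mann–Whitney U test). A minimal Python sketch of that logic, with illustrative values standing in for the real data:

```python
# Sketch of the group-comparison logic from the statistical analysis subsection:
# Shapiro-Wilk normality check, then an independent-samples t-test for normally
# distributed variables or a Mann-Whitney U test otherwise. Variable names and
# the example data are illustrative assumptions, not the study's dataset.
from scipy import stats

def compare_groups(group1, group2, alpha: float = 0.05):
    normal = (stats.shapiro(group1).pvalue > alpha and
              stats.shapiro(group2).pvalue > alpha)
    if normal:
        test_name = "independent-samples t-test"
        stat, p = stats.ttest_ind(group1, group2)
    else:
        test_name = "Mann-Whitney U test"
        stat, p = stats.mannwhitneyu(group1, group2, alternative="two-sided")
    return test_name, stat, p

# e.g., comparing ages of Group 1 (EF <= 35%) and Group 2 (EF 36-49%)
ages_g1 = [55, 62, 70, 58, 64, 61, 59, 66]   # illustrative values only
ages_g2 = [48, 53, 60, 51, 57, 49, 55, 62]
print(compare_groups(ages_g1, ages_g2))
```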
In the univariate and multivariate regression analyses conducted to identify factors predicting an EF ≤ 35% at 3 months post-STEMI, only the LV GLS value measured at discharge was found to be an independent predictor (p = 0.042) ( ). It was determined that an LV GLS value below 9.55% in absolute value at discharge indicated that the EF would remain below 35% by day 90 post-discharge, with a sensitivity of 75% and a specificity of 76.5% (AUC = 0.814, p = 0.005). The ROC curves and analysis results are shown in .

4. Discussion

The key findings of this study can be summarized as follows: (1) in patients who developed left ventricular systolic dysfunction following STEMI, LV GLS analysis performed at discharge may serve as a predictor of LV EF at the 3-month follow-up; (2) in patients with an LV GLS value below 9.55% in absolute value at discharge, LV EF remains ≤35% at the third month despite optimal medical treatment; (3) LV GLS obtained from speckle tracking echocardiography can be used as a complementary method to LVEF for assessing LV systolic function when making treatment decisions. LVEF is currently the most validated and widely used echocardiographic marker, serving as a selection criterion for evidence-based medical or device therapies. However, GLS assessment obtained from speckle tracking echocardiography (STE) offers a precise and feasible method that overcomes many limitations of LVEF, including reproducibility issues in serial testing and the detection of LV dysfunction in pathologically remodeled hearts. Multicenter post-infarction studies have demonstrated a close relationship between the degree of left ventricular systolic dysfunction and cardiovascular and all-cause mortality. A study conducted three decades ago demonstrated that, over an average follow-up period of 12 years, survival was 21% in patients with an LVEF ≤ 35%, 54% in those with an LVEF between 36% and 49%, and 73% in patients with a normal ejection fraction. Despite the remarkable advancements in diagnostics and therapy today, studies continue to show a strong association between low ejection fraction and mortality. In patients with impaired LV systolic function, a linear relationship has been reported between GLS and LVEF, with a GLS value of 11% or 12% corresponding to an LVEF of 35%. The greatest advantage of GLS over LVEF is reported to be its ability to detect subclinical myocardial dysfunction at an early stage, before any decrease in LVEF. Currently, LVEF is the primary criterion used for the implantation of cardiac defibrillators for primary prevention in patients with heart failure. However, the prognostic value of EF remains a topic of debate. Studies have reported that strain imaging may serve as a better prognostic factor independent of LVEF. In the literature, a study of 70 patients with ischemic or non-ischemic cardiomyopathy who had a cardiac defibrillator implanted and an LVEF ≤ 40% found a significant difference in GLS values between those who experienced arrhythmic events and those who did not over an average follow-up period of 1.8 ± 0.6 years. In patients who experienced arrhythmic events, the GLS value was −6.97 ± 3.06, while in those who did not, it was −11.82 ± 4.25. The study reported that a GLS value less negative than −10% predicted the occurrence of an arrhythmic event with 90% specificity and 72.2% sensitivity, indicating that GLS is superior to LVEF in predicting ventricular arrhythmias in patients with impaired systolic function.
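A cutoff such as the 9.55% GLS threshold reported above is conventionally read off the ROC curve at the point maximizing Youden's J (sensitivity + specificity − 1). The sketch below illustrates that derivation on synthetic data; it is not the study's dataset, and scoring by negated |GLS| is an assumption made so that lower GLS magnitudes map to the positive class.

```python
# Sketch: deriving an optimal discharge-GLS cutoff from a ROC curve via
# Youden's J, the usual way a threshold like the one reported above is
# obtained. GLS magnitudes and outcome labels here are synthetic.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# |GLS| at discharge (lower magnitude = worse function) and outcome
# (1 = EF still <= 35% at 3 months). Illustrative data only.
gls_abs = np.array([7.1, 8.0, 8.8, 9.2, 9.4, 10.1, 10.8, 11.5, 12.3, 13.0])
ef_low_at_3m = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])

scores = -gls_abs            # lower |GLS| should predict the positive class
fpr, tpr, thresholds = roc_curve(ef_low_at_3m, scores)
auc = roc_auc_score(ef_low_at_3m, scores)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
cutoff = -thresholds[best]   # back to |GLS| units
print(f"AUC = {auc:.3f}; cutoff |GLS| = {cutoff:.2f}%, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```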
In a study conducted on 4172 patients diagnosed with acute heart failure, patients were divided into three groups based on their EF and GLS results. According to EF values, the groups were categorized as low (<40%), mildly to moderately reduced (40% to 49%), or preserved ejection fraction (≥50%). Based on GLS results, patients were grouped as having mild (>12.6%), moderate (8.1% < GLS < 12.5%), or severe (GLS ≤ 8.0%) strain reduction. The primary endpoint was all-cause mortality during the 5-year follow-up period. It was found that the decrease in GLS correlated more strongly with mortality than the decrease in EF did. In the multivariable regression analysis of that study, each 1% increase in GLS was associated with a 5% reduction in mortality risk. At the conclusion of the study, the researchers argued that GLS is prognostically superior to EF. In the literature, it has been reported that patients who experience malignant arrhythmic events after STEMI have less negative (worse) GLS values than those who do not (−14.8 ± 4.7% versus −18.2 ± 3%). In another study conducted on heart failure patients, it was demonstrated that GLS possesses the highest prognostic value among echocardiographic parameters, including LVEF, in predicting major adverse cardiac events. Consistent with the findings of these studies, we believe that GLS may serve as a stronger prognostic indicator than LVEF in patients with impaired left ventricular systolic function following STEMI. In our study, the GLS value at discharge for Group 1 patients (EF ≤ 35%) was significantly lower than that of Group 2 patients (EF 36–49%) (−8.9 vs. −15.1, p < 0.001). As a striking finding, it was observed that in Group 1, patients with a GLS magnitude below 9.55% did not achieve an LVEF greater than 35% during the 3-month follow-up, despite receiving optimal medical treatment (LV GLS AUC (95% CI) = 0.814 (0.642–0.986), p = 0.005, sensitivity 75%, specificity 76.5%). Beyond the existing literature showing that GLS has better prognostic value than LVEF, we demonstrated in this study that the GLS measured at discharge can also predict a low EF at the 3-month follow-up. Current cardiology guidelines recommend that, for STEMI patients who have undergone revascularization and have an LVEF ≤ 35%, optimal medical treatment should be administered for 3 months before deciding on ICD implantation to prevent sudden cardiac death. If, after 3 months of treatment, the LVEF remains ≤35%, a decision is made regarding device implantation. However, it is undeniable that these patients are at high risk of sudden cardiac death and ventricular arrhythmias during the interval before ICD implantation. In our study, patients with an EF below 35% and a GLS magnitude below 9.55% at discharge did not show improvement in EF during the 3-month follow-up period, despite optimal medical treatment. Therefore, for these high-risk patients who do not yet meet the guideline criteria for ICD implantation but have a GLS magnitude below 9.55%, wearable ICD bridging therapy may be a viable option to prevent adverse outcomes. Additionally, there are suggestions in the literature that cardiac magnetic resonance imaging (MRI) can be used to identify high-risk groups.
A meta-analysis showed that the assessment of myocardial fibrosis using late gadolinium enhancement (LGE) is a strong predictor of ventricular tachyarrhythmias (VTAs) in patients with ischemic and non-ischemic left ventricular dysfunction. That study highlights that the presence of myocardial fibrosis is an important factor increasing the risk of arrhythmias in these patients and that LGE could be a useful tool in their management. In another study, the prognostic significance of GLS obtained through feature-tracking cardiac magnetic resonance (CMR) in patients with STEMI was investigated. The study examined the role of GLS in assessing left ventricular function post-STEMI and in predicting the long-term prognosis of patients. The findings indicate that GLS measurements could more accurately evaluate left ventricular function after STEMI, with low GLS values being associated with poor prognosis and higher mortality risk. Furthermore, GLS proved to be a more sensitive method than left ventricular ejection fraction (LVEF) in detecting subclinical myocardial dysfunction at an early stage. In conclusion, GLS could be an important tool for assessing cardiac function and predicting long-term prognosis in STEMI patients, and it may aid in treatment planning and risk stratification. However, cardiac magnetic resonance imaging is not accessible for routine practice in every center. Therefore, we believe that the measurement of left ventricular GLS by echocardiography, owing to its accessibility and repeatability, could be useful in identifying patients at high risk of sudden cardiac death, in addition to the EF value, when deciding on ICD implantation. Due to its high clinical sensitivity and myocardial tissue specificity, troponin is a primary biomarker for the diagnosis of myocardial infarction (MI). Peak troponin levels in STEMI patients have been reported to be associated with infarct size, systolic dysfunction, and adverse outcomes. Consistent with the literature, our study also found significantly higher peak troponin levels in the patient group with an EF value below 35%. The most significant limitation of our study is that it is a single-center study with a relatively small number of patients. During the monitored follow-up period in the coronary intensive care unit, no life-threatening malignant arrhythmia episodes were observed in the included patients. However, the lack of Holter rhythm analyses, myocardial perfusion scintigraphy, or cardiac magnetic resonance imaging to assess scar presence/percentage during the hospital stay and the three-month post-discharge follow-up period can be considered a limitation. Since the primary aim of our study was to investigate whether left ventricular strain values at discharge could predict a low EF at the third month, it does not provide data on long-term cardiac events. In accordance with our study hypothesis, only patients with impaired left ventricular systolic function, defined as an EF below 50%, were included. We believe that future studies including patients with an EF ≥ 50% post-MI, larger sample sizes, and longer follow-up periods in multicenter settings could provide further insights.
With such studies, the benefit of LV GLS assessment, in addition to LV EF and symptom status, for ICD decision-making in ischemic heart failure patients may be demonstrated with stronger sensitivity and specificity values.

5. Conclusions

In patients with ischemic heart disease and impaired systolic function, the LV GLS value obtained from strain echocardiography at discharge can predict LV EF values below 35% on transthoracic echocardiography after three months. This finding may assist in the early identification of potential ICD candidates in this patient group, who are at high risk of life-threatening arrhythmic events due to the development of ischemic heart failure following STEMI, without the need to wait for three months.
Effects of low-dose radiation produced during radiofrequency ablation guided by 3D mapping on mitochondrial apoptosis in diabetic cardiomyocytes
c8c70e97-2c46-4dc5-aa9b-bbb3dd062d98
11916466
Cardiovascular System[mh]
Initially, accurate target localization in radiofrequency ablation relied on electrophysiological signals and fluoroscopy for all types of arrhythmias. The adoption in recent years of 3D mapping systems, which combine anatomical and electrophysiological information, has provided new therapeutic options for radiofrequency ablation of arrhythmias. These systems are valuable for the diagnosis and ablation of cardiac arrhythmias and facilitate the treatment of complex arrhythmias. These improvements can reduce procedural and fluoroscopic times, which decreases radiation exposure during radiofrequency ablation. However, the radiation dose for radiofrequency ablation under 3D mapping has not been determined. Like all sources of ionizing radiation (IR), fluoroscopy remains a major concern during these procedures. Exposure to IR during cardiac catheterization may have harmful consequences for patients and the medical staff involved in the procedures. Clinical and experimental results consistently demonstrate that the risk of heart disease is distinctly increased in people exposed to IR. The myocardial tissue of patients who undergo radiofrequency ablation is closest to the radiation source used during the operation. Whether the radiation generated during radiofrequency ablation guided by 3D mapping has a negative effect on the patient's heart is unknown. In addition, diabetes has become a public health issue and increases the risk of cardiac arrhythmias. Approximately 20% of patients with atrial fibrillation also have diabetes mellitus. Some studies have shown that the heart tissue of diabetic patients is more vulnerable to IR. In contrast, other studies indicate that low-dose radiation (LDR) prevents diabetic cardiomyopathy and induces only slight changes in the oxidative responses of normal cardiomyocytes. Therefore, the effect of the intraoperative radiation dose on normal or diabetic cardiomyocytes is still unknown, and the molecular mechanisms of the radiation response in diabetes are not fully understood. Apoptosis plays an important role in maintaining many cellular functions, such as cell fragmentation and heterophagy. Mitochondria play a crucial role in regulating cell death and appear to be the central executioners of apoptosis. The mitochondrial pathways of apoptosis are essential during the development of numerous cardiovascular diseases, such as coronary heart disease, radiotherapy-induced cardiomyopathy, and heart failure. Radiation increases the release of inflammatory mediators, which leads to oxidative stress and mitochondrial apoptosis and is reduced by the interleukin-1β-blocking agent canakinumab. Therefore, determining the mechanism underlying the effect of the radiation dose produced during 3D-guided radiofrequency ablation and exploring the effects of radiation on myocardial cells may provide promising therapeutic tools for treating cardiovascular disease. We hypothesized that the intraoperative radiation generated during radiofrequency ablation via 3D mapping affects the mitochondrial apoptosis of cardiomyocytes and can also affect the diabetic myocardium. Therefore, the purposes of this study were 1) to identify the dose of intraoperative radiation generated during radiofrequency ablation under 3D mapping, 2) to detect the effect of the intraoperative radiation dose on normal or diabetic cardiomyocytes, and 3) to clarify its specific molecular mechanism.
Statistical analysis

Statistical analyses were conducted using GraphPad Prism 10. Data normality was assessed via the Shapiro–Wilk test. For normally distributed data with three or more groups, one-way analysis of variance (ANOVA) followed by post hoc multiple comparison tests was employed. In cases involving two categorical independent variables affecting a continuous dependent variable, or multiple groups over time, two-way ANOVA was utilized, complemented by Tukey's post hoc multiple comparison test. Post hoc comparisons were annotated with adjusted P values to indicate statistical significance in the graphical representations. Normally distributed data are presented as mean ± standard error of the mean (SEM). Non-normally distributed data, or those with sample sizes of less than six, were analyzed using non-parametric methods, specifically the Kruskal–Wallis test and Dunn's post hoc multiple comparison test. These data are reported as median values with 95% confidence intervals. Survival analysis was performed using Kaplan–Meier curves and log-rank tests. This study did not employ full or cross-test multiple testing corrections; instead, local corrections were applied when necessary. The sample size for each group represents the number of independent experimental units, determined based on prior experience and a power analysis using a two-tailed Student's t-test to ensure detection of at least a 20% difference, assuming a statistical power of 80% (1 − β = 0.80) and an alpha level of 0.05. All samples were included in the analysis without exclusions. Statistical significance was defined as a two-tailed P value of less than 0.05.

Radiation dose measurements in patients

A total of 280 patients (182 men and 98 women) were enrolled in this study. These patients underwent radiofrequency ablation for the first time due to a variety of arrhythmias, including frequent ventricular premature beats (FVPB) (43), paroxysmal supraventricular tachycardia (PSVT) (128), atrial fibrillation (AF) (57), atrial flutter (AFL) (27) and other types of arrhythmias (25). Radiofrequency ablation was guided by 3D mapping (CARTO®, Biosense Webster Inc., Diamond Bar, CA, USA), and fluoroscopic guidance was provided with a Philips C biplane image intensifier system (Philips, Cleveland, USA). We collected data on air kerma (the kinetic energy released per unit mass of air), measured in milligrays (mGy). Blood samples were randomly collected preoperatively and immediately postoperatively for the detection of related inflammatory factors, including IL-2, IL-4, IL-6, IL-10, TNF-α and IFN-γ. The radiation dose was obtained from the C biplane workstation. All study protocols were approved by the Ethics Committee of the First Hospital of Shanxi Medical University in accordance with the Declaration of Helsinki (K-SK033). Written informed consent was obtained from all subjects prior to study inclusion.

Cell culture and treatments

Human cardiomyocyte-like AC16 cells were purchased from the American Type Culture Collection (ATCC; Rockville, MD, U.S.A.). All cell lines were authenticated by short tandem repeat (STR) profiling and were routinely tested for Mycoplasma contamination. Foetal bovine serum was obtained from CellMax (Beijing, China, #SA211.02). All other cell culture-related reagents were purchased from Boster Biological Technology (Wuhan, China).
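As an aside on the power analysis described in the statistical analysis subsection above: translating a "20% difference" into a sample size requires an assumed standard deviation. The sketch below, using statsmodels, shows the calculation under illustrative assumptions; the control mean and SD are invented for the example and are not values reported by the study.

```python
# Sketch: sample-size estimation matching the power analysis described above
# (two-tailed two-sample t-test, alpha = 0.05, power = 0.80). A "20% difference"
# maps to Cohen's d only once an SD is assumed; both values here are illustrative.
from statsmodels.stats.power import TTestIndPower

mean_diff = 0.20 * 100               # 20% of an assumed control mean of 100 (a.u.)
assumed_sd = 25.0                    # assumed pooled standard deviation
effect_size = mean_diff / assumed_sd # Cohen's d = 0.8

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.80,
                                          alternative="two-sided")
print(f"~{n_per_group:.1f} samples per group")  # ~26 per group for d = 0.8
```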
AC16 cells were cultured in cardiomyocyte growth medium containing 10% foetal bovine serum, 2 mM glutamine, 100 U/ml penicillin and 100 μg/ml streptomycin in a humidified atmosphere of 5% CO₂ at 37 °C. After reaching 80% confluence, the AC16 cells were randomized to receive the following treatments: the control group (normal glucose/normal lipid; normal glucose containing 5.5 mM D-glucose + 19.5 mM L-glucose) and the high glucose/high lipid group (HG/HL; 25 mM D-glucose/250 μM palmitate, 24 h incubation). The cells were then directly irradiated under the X-ray emitter after 24 h of HG/HL intervention. The radiation dose (50, 100 or 200 mGy) was verified with the dose meter on the X-ray emitter.

Enzyme-linked immunosorbent assay

The level of IL-6 in the cell culture supernatant was determined via human IL-6 ELISA kits (Abcam, ab178013; Shanghai, China). The level of IL-10 in the cell culture supernatant was determined via human IL-10 ELISA kits (Abcam, ab185986; Shanghai, China) following the manufacturer's instructions.

Western blotting

A total of 50 μg of total protein per sample was separated via gel electrophoresis and transferred to polyvinylidene fluoride (PVDF) membranes, followed by blocking with 5% skim milk at room temperature for 1 h. The membranes were then incubated with the appropriate primary antibodies against cleaved caspase-3/caspase-3 (dilution 1:1000; Cell Signaling Technology, #9661/9662; Danvers, MA, U.S.A.), cleaved caspase-9/caspase-9 (dilution 1:1000; Cell Signaling Technology, #9509/9504; Danvers, MA, U.S.A.), Bcl-xl (dilution 1:1000; Abcam, ab32124; Cambridge, MA, U.S.A.), Bax (dilution 1:1000; Abcam, ab182733; Cambridge, MA, U.S.A.), and β-actin (dilution 1:1000; BA2913, Boster, Shanghai, China) at 4 °C overnight. After washing with Tris-buffered saline with Tween (TBST) three times, the membranes were incubated with the secondary HRP-conjugated antibody (anti-mouse or anti-rabbit, 1:10,000 dilution; Boster, Shanghai, China) for 1 h. Images were captured on a ChemiDoc MP Imaging System (Bio-Rad, CA, USA), and band density was quantified with ImageJ (NIH).

Flow cytometry

The intracellular reactive oxygen species (ROS) level was measured via a ROS assay kit (Jiancheng Bioengineering Institute, Nanjing, China) based on 2',7'-dichlorodihydrofluorescein diacetate (DCFH-DA) staining following the manufacturer's instructions. Cell apoptosis was determined with an Annexin V-FITC Apoptosis Detection Kit (KeyGEN BioTECH, KGA105, Nanjing, China). After treatment, the cells were collected via trypsin digestion without ethylenediaminetetraacetic acid (EDTA), washed in phosphate-buffered saline (PBS) and resuspended in 200 μl of binding buffer. The cells were then stained with 5 μl of Annexin V-FITC and 5 μl of propidium iodide, or with 5 μl of DCFH-DA. Following incubation for 30 min at 37 °C in the dark, the cells were analysed on a FACS flow cytometer (BD Biosciences) equipped with CellQuest software. JC-1 (5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylimidacarbocyanine) measurements were performed with a JC-1 Mitochondrial Membrane Potential Fluorescent Probe (YEASEN Biotech, 40705ES03, Shanghai, China). A decrease in the mitochondrial membrane potential indicates the early stage of apoptosis. At high mitochondrial membrane potentials, JC-1 aggregates in the mitochondrial matrix and forms J-aggregates, which produce red fluorescence.
At low mitochondrial membrane potentials, JC-1 cannot accumulate in the mitochondrial matrix and remains a monomer, which produces green fluorescence. After treatment, the cells were collected via trypsin digestion without EDTA, washed in PBS and resuspended in 500 μl of JC-1 working medium in the dark. After incubation for 15–30 min in the dark, the cells were centrifuged and resuspended in 0.5 ml of PBS. Flow analysis was then performed on a FACS flow cytometer (BD Biosciences) equipped with CellQuest software.

Oxygen consumption rate detection

Oxygen consumption rate (OCR) measurements were performed with a Seahorse XF96 Extracellular Flux Analyzer and its software (Seahorse Biosciences, North Billerica, MA). Mitochondrial respiration was monitored in real time according to a previous report. Approximately 6 × 10³ cells in 80 µL of cell culture growth medium were seeded per well in a Seahorse XFp Cell Culture Miniplate and allowed to adhere overnight in a 37 °C humidified incubator with 5% CO₂. After adherence, the cells were randomized to receive the treatments for 24 h. In addition, the sensor cartridge was hydrated in an Agilent Seahorse extracellular flux (XF) calibrant at 37 °C in a non-CO₂ incubator overnight. On the day of analysis, the assay medium was prepared with XF base medium (102353–100), 1 mM pyruvate, 2 mM glutamine, and 10 mM glucose (adjusted to pH 7.4 at 37 °C). The cells were washed three times with the assay medium described above, and the cell culture miniplates were placed in a 37 °C incubator without CO₂ for 1 h prior to the assay. Simultaneously, 1.5 µM oligomycin, 1.0 µM carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone (FCCP), and 0.5 µM rotenone/antimycin A were prepared in the assay medium and loaded into the ports of the sensor cartridge. The loaded XF sensor cartridge with the XF utility plate was then placed into the analyser and calibrated. After calibration, the utility plate was removed, and the Seahorse XFp Cell Culture Miniplate was placed on the tray. The basal OCR, ATP production-linked OCR, maximal OCR, proton leak-linked OCR, spare respiratory capacity and non-mitochondrial respiration were then determined.

LDR at 200 mGy reversed the accumulation of ROS, the disruption of Δψm, and the impairment of mitochondrial function caused by HG/HL

Mitochondrial damage plays a key role in cellular mitochondrial apoptosis, largely through the production of reactive oxygen species (ROS). As our in vitro results indicated that an LDR of 200 mGy attenuated apoptosis, we next focused on the antioxidative properties of LDR. Several experiments were performed as follows. First, FCM was performed to observe the effect of LDR on intracellular ROS accumulation. As shown in Fig. A-B, there was no significant effect on ROS accumulation in the control groups with or without radiation. Consistent with the apoptosis results, 200 mGy LDR slightly reversed the increase in ROS induced by HG/HL (P = 0.0239). Second, to determine whether LDR causes mitochondrial dysfunction, we evaluated the effect of LDR on mitochondrial function in cardiomyocytes treated with control or HG/HL medium.
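The mitochondrial respiration parameters named at the end of the methods above are conventionally derived as simple differences between phase means of the OCR trace (baseline, post-oligomycin, post-FCCP, post-rotenone/antimycin A). A sketch with placeholder numbers, following the standard mito stress test definitions, which we assume were used here:

```python
# Sketch: deriving the standard mito stress test parameters from an OCR trace
# (oligomycin -> FCCP -> rotenone/antimycin A). The numeric OCR values are
# placeholders; real traces come from the Seahorse software export.

# mean OCR (pmol O2/min) per phase: baseline, post-oligomycin, post-FCCP, post-Rot/AA
baseline, post_oligo, post_fccp, post_rot_aa = 120.0, 45.0, 210.0, 15.0

non_mito = post_rot_aa                # non-mitochondrial respiration
basal = baseline - non_mito           # basal respiration
atp_linked = baseline - post_oligo    # ATP production-linked respiration
proton_leak = post_oligo - non_mito   # proton leak-linked respiration
maximal = post_fccp - non_mito        # maximal respiration
spare = maximal - basal               # spare respiratory capacity

for name, val in [("non-mitochondrial", non_mito), ("basal", basal),
                  ("ATP-linked", atp_linked), ("proton leak", proton_leak),
                  ("maximal", maximal), ("spare capacity", spare)]:
    print(f"{name:18s}: {val:6.1f} pmol O2/min")
```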
Changes in the mitochondrial membrane potential (Δψm) of cardiomyocytes treated with LDR or HG/HL were assessed with JC-1 via FCM. As shown in Fig. A-B, no significant effect on JC-1 was detected in control or HG/HL cardiomyocytes with 50 or 100 mGy radiation. However, 200 mGy radiation reversed the increase in the JC-1 signal induced by HG/HL (P = 0.0152), but not in control cardiomyocytes. Third, to confirm the antioxidative effect of LDR under diabetic conditions, a Seahorse XF24 analyser was used to measure the oxygen consumption rate (OCR) of cardiomyocytes and thereby the effect of LDR on mitochondrial aerobic metabolism. As shown in Fig. A-F, no statistically significant differences were detected in the mitochondrial respiration parameters (non-mitochondrial oxygen consumption, basal respiration, maximal respiration and ATP production) of the control groups with or without LDR. Following HG/HL intervention, there was a significant reduction in the mitochondrial respiration parameters of cardiomyocytes, including basal respiration, maximal respiration, and ATP production. Furthermore, an LDR of 200 mGy significantly reversed the reduction in the basal respiration (P = 0.0438), maximal respiration (P = 0.0022), and ATP production (P = 0.0433) of the HG/HL group; the effect was most pronounced for maximal respiration and was not detected in the 50 or 100 mGy groups. Taken together, these results demonstrated that an LDR of 200 mGy, but not 50 or 100 mGy, reduced ROS accumulation, mitochondrial membrane damage and respiratory decline in cardiomyocytes under diabetic conditions, which points to the potential of LDR to inhibit apoptosis in diabetic cardiomyopathy by safeguarding mitochondrial aerobic respiration through its antioxidative properties.

Low-dose radiation of radiofrequency ablation was established via 3D mapping

Arrhythmia patients (with FVPB, supraventricular tachycardia, atrial fibrillation, ventricular tachycardia, etc.) were recruited for this study from January 2019 to June 2021 at the First Hospital of Shanxi Medical University. The exposure radiation dose of radiofrequency ablation guided by 3D mapping was analysed. The results shown in Table indicate that the exposure radiation dose for atrioventricular nodal reentrant tachycardia (AVNRT) was 77.58 mGy; that for atrioventricular reentrant tachycardia (AVRT) originating from the left accessory pathway was 145.10 mGy, and that for the right accessory pathway was 85.66 mGy. The dose for FVPB originating from the left ventricle was 190 mGy, and that from the right ventricle was 60.42 mGy. The dose for paroxysmal atrial fibrillation (PAF) was 146.20 mGy; for persistent atrial fibrillation (PeAF), 198.00 mGy; for typical atrial flutter (AFL), 179.2 mGy; for atypical AFL, 188.26 mGy; and for other types of arrhythmia (including atrial tachycardia and ventricular tachycardia), 78.91 mGy (Table ). From the results above, we found that the radiation dose of radiofrequency ablation with 3D mapping in various arrhythmia patients was lower than 200 mGy. LDR has been defined as a dose below 200 mGy with respect to its effects on the cardiovascular system. In view of this, LDR was employed in the following experiments. High-dose radiation is detrimental to the cardiovascular system; however, the effect of low-dose radiation on cardiovascular disease has not been investigated intensively. Recently, accumulating evidence has demonstrated that LDR can have a protective effect on myocardial tissue under certain circumstances.
Therefore, we focused on the effect of LDR on myocardial tissue in subsequent experiments.

LDR (200 mGy) decreased IL-6 and increased IL-10 in diabetic patients with hyperlipidaemia

Radiation leads to the release of inflammatory mediators, but whether LDR does so is unknown. The above arrhythmia patients were divided into a normal group and a diabetic with hyperlipidaemia (DH) group according to blood glucose/lipid levels and were further divided into 50 mGy, 100 mGy and 200 mGy groups according to the intraoperative radiation dose. We collected preoperative and immediate postoperative serum from the patients for the detection of related inflammatory mediators, including IL-2, IL-4, IL-6, IL-10, TNF-α and IFN-γ. The characteristics of the patient population are shown in Table . After analysing the results, we found that patients with DH presented an increase in IL-6 and a decrease in IL-10, while LDR decreased IL-6 and increased the expression of IL-10; 200 mGy reversed the DH-induced increase in IL-6 (P = 0.0058) and decrease in IL-10 (P = 0.002) (Fig. A-B). However, no significant differences in the levels of IL-2, IL-4, TNF-α or IFN-γ were detected. To further verify these results, we treated cardiomyocytes with control or HG/HL medium, irradiated them with different doses of radiation (50, 100 and 200 mGy), and collected the cell supernatants for IL-6 and IL-10 ELISA detection; the same results were obtained (P = 0.0112 and P = 0.0045) (Fig. C-D).

LDR (200 mGy) reduced the degree of apoptosis induced by HG/HL

Since cell apoptosis plays a considerable role in the occurrence and development of cardiovascular diseases, we sought to investigate whether the LDR generated during radiofrequency ablation guided by 3D mapping was involved in myocardial apoptosis. The following experiments were conducted. First, the effect of LDR on myocardial apoptosis was analysed via flow cytometry (FCM). Apoptosis was determined 24 h after treatment. In normal cardiomyocytes, LDR appeared to slightly promote apoptosis; however, statistical analysis did not reveal significant differences between the control groups with or without LDR. In contrast, the rate of cardiomyocyte apoptosis was significantly elevated following HG/HL intervention, and the administration of LDR tended to reduce cardiac apoptosis in the HG/HL group. Most importantly, the 200 mGy radiation dose significantly attenuated cardiac apoptosis compared with the HG/HL group (P = 0.0322) (Fig. A-B). These results indicated that the administration of LDR reduced apoptosis in the HG/HL group but did not affect the occurrence of apoptosis in normal cardiomyocytes, suggesting that LDR may inhibit the progression of clinical diabetic cardiomyopathy; however, the precise mechanism remained to be elucidated. Therefore, we sought to investigate the downstream signalling pathways contributing to the LDR-induced antiapoptotic effect. Several mitochondrial apoptosis-related proteins play important roles in myocardial cell apoptosis. The ratios of cleaved caspase-3/caspase-3 and cleaved caspase-9/caspase-9 were measured in myocardial cells after exposure to different doses of radiation. Similarly, 200 mGy reduced the ratios of cleaved caspase-3/caspase-3 (P = 0.0117) and cleaved caspase-9/caspase-9 (P = 0.0089) in cardiomyocytes subjected to HG/HL treatment. However, no significant differences in cleaved caspase-3/caspase-3 or cleaved caspase-9/caspase-9 were detected in the control groups (Fig. A-B, D-E).
These results indicate that the mitochondrial apoptotic pathway is involved in the LDR-induced antiapoptotic effect. Next, to obtain further evidence that LDR regulates cardiomyocyte apoptosis, Bcl-xl and Bax were measured. Consistent with the results above, the expression of Bax was lower (P = 0.0032) and the expression of Bcl-xl was greater (P = 0.0310) in the HG/HL group receiving the 200 mGy radiation dose than in the HG/HL group alone. Similarly, the expression of Bax and Bcl-xl did not significantly change in the control groups regardless of exposure to radiation. These results demonstrate that 200 mGy radiation upregulates antiapoptotic proteins and downregulates proapoptotic proteins in diabetic myocardiocytes but not in normal myocardiocytes. Our research is the first to show that the exposure radiation dose of radiofrequency ablation guided by 3D mapping for all kinds of arrhythmia falls within the LDR range (less than 200 mGy). Furthermore, this study demonstrated that LDR (50, 100 and 200 mGy) had no significant effect on mitochondrial-pathway apoptosis in normal cardiomyocytes; however, 200 mGy radiation reduced mitochondrial apoptosis in myocardial cells after HG/HL treatment via the suppression of ROS and mitochondrial damage. Nonetheless, we recognize that our study has several limitations. First, it is uncertain whether frequent exposure to LDR has any effect on the myocardial tissue of surgeons performing interventional surgery. Prolonged exposure to low-dose radiation, as experienced by medical personnel in radiology, has been demonstrated to influence thyroid function and elevate the risk of developing thyroid nodules. In addition, in the context of the Fukushima nuclear accident, numerous studies have examined the effects of long-term LDR exposure on animals and ecosystems. Although the primary focus has not been on cardiac muscle tissue, this research highlights that LDR can induce changes at the cellular and tissue levels, such as oxidative stress and DNA damage, which may potentially affect cardiac muscle tissue as well. Second, the health effects of radiation exposure include acute disorders and late-onset disorders. Our study revealed that LDR did not significantly affect the mitochondrial apoptosis of cardiomyocytes in the acute stage, but it is unknown whether there is a delayed effect on mitochondrial apoptosis. Finally, the antiapoptotic effect of LDR was not determined in diabetic animal models, and further research is needed in the future. Before the advent of 3D mapping, radiofrequency catheter ablation required a significant amount of fluoroscopic imaging even for physicians experienced in performing the procedure, resulting in a moderate amount of radiation exposure to both the patients undergoing the procedure and the physicians manipulating the electrode catheters. Moreover, radiation exposure is responsible for increased sensitivity to injury in diabetic hearts. Recently, radiofrequency ablation of various arrhythmias under 3D mapping has become increasingly prevalent in clinical practice, including for supraventricular tachycardia, atrial arrhythmias, and complex ventricular arrhythmias, and has even achieved zero-fluoroscopy (ZF) or near-ZF-guided catheter ablation for the treatment of arrhythmias in paediatric and pregnant populations [ – ]. The use of 3D mapping reduces the radiation exposure time and has no effect on the surgical success rate or complications of radiofrequency ablation in AVNRT patients.
However, few studies have concentrated on the statistical analysis of the radiation dose used during surgery and the effects of a single radiation exposure on myocardial function in diabetic patients. The negative effects of radiation exposure on cardiovascular disease have been identified by long-term, large-scale epidemiological studies. The Life Span Study (LSS) of Japanese atomic bomb survivors provided evidence of a relationship between the risk of circulatory disease and radiation dose; however, there is no definite linear relationship, especially at lower doses. The impact of LDR remains a subject of considerable debate. While some studies have demonstrated beneficial effects [ , , ], such as stimulatory responses, others have highlighted potential risks. The threshold for adverse effects and the influence of individual genetic variations on LDR responses remain areas of ongoing research and are not yet fully elucidated. Multiple studies have highlighted the role of oxidative stress in radiation-induced cardiovascular damage, which is more severe in diabetic patients [ , , ]. Mitochondria are target organelles of LDR, and the mitochondrial response influences radiation sensitivity in human cells. LDR exerts substantial effects on mitochondrial function, which subsequently influences cellular responses such as apoptosis, DNA repair, and the management of oxidative stress. While such radiation can potentially damage sensitive cells, it may also serve a protective role in certain contexts, including neurodegenerative diseases. Additional research is essential to comprehensively elucidate the mechanisms underlying the impact of LDR on mitochondrial biology and its potential therapeutic applications. Our findings provide new insights into cardiovascular risk estimation associated with LDR exposure. Accumulating studies support the concept that oxidative stress caused by elevated ROS, which may be improved by nanoencapsulation of coenzyme Q10, is a potential mechanism governing the increased sensitivity of the diabetic heart. In addition, mitochondrial dysfunction contributes to oxidative stress, as it is both a target and a source of ROS. In summary, our study revealed that the LDR generated during radiofrequency ablation under the guidance of 3D mapping does not alter the degree of mitochondrial-pathway apoptosis in normal cardiomyocytes and can reverse the effects of HG/HL intervention in cardiomyocytes by reducing ROS and maintaining mitochondrial function. Furthermore, the protective effect of LDR against HG/HL-induced mitochondrial apoptosis suggests that LDR may be a novel therapeutic approach with potential for preventing diabetic cardiomyopathy. Nonetheless, several challenges and research gaps persist in the clinical application of LDR for diabetic cardiomyopathy. Notably, there is a lack of robust direct clinical evidence to substantiate its efficacy and safety. Furthermore, determining an optimal therapeutic dose remains critical, while long-term safety concerns, including potential risks such as carcinogenesis, must be rigorously assessed; this is especially pertinent for diabetic patients, who often have comorbid conditions.
The utility of flexible and navigable suction access sheath (FANS) in patients undergoing same session flexible ureteroscopy for bilateral renal calculi: a global prospective multicenter analysis by EAU endourology
d0efbb7f-ea8c-4f32-95df-d230a9ba7a8d
11870961
Surgical Procedures, Operative[mh]
Bilateral kidney stone disease (KSD) negatively impacts quality of life and doubles the risk of surgical interventions. Clinicians face the dilemma of offering upfront intervention to both sides in a single sitting versus a safer staged approach. The potential advantages of bilateral procedures under a single anaesthesia, such as cost savings and better utilization of theatre resources, make this an attractive option. Evidence has shown that bilateral percutaneous nephrolithotomy (PCNL) and bilateral retrograde intra-renal surgery (RIRS) can be safe and effective in adults and children. Castellani et al. highlighted that while complication rates are acceptable, the residual fragment (RF) and reintervention rates can be high. A recent randomized controlled trial using the flexible and navigable suction access sheath (FANS) demonstrated significantly improved RIRS outcomes, namely stone-free rates (SFR), infectious complications and re-intervention, making this accessory a potential game changer. This study aimed to assess the 30-day zero-residual-fragment and single-stage overall stone-free rates, and to report peri- and postoperative outcomes, of flexible ureteroscopy (FURS) with FANS in adults undergoing same-sitting bilateral retrograde intrarenal surgery (SSB-RIRS). Adult patients with bilateral KSD who were suitable for SSB-RIRS with FANS were prospectively enrolled into an ethics-approved registry (#AINU 28/2023) between July 2023 and March 2024. 115 patients were included across 14 centers, and each surgeon was required to complete a minimum of five cases for successful participation, given the strict criteria for pre- and postoperative follow-up with non-contrast computed tomography (NCCT) scans. Inclusion criteria: adult patients with bilateral renal stones in a normal pelvicalyceal system (PCS), in whom FANS was successfully deployed bilaterally to perform SSB-RIRS. Patients with concomitant ureteral stones or anomalous pelvicalyceal anatomy, those unable to consent to surgery, those aged < 18 years, and those unable to follow up with a low-dose NCCT scan for RF assessment or a follow-up visit at 30 days post index procedure were excluded. Pre-operative stone volume (SV) was assessed using the ellipsoid formula (a minimal sketch of this calculation follows this section). For multiple stones, the stone with the largest diameter and volume was considered independently on each side. All positive urine cultures were treated as per the antibiogram. Pre-stenting was not mandatory, as previous studies have shown that using FANS is possible and safe even in non-stented patients. Baseline patient and stone characteristics and laser and lithotripsy data were gathered. Holmium or thulium fibre lasers were used. The most commonly used FANS models were ClearPetra (Well Lead Medical, Guangzhou, China) and Innovex Medical (Shanghai, China). The ease of use, manoeuvrability within the PCS and the bilateral utility of FANS as a whole were documented on a 5-point Likert-type scale (1: excellent; 5: difficult). Total operative time was defined as the time from the start of cystoscopy to the exit strategy (stent, ureteric catheter, or no drainage) after completion of both sides. Total ureteroscopy time was the time from FANS placement to its removal after bilateral use. Total laser time was obtained from the machine display. Other variables included complications within 24 h, such as bleeding requiring transfusion, ureteric injury, PCS injury, infective complications, and loin pain score on a 10-point visual analog scale (1 being the lowest score).
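For illustration, the snippet below sketches the ellipsoid volume estimate in Python, assuming the common π/6 × length × width × depth form; the paper cites its own reference for the exact formula, so treat this as an assumption rather than the study's verified implementation.

import math

def ellipsoid_stone_volume_mm3(length_mm, width_mm, depth_mm):
    """Ellipsoid estimate of stone volume: V = (pi / 6) * L * W * D."""
    return math.pi / 6.0 * length_mm * width_mm * depth_mm

# Example: a hypothetical 14 x 12 x 10 mm stone yields roughly 880 mm^3,
# the same order as the median stone volumes (~1300-1400 mm^3) reported below.
print(round(ellipsoid_stone_volume_mm3(14, 12, 10), 1))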
SFR and all-cause readmission within 30 days and future planned re-interventions were documented. All surgeons were instructed to visually inspect each side and perform a retrograde pyelogram to ensure the integrity and condition of the ureteral mucosa and to clearly document any injury according to the Traxer-Thomas classification.
Outcomes
Statistical analysis
SFR and RF were assessed on the bone window of the NCCT scan performed 4 weeks after the index procedure. Each patient fell into at least one of the following categories (one possible per-patient mapping is sketched after this section): Grade A (100% stone-free): no RF/ZRF bilaterally. Grade B: single RF < 2 mm in maximum diameter at least unilaterally. Grade C: single RF 2.1–4 mm in maximum diameter unilaterally or bilaterally. Grade D: single or multiple RF > 4 mm in maximum diameter unilaterally or bilaterally. As per EAU guidelines, a significant RF burden, often defined as fragments of 3–4 mm or more, may need intervention or follow-up with active surveillance to mitigate significant clinical events or progression. Bilateral Grade A (100% SFR): no intervention or follow-up is needed. Grades B and C represent an insignificant RF burden and do not warrant surgical intervention. Grade D RF may need surgical re-intervention or active surveillance. Continuous variables are expressed as medians and interquartile ranges, while categorical variables are reported as absolute numbers and percentages. Multivariable logistic regression analysis was utilized to derive significant predictors of the outcome of 100% SFR (bilateral Grade A). Predictors were expressed as odds ratios (ORs) with 95% CIs. Statistical analyses were performed using R 4.3.0 (R Foundation for Statistical Computing, Vienna, Austria), with p < 0.05 indicating statistical significance.
Patient demographics and stone features (Table )
Perioperative characteristics and outcomes (Table )
Median operative time, ureteroscopy time, and laser time were 70, 56 and 16 min, respectively. 7.5 Fr scopes were used in 47% of cases and 10/12 Fr FANS in 37.2%. A thulium fiber laser (TFL) was utilized in 73%. Suction worked in 98.2% of cases, and in 77% of the cohort, access to the lower pole of both kidneys was successfully achieved. Surgeons highly rated the utility of FANS (Likert score 1; average 1–2) for ease of suction, the ability to manoeuvre the sheath to various calyces during suction, and its effective contribution to vision while performing laser lithotripsy. 33% required stone baskets for fragment extraction or repositioning. Only two cases (1.7%) required a change to a new ureteral access sheath: in one case, the change was due to a smaller sheath size, while in the second case, the reason was unspecified. As an exit strategy, most patients (75.7%) had bilateral ureteral stents placed, with 12.2% having bilateral ureteric catheters removed the following morning. Overall, bilateral 100% SFR (i.e., bilateral ZRF) was achieved in 42.6% of cases and unilateral Grade A (ZRF) in 75.7%. Only 2 patients (1.7%) were noted to have RF > 4 mm in size (Grade D). A repeat FURS intervention was proposed in just 3 (2.6%) patients. 1.7% experienced a Traxer grade 1 ureteric injury managed with a ureteral stent for 4 weeks. None reported pelvicalyceal injury. The mean postoperative loin pain score was 1.7 ± 1.0. 16.5% had fever (> 38.5 °C) on POD 1 that responded within 24 h. None had sepsis or required blood transfusion. 4.3% required readmission within 30 days of surgery: 2 for urinary tract infection, 2 for stent-related symptoms and 1 for low-grade fever.
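One way to operationalise this grading is sketched below. The per-patient rule (taking the worse of the two sides) is our reading of "at least unilaterally", not an algorithm given in the paper, and the handling of fragments between exactly 2.0 and 2.1 mm is an assumption.

def side_grade(max_rf_mm):
    """Grade one renal unit by its largest residual fragment (RF) in mm."""
    if not max_rf_mm:            # None or 0 -> zero residual fragments
        return "A"
    if max_rf_mm < 2.0:
        return "B"
    if max_rf_mm <= 4.0:         # the text's 2.1-4 mm band; boundary assumed
        return "C"
    return "D"

def patient_grade(left_rf_mm, right_rf_mm):
    # Overall grade = the worse of the two sides; the letters happen to sort
    # correctly, so max() picks the later-lettered (worse) grade.
    return max(side_grade(left_rf_mm), side_grade(right_rf_mm))

assert patient_grade(None, None) == "A"   # bilateral zero residual fragments
assert patient_grade(None, 1.5) == "B"
assert patient_grade(3.0, 5.0) == "D"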
Multivariate analysis (Table ) indicated that longer total operative time correlated with lower odds of achieving a 100% bilateral SFR (Grade A) (OR 0.978, 95% CI 0.959–0.994, p = 0.013); a hedged sketch of this type of model appears at the end of this section. 115 patients were analysed. Gender distribution was balanced. 63.5% were first-time stone formers, presenting primarily with pain on at least one side. 68.7% had been pre-stented for symptomatic relief or pre-emptively. Stone diameters were categorized as < 1 cm, 1.1–2 cm, and > 2 cm, with distributions of 42.6%, 48.7% and 8.7%, respectively, on the left side and 53.0%, 40.9% and 6.1%, respectively, on the right side. Median stone volumes were 1407 mm 3 (left) and 1342 mm 3 (right). Median Hounsfield units were 1012 (left) and 1020 (right). As surgeons embrace smaller-diameter scopes, access sheaths and better lasers, same-sitting bilateral endourological surgeries have gained momentum despite the lack of definitive guidelines. A real-world study of 1250 patients by Castellani et al. showed that bilateral FURS can be effective, with a 73% bilateral SFR but 37% of patients needing reintervention at 3 months. Such a high reintervention rate is counterproductive, as it defeats the main rationale for a single intervention: attaining a high SFR while maximising the utility of operative resources. In our study, only cases successfully performed bilaterally were included, while those in which FURS failed were excluded. Although guidelines do not mandate pre-stenting, and FANS has been performed in non-stented patients, 68.7% of patients in our series were pre-stented unilaterally or bilaterally (Table ). Studies have shown that pre-stenting facilitates UAS placement and minimizes access sheath injury. This may explain why, in 115 patients, the FANS-UAS was successfully placed bilaterally with a low ureteric injury rate of 1.7%. While we cannot advocate pre-stenting as the standard of care, we acknowledge that it is an issue urologists should discuss with patients to improve access success and ensure a safe operation. Potentially, pre-stenting allows passive ureteral dilation, which may facilitate pelvicalyceal manipulation, especially when addressing lower pole stones. In this first report of SSB-RIRS with FANS, we were able to achieve bilateral ZRF in 42.6% and unilateral ZRF in 75.7%. At 30 days, only 2 (1.7%) patients had RF > 4 mm, and 2.6% had a planned re-intervention. 16.5% of patients experienced fever on postoperative day 1 (> 38.5 °C), which responded within 24 h to a single dose of antibiotics. Only four patients had a fever that persisted beyond 48 h. Fever after endoscopic intervention is multifactorial. Multiple factors, including larger stone volume, positive preoperative urine culture, multiple stones, indwelling preoperative stents, use of a ureteric access sheath (Tables and , and ), and longer surgical time, likely account for the increased incidence of fever in our study. While this incidence is higher than that reported for unilateral FANS, it is comparable to that seen in series of flexible ureteroscopy for bilateral stones. Notably, our series reported a zero sepsis rate with low pain scores, demonstrating that SSB-RIRS with FANS is safe and may potentially change the approach to bilateral KSD. In these global real-world data, the average operative time was just 78 min, well within the 90-min safe operative time proposed by previous studies. Indeed, the same FANS sheath was able to successfully access the lower pole of both kidneys in 77% of cases.
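To make the modelling step concrete, here is a hedged sketch of a multivariable logistic regression of the kind reported above, using statsmodels in Python. The variable names (grade_a, op_time_min, prestented) and the simulated data are hypothetical; they only illustrate how an odds ratio such as 0.978 per minute of operative time would be derived, not the study's actual dataset or covariate set.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "grade_a":     rng.binomial(1, 0.43, 200),   # 1 = bilateral zero fragments
    "op_time_min": rng.normal(78, 20, 200),      # total operative time, minutes
    "prestented":  rng.binomial(1, 0.69, 200),   # 1 = pre-stented
})

fit = smf.logit("grade_a ~ op_time_min + prestented", data=df).fit(disp=0)
odds_ratios = np.exp(fit.params)    # an OR < 1 for op_time_min would mirror
conf_int = np.exp(fit.conf_int())   # the reported 0.978 (0.959-0.994)
print(pd.concat([odds_ratios, conf_int], axis=1))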
While unilateral access to the lower pole could have been higher, this information was not gathered and thus not interpreted. At the same time, surgeons used baskets for fragment repositioning or removal in 33% of cases. This rate is significantly higher than the 13.2% in the global FANS series but closer to the 28.1% reported by Zhu et al. However, these figures are much lower than those reported in the SSB-RIRS series by Castellani D et al., in which 45.3% of patients required baskets and the overall bilateral SFR was only 73%. While it was not the primary aim of this study, it has been postulated that as technology improves and we gain experience with FANS and suction modalities, the need for baskets may decrease. Until then, surgeons should not hesitate to use them as needed. In a recent study by Kwok et al., smaller sheath sizes demonstrated better access to all parts of the kidney, including the lower pole, and had a higher chance of achieving a zero-fragment rate (68% vs. 53%, p = 0.02). Similarly, in a study by Lua A et al., intra- and inter-scope comparisons of the maximum deflection angles of various sheaths differed significantly when used with different scopes. This is an important consideration when combining different scopes and sheath sizes. Furthermore, they advocate that the sheath advancement technique is significantly better for accessing the lower pole than active deflection of the tip. Although such data were not collected in this study, as these techniques are relatively new, our analysis revealed that sheath size had no significant impact on stone-free status. This is perhaps something to consider for future studies, which may further improve the outcomes of bilateral FANS and reduce the need for basket repositioning. Our results, albeit bilateral, echo those of the recent randomized controlled trial by Zhu et al., which demonstrated that unilateral RIRS with FANS achieves a higher immediate SFR than a conventional UAS (81.3% vs. 49.4%) as well as at 3 months (87.5% vs. 70.0%). We too had low fever rates, no sepsis, reduced use of stone baskets and excellent pain scores with a short hospital stay, making this instrument a potential game changer in SSB-RIRS. As these results were conclusively documented with NCCT within 30 days, with very low perioperative complications and a low rate of ureteric injury despite deploying FANS bilaterally, they indicate that FANS may be used safely in normal kidneys. FANS improves the quality of SSB-RIRS, as dust, debris and fragments are aspirated simultaneously with laser lithotripsy. Importantly, as explained in other studies, using suction lowers intrarenal pressure (IRP), a salient reason for the mitigation of septic complications and the reduction of postoperative pain caused by overstretching of the renal capsule. In our study, we also included patients with stones larger than 2 cm (8.7% on the left side, 6.1% on the right side). The successful single-stage treatment of these larger stones using FANS underscores the advantages of suction aspiration for managing large stone burdens, as highlighted in other studies. PCNL remains a valid option for such stone burdens. Although it is beyond the scope of our study, a comparison of simultaneous bilateral PCNL versus bilateral RIRS utilizing FANS would be a valuable area for future research.
Study limitations and future directions
Traditionally, surgeons have approached bilateral KSD by treating only the symptomatic side while either observing the asymptomatic side or planning staged surgery. With current advancements, surgeons may reconsider these strategies, expanding the indications for performing bilateral RIRS as a single-sitting intervention. Our multivariable analysis suggests that the key to achieving bilateral ZRF is an appropriately chosen stone volume that minimises operative time. Stratification by stone diameter is the most common method for planning RIRS, as outlined in the EAU guidelines. Nonetheless, in a bilateral procedure, the many expected permutations and combinations of stone size, location, and multiplicity all add to the complexity of the procedure. Surgeons should be cautious about relying on stone diameter, as stone volume is a more accurate predictor of stone burden. We reported the largest stone volume rather than the cumulative stone volume, which would better reflect the total stone burden the surgeon must manage in a single sitting. Our outcomes can serve as a reference guide for better counselling of patients considering SSB-RIRS with FANS, encouraging future research into cumulative stone volume measurements. The high proportion of pre-stenting in our series raises the question of whether this was a matter of preference or clinical necessity. It is crucial to discuss the pros and cons of pre-stenting with patients, weighing on-table success against potential extra procedural costs and risks. In our study, FANS in SSB-RIRS resulted in a low RF rate, achieved by thorough navigation of the sheath into all calyces to visually confirm stone clearance. This led to a low reintervention rate (2.6%), lower than the 9.9% reported in the literature, thereby reducing the need for secondary procedures and translating to fewer hospital visits and lower healthcare costs. A practical consideration is the ergonomics of performing bilateral RIRS. Whilst not assessed, suction aspiration of fragments through the FANS sheath involves multiple withdrawals of the scope all the way into the distal part of the sheath up to the Y connector and can potentially cause wrist strain. More information on laser lithotripsy strategies to improve efficacy is needed. Despite preoperative antibiotic treatment and the use of suction measures to minimise post-FURS infective complications, an incidence of fever of 16.5% within 24 h remains a critical consideration for centres that practice RIRS as ambulatory surgery. In summary, our data address concerns that have historically deterred urologists from offering bilateral surgeries, supporting increased adoption of SSB-RIRS with FANS. The strengths of this study include its prospective multicenter design, large sample size, and thorough reporting of relevant outcomes. However, the absence of a control group limits the ability to attribute the observed benefits solely to the use of FANS. Comparative studies have indicated that FANS significantly outperforms non-suction ureteral access sheaths across various operative parameters, potentially establishing a baseline for future bilateral RIRS using suction modalities, which promise to be transformative in endourology. To our knowledge, this is the first multicenter study showing that the use of FANS in SSB-RIRS can achieve bilateral zero residual fragments with low complication, zero sepsis and low reintervention rates.
Same-sitting bilateral RIRS with FANS is a safe modality for bilateral renal stones if used carefully. As we learn how to better use FANS in RIRS, these findings, if replicated, could change the way urologists approach bilateral renal stone management.
Straw incorporation and nitrogen fertilization regulate soil quality, enzyme activities and maize crop productivity in dual maize cropping system
87f600b6-b395-4552-9774-beb027af275d
11289928
Microbiology[mh]
Maize ( Zea mays L.), a major food crop, has seen a significant increase in grain yield, which rose by 460% from 1961 to 2019. This growth is crucial for sustaining global food security and economic stability. According to a United Nations estimate, maize grain yield is anticipated to increase by a further 40% by 2050. Research has shown that straw management, including incorporation and mulching, enhances soil fertility and crop productivity while also mitigating environmental pollution by reducing N fertilization requirements. Such practices not only improve soil fertility but also increase SOC and other soil nutrients, improve soil structure, and raise overall crop production. Furthermore, an appropriate C/N ratio accelerates straw decomposition, releasing nutrients essential for crop growth and thus optimizing N fertilizer utilization. X Meng, et al. reported that straw incorporation with N fertilization (150–225 kg ha − 1 ) dramatically improved plant N uptake and increased grain and biomass yield. Nitrogen fertilization greatly increased soil nutrient availability, boosted SOC, and resulted in a larger soil carbon pool. However, long-term excessive N fertilization leads to increased losses of N through ammonia volatilization, NO 3 − leaching, and N 2 O emissions. Conversely, combining straw incorporation with N fertilization not only improves soil fertility and agricultural sustainability but also reduces environmental pollution. The gradual release of nutrients from decomposing straw mitigates N losses, thereby enhancing soil quality and promoting sustainable crop productivity. Research indicates that maize grain yield increases with N fertilization rate up to 200 kg N ha − 1 , but higher rates, such as 300 kg N ha − 1 , substantially decrease grain yield. The yield reduction could stem from soil acidification, which accelerates the decomposition of soil organic and inorganic carbon and is particularly noted in southern China. Such acidification adversely affects the growth of soil microorganisms and their contribution to the SOC pool. China, a leading producer of crop residues, generated 598 million tons of crop straw in 2017. Straw incorporation, a prevalent and beneficial practice, enhances soil fertility, especially SOC, through decomposition and reduced reliance on chemical N fertilization. It also increases soil enzymatic activities, which are a primary driver and key indicator of soil quality. Soil organic carbon is crucial in defining a soil's physical, chemical, and biological properties, improving soil structure, nutrient availability, and soil microbial biomass and activity. The rate of residue decomposition and SOC turnover mainly depends on the structure and activity of the soil microbial biomass. Straw incorporation, which facilitates close contact between straw and soil, enhances soil microbial biomass and activity, which are highly responsive to changes in soil quality. The response of SOC to N fertilization is greatly affected by environmental conditions and agricultural management practices such as crop residue incorporation, fertilizer application, and tillage management. Straw incorporation directly increases SOC, SMBC, and N, thereby improving soil fertility and mitigating the negative impacts of excessive chemical N fertilization. Moreover, it reduces evaporation and increases soil moisture content and water use efficiency, leading to enhanced plant growth and grain yield. Changes in soil enzyme activities, or soil degradation, affect soil fertility, environmental stability, and soil quality.
Incorporating straw into soil notably enhances the interaction between straw and soil microbes, concurrently boosting and activating soil enzyme activities that elevate nutrient mobilization. Urease, an extracellular enzyme in soil, aids in converting organic N into ammonium N, making it more readily available for plant uptake and for adsorption by soil particles. Found adsorbed on clay particles or within humic complexes, urease is released from both living and decomposed microbial organisms. Phosphatase enzymes, potentially originating from plant roots associated with mycorrhiza, other fungi, or soil bacteria, catalyze the conversion of organic phosphate esters and anhydrides to inorganic phosphorus. Additionally, straw incorporation not only enhances soil organic matter and significantly impacts soil cellulase activity but also markedly increases SOC and active C fractions; these outcomes demonstrate a substantial and positive correlation between SOC and soil sucrase activity. Organic and inorganic fertilization in combination significantly improves soil physical, chemical, and biological properties. S Huang, et al. reported that straw incorporation increased crop yield in subtropical areas because warm weather accelerates the breakdown of incorporated straw, thus increasing the availability of soil nutrients. Additionally, the open burning of agricultural straw emits dangerous air pollutants and is prohibited in China. Few studies have investigated the comparative effects of straw incorporation combined with N fertilization on SOC, STN, SMBC, and soil enzyme activities in a double-cropping system, and no study has reported the effect of straw incorporation with and without N fertilization on soil fertility, SMBC, and soil enzymes across different seasons in South China. Therefore, the aims of the study were: (1) to investigate the effect of straw incorporation with N fertilization on soil fertility and soil enzyme activities under a double-cropping system; (2) to determine the effect of growth stages and seasons on straw decomposition and crop yield; and (3) to understand the seasonal differences with and without straw incorporation in soil fertility, enzyme activities, and maize grain yield in the subtropical monsoon humid region of South China. We hypothesized that straw incorporation with the N200 treatment would economically improve STN, SOC, SMBC, the activities of urease, cellulase, sucrase, catalase and acid phosphatase, and maize grain yield. We also hypothesized that the V6 growth stage would show higher soil urease, sucrase, catalase, and acid phosphatase activities in both seasons.
Experimental site
Experimental design
Soil sampling
Soil fertility and enzyme activities analysis
Grain yield
Statistical analysis
The field experiment was performed at the Agronomy Research Farm of Guangxi University, Nanning, Guangxi, China (22°50' N, 108°17' E). This region is characterized by a subtropical climate with a mean annual precipitation of 1298.0 mm and a mean annual temperature of 21.7 °C. Generally, this region has two distinct maize growing seasons: a spring season from March to July and an autumn season from August to November. According to Chinese Soil Taxonomy, the soil was classified as clay loam, with a pH of 6.5, SOC of 14.6 g kg − 1 , and total N, available phosphorus, and potassium of 0.80 g kg − 1 , 0.43 g kg − 1 , and 0.89 g kg − 1 , respectively.
The experiment was carried out in a randomized complete block design (RCBD) with three replications in a split-plot arrangement, with straw management allotted to the main plots and N fertilization treatments allotted to the sub-plots. The two straw management practices were straw incorporation (maize straw incorporated) and traditional planting (maize straw removed) during the spring and autumn seasons of 2020 and 2021. The N fertilization treatments were N0 (0 kg N ha − 1 ), N100 (100 kg N ha − 1 ), N150 (150 kg N ha − 1 ), N200 (200 kg N ha − 1 ), N250 (250 kg N ha − 1 ), and N300 (300 kg N ha − 1 ). The "Zhengda-619" maize variety was planted twice, in the spring and autumn seasons. During 2020, the maize was sown on 11th March and 2nd August and harvested on 9th July and 30th November in the spring and autumn seasons, respectively. Likewise, during 2021, the maize was sown on 4th March and 8th August and harvested on 5th July and 26th November in the spring and autumn seasons, respectively. The sub-plot size was 4.2 × 4.2 m 2 , with a planting density of 55,556 plants ha − 1 , row-to-row spacing of 60 cm and plant-to-plant spacing of 30 cm. Calcium magnesium phosphate (P 2 O 5 18%) and potassium chloride (K 2 O 60%) were incorporated into the soil at a rate of 100 kg ha − 1 before sowing as the recommended basal fertilizer. Field management followed standard agricultural practices, with two-thirds of the N fertilizer applied before sowing and the remaining one-third applied at the 12-leaf stage. Before maize harvest, at the jointing stage (V6) and maturity stage (R6), five random soil samples were taken from each sub-plot at a depth of 0–20 cm. The collected soil samples were thoroughly mixed and sieved through a 2 mm mesh to remove stones, roots and gravel. The sieved soils were then divided into two portions: one was immediately stored at 4 °C in the refrigerator for soil microbial biomass carbon analysis, and the remaining portion was air-dried at room temperature and sieved through a 0.069 mm mesh for determining enzyme activities and/or a 0.15 mm mesh for determining STN and SOC. Soil microbial biomass carbon was measured by the chloroform fumigation culture method. Soil organic carbon was quantified using the volumetric potassium dichromate method with external heating, whereas the semi-micro Kjeldahl method was used to measure STN content. Soil enzyme activities were analyzed using kits from Solarbio Science & Technology Co. (Beijing, China). The activities of soil urease (BC0125; BC0245), sucrase (BC0155S), cellulase (BC0105), catalase (BC0145), and acid phosphatase (S-ACP) were determined using these soil enzymatic kits. At the maturity growth stage of maize, an area of 2 m 2 was randomly selected in each plot, and the grain yield was determined after maize threshing and drying, converted to a per-hectare basis as:

$$\text{Grain yield (kg ha}^{-1}) = \frac{\text{grain weight per plot (kg)}}{\text{harvested area (m}^{2})} \times 10^{4}$$

Analysis of variance (ANOVA) with an F-test was used to analyze STN, SOC, SMBC, soil enzymatic activities, and grain yield (SPSS Inc., Chicago, IL, USA). Straw management and N fertilization rates were fixed effects. Seasons and years were repeated-measures factors and fixed effects, and all interactions were taken as fixed effects except the treatment × replication interaction, which was taken as a random effect. Duncan's multiple range test (P < 0.05) was used to compare treatment means across the three replications (an illustrative, simplified version of this analysis is sketched below).
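The sketch below illustrates this analysis pipeline in Python. It deliberately simplifies the split-plot, repeated-measures structure to a two-way fixed-effects ANOVA, uses simulated data, and substitutes Tukey's HSD for Duncan's multiple range test, which common Python libraries do not provide; it is an illustration under those assumptions, not the study's SPSS procedure.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# 2 straw practices x 6 N rates x 3 replications = 36 simulated plots.
df = pd.DataFrame({
    "straw":  np.repeat(["incorporated", "removed"], 18),
    "n_rate": np.tile(np.repeat(["N0", "N100", "N150", "N200", "N250", "N300"], 3), 2),
})
df["yield_kg_ha"] = 4000.0 + rng.normal(0.0, 200.0, len(df))  # placeholder response

# Two-way ANOVA: main effects of straw and N rate plus their interaction.
model = smf.ols("yield_kg_ha ~ C(straw) * C(n_rate)", data=df).fit()
print(anova_lm(model, typ=2))

# Pairwise mean comparison among N rates (Tukey's HSD as a named stand-in
# for Duncan's multiple range test).
print(pairwise_tukeyhsd(df["yield_kg_ha"], df["n_rate"]))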
Data are expressed as the mean ± standard deviation (SD). All graphs were drawn with OriginPro, Version 2021 (OriginLab Corporation, Northampton, MA, USA). The GGally package in R v.3.6.3 was used to evaluate the relationships between the N fertilization rates and soil enzyme activities.
Soil organic carbon
Soil total nitrogen
Soil microbial biomass carbon
Soil enzyme activities
The incorporation of maize straw, N fertilization, growth stage, season, and annual variation all positively affected SOC (Fig. ). In both years, the highest SOC content was observed at the V6 growth stage under straw incorporation, while the lowest was observed at R6 under traditional planting (Fig. ). Averaged across seasons, stages, and straw management, 2021 had significantly higher SOC content than 2020. Similarly, the spring season had significantly higher SOC than the autumn season in both years. The lower SOC in the autumn season might be due to higher temperatures, lower solar radiation, and a shorter growing season for autumn crops. Moreover, SOC significantly increased with increasing N fertilization at both growth stages and in both seasons. Compared to N0, the SOC content was significantly higher for the N300 treatment in both years and seasons. There were no significant differences between N200 and N250 across straw management practices and years, while there were differences among the N200, N250, and N300 treatments in 2020 under traditional planting. The interaction between N fertilization and straw management was not significant, except in the spring season of 2020 (Fig. A). Our results showed that the N200 treatment coupled with straw incorporation is the best choice for reducing N fertilizer use while increasing SOC content, with increases of 0.74% and 0.61% compared to N250 and N300 under traditional planting (Fig. ). Soil total N content under different straw management practices and N fertilization rates across the two seasons is shown in Fig. . The effect of straw incorporation with N fertilization on STN was significantly greater at the V6 than at the R6 growth stage in both seasons and years. Similarly, under traditional planting, the STN content at the V6 growth stage was 6.42 and 3.98% higher in the spring and autumn seasons of 2020 and 9.46 and 9.56% higher in the spring and autumn seasons of 2021, respectively (Fig. A-C). On average, the V6 growth stage had 6.55 and 10.13% higher STN content than the R6 growth stage in 2020 and 2021, respectively. Compared to the autumn season, the spring season increased STN content by 13.02 and 12.16% in 2020 and 2021, respectively. Additionally, the results demonstrated that the overall STN content was 1.75 g kg − 1 in 2020 and 1.92 g kg − 1 in 2021, indicating that STN in 2021 was 8.85% higher than in 2020. Our results showed that STN content increased with increasing N fertilization rates in both seasons, under both straw management practices, and in both years. Moreover, the STN content was significantly higher for the N300 treatment compared to the other N fertilization rates but not significantly different from N250 (Fig. A-D). These results suggest that straw incorporation significantly increased the STN content at the same N fertilization rates. The interaction between straw incorporation and N fertilization was significant in the autumn season of 2020 and in both seasons of 2021 ( P < 0.05; Fig. B and D). Soil microbial biomass carbon significantly increased with N fertilization rates and maize straw incorporation into the field ( P < 0.05; Fig. ).
Soil microbial biomass carbon was highest at the V6 growth stage under straw incorporation, while the lowest values were observed at the R6 growth stage under traditional planting. On average, straw incorporation in the spring and autumn seasons significantly increased SMBC by 15.43 and 32.64% in 2020 (Fig. A and C) and by 17.30 and 22.87% in 2021 compared to traditional planting (Fig. B and D). The results showed that the spring season produced higher SMBC (178.98 mg C kg − 1 of soil) than the autumn season (134.35 mg C kg − 1 of soil). Moreover, our results demonstrated that SMBC significantly increased with increasing N fertilization. The results also showed that the N300 treatment had significantly higher SMBC than the N0, N100 and N150 treatments ( P < 0.05) but was not statistically different from the N200 and N250 treatments ( P > 0.05). Soil microbial biomass carbon was significantly higher in straw-incorporated plots than under traditional planting at the same N fertilization rates, possibly due to greater decomposition and higher SOC content. The interaction between straw incorporation and N fertilization rates was not significant, except in autumn 2021.
Soil urease
Soil cellulase
Soil sucrase
Soil catalase
Soil acidic phosphatase
Grain yield
Correlation analysis
Soil urease activity increased significantly with higher N fertilization rates across both seasons, growth stages, and years (Fig. ). Plots with incorporated straw exhibited 5.64 and 9.56% higher soil urease activity in 2020 and 2021, respectively, compared to traditional planting (Table ). The lowest urease activities were observed in the N0 treatment (218.13 U g − 1 and 228.11 U g − 1 ), while the highest were in the N300 treatment (284.23 U g − 1 and 305.60 U g − 1 ), which did not differ from the N250 treatment (276.67 U g − 1 and 303.39 U g − 1 ) in 2020 and 2021, respectively. These results showed that the N250 treatment increased soil urease activity by 26.84, 16.90 and 10.54% in 2020 and by 33.00, 20.31 and 12.59% in 2021, compared to the N0, N100, and N150 treatments, respectively (Table ). The interaction between straw management and N fertilization was significant in 2021 ( P < 0.001). In addition, soil urease activity was significantly higher at the V6 growth stage in spring 2021 and lowest at the R6 growth stage in autumn 2020 (Table ). Regression analysis showed that soil urease activity increased positively and polynomially with increasing SOC (R 2 = 0.94; P < 0.001; Fig. A), STN (R 2 = 0.84; P < 0.001; Fig. B), and SMBC (R 2 = 0.95; P < 0.001; Fig. C); a brief sketch of this fitting procedure follows this section. Soil cellulase activity was significantly affected by N fertilization, growth stage, season, and year (Fig. S2). The N300 treatment significantly increased soil cellulase activity compared to the other N fertilization rates in 2020, but it was not significantly different from N250 in 2021 (Table ). Similarly, the N250 treatment was statistically similar to the N200 treatment, suggesting that the N200 treatment with straw incorporation is a better choice for improving soil enzyme activity while reducing N fertilization. Compared to traditional planting, straw incorporation significantly increased soil cellulase activity by 9.43 and 11.25% in 2020 and 2021, respectively (Table ). Moreover, soil cellulase activity was significantly higher at R6 during the autumn of 2021 than at V6 during the spring of 2020 (Table ). All interactions were significant except that among years, seasons, and growth stages.
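The polynomial fits behind these and the following R 2 values can be reproduced with a few lines of NumPy. A second-degree polynomial is assumed here, and the arrays are placeholders rather than study data; the sketch only shows how such a fit and its R 2 would be computed.

import numpy as np

# Hypothetical paired observations in the spirit of the urease-vs-SOC fit
# (reported R^2 = 0.94); units follow the text (g kg^-1 and U g^-1).
soc    = np.array([12.0, 13.1, 14.2, 15.0, 15.8, 16.5])
urease = np.array([215.0, 231.0, 249.0, 263.0, 281.0, 300.0])

coeffs = np.polyfit(soc, urease, deg=2)   # assumed quadratic (polynomial) form
pred = np.polyval(coeffs, soc)

# Coefficient of determination computed by hand.
ss_res = float(np.sum((urease - pred) ** 2))
ss_tot = float(np.sum((urease - urease.mean()) ** 2))
print("R^2 =", round(1 - ss_res / ss_tot, 3))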
Regression analysis showed that soil cellulase activity increased positively and polynomially with increasing SOC (R 2 = 0.90; P < 0.001; Fig. D), STN (R 2 = 0.85; P < 0.001; Fig. E), and SMBC (R 2 = 0.93; P < 0.001; Fig. F). This increase in soil cellulase activity could be attributed to higher SOC, STN, and SMBC resulting from optimum N fertilization rates and straw incorporation. Similar to soil urease, sucrase activity was significantly higher in spring than in autumn (Fig. S3). Soil sucrase activity was also significantly higher at the V6 growth stage than at R6 in both seasons and years (Fig. S3). Nitrogen fertilization with straw incorporation significantly increased soil sucrase activity. Additionally, soil sucrase activity increased with higher N fertilization rates; the N300 treatment increased soil sucrase activity by 48.79, 30.16, 21.39, 7.12, and 2.32% compared to N0, N100, N150, N200, and N250 in 2020 (Table ). However, in 2021 the N300 and N250 treatments had statistically similar soil sucrase activities, as did the N250 and N200 treatments. Soil sucrase activity was significantly higher at the V6 growth stage in spring 2021 than at the R6 growth stage in autumn 2020 (Table ). Soil sucrase activity was 15.17% higher in 2021 than in 2020, with higher activity observed in spring (35.36 U g − 1 ) than in autumn (26.68 U g − 1 ). Regression analysis showed a positive relationship between soil sucrase activity and SOC, STN, and SMBC (Fig. D, E, and F). These results suggest that soil enzyme activities track higher soil nutrient availability and SMBC. The higher soil sucrase activity in straw-incorporated plots might be due to higher SOC, STN, and SMBC. The potential activity of soil catalase was determined at two growth stages (V6 and R6) in the spring and autumn seasons of 2020 and 2021 (Fig. S4). Soil catalase activity was significantly higher at the V6 growth stage than at R6 in both seasons and years, with activity significantly higher in 2021 than in 2020 (Table ; Fig. S4). The activity in straw-incorporated plots was 5.70 and 19.74% higher in 2020 and 2021, respectively, than under traditional planting. In 2020, the N300 treatment resulted in 29.91, 17.10, 13.07, 5.67 and 3.04% higher catalase activity than the N0, N100, N150, N200, and N250 treatments, respectively (Table ). There were no significant differences between the N300 and N250 treatments or between the N250 and N200 treatments in 2021, possibly due to greater straw decomposition and microbial activity than in 2020. Furthermore, the activity in 2021 was 21.06% higher than in 2020 (Table ). On average, the activity was significantly higher in spring (22.90 U g − 1 ) than in autumn (17.62 U g − 1 ), indicating that the spring season increased catalase activity by 29.97% (Table ). The results showed that activity was much higher at V6 than at the R6 growth stage, likely due to higher soil nutrients and SMBC at V6. Soil catalase activity was positively and significantly correlated with SOC (R 2 = 0.92; P < 0.001; Fig. D), STN (R 2 = 0.84; P < 0.001; Fig. E), and SMBC (R 2 = 0.93; P < 0.001; Fig. F). Our results indicate that higher soil fertility and SMBC could be a reason for the improved soil enzymatic activity. Nitrogen fertilization with and without straw incorporation significantly increased soil acidic phosphatase activity in both years during the spring and autumn seasons (Fig.
S5). The activity was significantly enhanced by straw incorporation in both seasons and years, with the highest activity observed in spring 2021 (Fig. S5; Table ). Compared to traditional planting, straw-incorporated plots showed 10.91 and 18.39% higher acidic phosphatase activity in 2020 and 2021, respectively (Table ). Acidic phosphatase activity significantly increased with increasing N fertilization rates, with N300 resulting in 27.13% higher activity than the N0 treatment in 2020 (Table ). However, in 2021, acidic phosphatase activity was statistically similar for the N300 and N250 treatments, and for the N250 and N200 treatments. The interaction between straw management and N fertilization was significant ( P < 0.001). Moreover, soil acidic phosphatase activity was highest at the V6 growth stage during the spring of 2021 and lowest at the R6 growth stage during the autumn of 2020 (Table ). Regression analysis showed that soil acidic phosphatase activity was positively correlated with SOC, STN, and SMBC; it significantly increased with increasing SOC (R 2 = 0.87; P < 0.001; Fig. G), STN (R 2 = 0.91; P < 0.001; Fig. H), and SMBC (R 2 = 0.98; P < 0.001; Fig. I). Maize grain yield was significantly affected by N fertilization, straw management, season, and their interactions (Table ). Grain yield increased significantly with higher N fertilization in both seasons and years. In 2020, the N300 treatment significantly increased grain yield by 144.35 and 160.83% in spring and by 155.26 and 236.19% in autumn compared to the N0 treatment under straw incorporation and traditional planting, respectively (Table ). There were no significant differences between the N300 and N250 treatments or between N250 and N200 in either season or straw management practice. In 2021, the N300 treatment increased grain yield by 137.14, 70.08, 35.62, 7.79 and 3.75% in straw-incorporated plots, and by 167.61, 71.95, 35.71, 7.34 and 5.56% under traditional planting, compared to the N0, N100, N150, N200, and N250 treatments, respectively. The mean results showed that the N300 treatment significantly increased grain yield compared to the other N fertilization rates under both straw management practices and in both years. Our results suggest that the N200 treatment in straw-incorporated plots significantly increased maize yield, by 8.30 and 4.22%, compared to the N250 and N300 treatments of the traditional planting system. Therefore, the N200 treatment with straw incorporation economically increased crop yield and agricultural sustainability while reducing the negative impact of N fertilization. Compared to traditional planting, straw-incorporated plots increased grain yield by 10.79 and 12.06% in the spring and autumn of 2020 and by 9.77 and 10.71% in the spring and autumn of 2021, respectively (Table ). Averaged across seasons and straw management, the highest grain yield was obtained in 2021. On average, the N300 treatment resulted in significantly higher grain yield than the other N fertilization rates. The interaction between straw management and N fertilization was significant in both seasons during 2020 but not during 2021. The correlation analysis revealed that soil sucrase, catalase, and acidic phosphatase activities were positively and strongly correlated with soil urease activity ( P < 0.001; Fig. ). Soil cellulase activity was negatively correlated with urease activity under the N100 treatment ( P < 0.05).
Soil catalase and acidic phosphatase activities were positively correlated with soil cellulase activity ( P < 0.05), while soil sucrase activity was negatively correlated with it under the N150 treatment. Additionally, soil catalase and acidic phosphatase activities were positively correlated with soil sucrase activity, and acidic phosphatase was strongly and positively correlated with catalase ( P < 0.001; Fig. ). Soil urease activity was strongly correlated with sucrase and catalase activities at all N fertilization rates. Similarly, sucrase was highly correlated with catalase, and catalase with acidic phosphatase activity, at all N fertilization rates. Overall, N fertilization and straw incorporation considerably improved maize growth performance, whereas lower soil fertility and yield were found under the traditional planting system. Four factors, namely SOC, STN, soil microbial biomass, and soil enzyme activities, are responsible for the distinct impacts of N fertilization under the straw-incorporated and traditional planting systems. Straw incorporation is well documented to increase soil fertility by providing accessible C and N substrates for microbial populations. Soil organic carbon is an important and useful indicator of soil fertility, impacting crop development both directly and indirectly, and its fractions are sensitive to changes in C supply. Similarly, J Wang, et al. also reported that straw incorporation significantly increased SOC and total N compared to traditional planting, indicating improved soil fertility. A previous meta-analysis showed that incorporating residue with a high C/N ratio or lower N content dramatically increased straw decomposition by soil microorganisms, thereby improving SOC content. In addition, straw incorporation boosted soil microbial activity and biomass, accelerating SOC decomposition and improving soil nutrient content. Straw incorporation is positively correlated with soil nutrient content, and increased biomass incorporation and decomposition lead to an increase in soil organic matter. It also reduces SOC losses from the agroecosystem and increases crop productivity and sustainability. In the current study, straw incorporation significantly increased SOC content, especially in the spring season, compared to traditional planting. In contrast, traditional planting decreased SOC content by 3.87 and 5.09% in 2020 and 2021, respectively, compared to straw incorporation (Fig. ). Compared to the R6 growth stage, the incremental changes in SOC content at the V6 growth stage were 7.31 and 8.48% higher in straw-incorporated plots and 4.18 and 5.10% higher under traditional planting in 2020 and 2021, respectively. These results suggest that straw incorporation into the soil is essential for maintaining SOC levels. Additionally, our findings revealed that continuous straw incorporation significantly and sustainably boosted soil fertility in both years, independent of N fertilization, in the double-cropping system. Moderate N application increases SOC content and eventually increases crop yield, while high N application rates encourage SOC mineralization, which decreases SOC content and crop yield. Straw incorporation with the N200 treatment significantly increased SOC content compared to the N0, N100, N150, N250, and N300 treatments of traditional planting. Consistent with our results, ME Duval, et al. also reported a strong and positive relationship between carbon input and SOC storage.
Nitrogen is an essential macronutrient and is well understood to be one of the key limiting nutrients for plant growth and development. Exogenous application of N improves plant growth and increases crop yield. However, the increase in soil N content with N fertilization might not be the main factor ameliorating maize growth in straw-incorporated plots, where higher decomposition rates may also contribute (Fig. ). Like N fertilization, straw incorporation also releases N into the soil, which becomes available to plants via microbial mineralization. The current results showed that straw incorporation with the N200 treatment resulted in higher STN content than the N250 and N300 treatments of traditional planting. These results suggest that straw incorporation can substantially reduce the excessive use of N fertilization by slowly releasing nutrients to the soil, thereby reducing water pollution, soil acidification, and other adverse environmental effects. Moreover, it facilitates the transition of soil N into a crop-appropriate form, thereby increasing N efficiency. The variations in STN contents are probably due to straw incorporation combined with different N fertilization rates. Straw incorporation replenishes soil nutrients and enhances the physio-chemical properties of the plow layer, benefiting sustainable crop growth, biomass, and grain yield. During 2020 and 2021, this practice significantly increased SOC, STN, and SMBC in the double-cropping system (Figs. and , and ). These fertility gains likely result from the enhanced decomposition of maize straw, driven by soil microbial activity, which releases nutrients such as dissolved organic matter and stimulates microbial development. Similarly, N fertilization significantly elevated SMBC and improved soil carbon sequestration and microbial activity. Notably, the N200 treatment in straw-incorporated plots considerably enhanced STN and SMBC compared to the N250 and N300 treatments in traditional planting systems during both study years (Figs. and ). Benbi et al. reported similar increases in soil nutrients and SMBC with straw incorporation, attributable to greater decomposition and soil moisture content. The SOC levels were also higher under the N200 treatment with straw incorporation than under the N250 treatment in traditional planting (Fig. ). These results align with previous research indicating that straw incorporation combined with optimal N fertilizer application significantly improves soil fertility and SMBC by facilitating nutrient release from residue decomposition and maintaining a sustainable C/N ratio. In addition, the effects of environmental variables, such as precipitation patterns, should be considered. In this study, spring rainfall was significantly higher than autumn rainfall, potentially increasing maize straw decomposition and, consequently, soil fertility. Soil microbial biomass and enzyme activities are more sensitive to changes in soil quality than SOC and STN content. Straw incorporation significantly enhanced the physio-chemical characteristics of the soil, providing a more suitable environment for soil microbial biomass and enzyme activities. In 2021, straw incorporation with the N300 treatment significantly increased the activities of soil enzymes such as urease, cellulase, sucrase, catalase, and soil acid phosphatase compared to traditional planting. These increases were statistically similar to those in the N250 and N200 treatments.
However, in 2020, the N300 treatment showed significant increases in the activities of cellulase, sucrase, catalase, and soil acid phosphatase, surpassing the effects observed in the N250 treatment (Table ). The increase in soil enzyme activities can be attributed to straw incorporation, which augments SMBC, N, and the microbial population. This, in turn, provides organic matter that serves as a substrate for soil enzymes. Furthermore, straw decomposition not only provides energy but also fosters a favorable environment for soil enzyme activity. There is a positive correlation between soil enzyme activities and both soil organic matter and the microbial population. Similarly, the activities of soil urease, cellulase, sucrase, and acidic phosphatase are strongly and positively correlated with soil organic matter. Soil urease, an amide enzyme that uses urea as a substrate, is mainly affected by soil nutrients and microorganisms; it plays a crucial role in soil N cycling and indicates soil N supply capacity. Additionally, cellulase and sucrase activities are pivotal in regulating SOC decomposition. Straw incorporation boosts the turnover of easily oxidized SOC and STN, enhancing their availability to soil microorganisms and thereby increasing soil enzyme activities. Previous studies have shown that phosphatase activity significantly increases with straw incorporation, likely due to increased substrate availability. Our results also showed that soil acidic phosphatase activity was significantly higher in straw-incorporated plots than under traditional planting (Table ). Notably, the highest activity was observed in 2021 for the N300 treatment, which was not significantly different from the N250 treatment. According to R Murugan, et al., N fertilization improves soil enzyme activity by preferentially allocating N to enzyme production under the influence of soil microorganisms. Additionally, N can stabilize the soil colloidal structure and prevent soil urease degradation. A previous study demonstrated that chemical and organic fertilizers increased soil N levels, subsequently boosting the activity of carbon-cycling enzymes in the soil. In line with Y Liu, et al., our results also showed that the activities of soil urease, cellulase, sucrase, catalase, and acidic phosphatase were enhanced by N fertilization and straw incorporation (Table ). This increase in enzyme activities may result from the higher carbon demand for microbial activity and development under high-N conditions. Previous research has shown that straw decomposition accelerates soon after incorporation and increases with increasing temperature and precipitation. It is also influenced by the quality (C/N ratio) and quantity of biomass. Straw incorporation and N fertilization have varying effects on soil enzymes at different growth stages and across seasons. Our results showed that the activities of urease, sucrase, catalase, and acidic phosphatase were significantly higher at the V6 growth stage than at the R6 stage in both seasons. Moreover, with the exception of cellulase, all soil enzyme activities were significantly higher during the spring than in the autumn (Table ). The enhanced enzymatic activities in spring may result from more rapid straw decomposition at a suitable temperature, which produces large amounts of SOC and N as substrates for soil microbes. Conversely, cellulase activity was significantly higher in autumn than in spring (Table ), potentially due to a greater abundance of cellulose-decomposing fungi during the autumn.
Straw incorporation established a favorable chemical, physical, and biological soil environment that enhances enzymatic activities, increasing soil fertility and crop yield. Similarly, our results demonstrated that straw incorporation and N fertilization significantly increased maize grain yield in a double-cropping system (Table ). Notably, grain yield was higher in the spring season than in the autumn season. This increase in straw-incorporated plots could be due to improved SOC, along with enhanced biological and physical soil properties. Khan et al. reported that N fertilization in combination with organic manure significantly increased soil fertility, N use efficiency, SOC, and crop yield. Additionally, straw incorporation has been shown to substantially increase soil nutrients, microbial biomass and activity, as well as to improve soil texture, resulting in higher nutrient uptake and maize grain yield. The N300 treatment resulted in higher grain yield than the N0 treatment, although it was not significantly different from the N250 treatment across both seasons and straw management practices in 2020. Similar results were observed in the autumn season during 2021. However, in spring, the N300 treatment significantly increased grain yield compared to the other N treatments. Li et al. reported that excessive N fertilization could increase plant nutrient uptake and affect crop reproductive growth, ultimately decreasing the economic value of maize yield. Combining straw incorporation with the N200 treatment significantly increased crop yield, soil fertility, and SMBC compared to the N250 and N300 treatments in traditional planting systems over both years. These results are in line with previously published studies demonstrating that N fertilization with straw incorporation results in higher grain yield. Similarly, Khan et al. reported that the combined application of organic and inorganic N fertilization significantly increased crop yield, because the slow release of N improved soil fertility and soil microbial biomass (carbon and N) on a sustainable basis, thus increasing both biological and grain yield. Our study investigated the impact of straw incorporation and N fertilization rates on SOC, STN, SMBC, enzyme activities, and maize yield. In 2021, N fertilization particularly enhanced physio-chemical properties, enzyme activities, and grain yield in straw-incorporated plots. Both straw incorporation and N fertilization significantly improved SOC, STN, SMBC, and soil enzyme activities at the V6 growth stage across all seasons and years compared to R6. The N200 treatment with straw incorporation produced a significantly higher maize grain yield than the N250 and N300 treatments under traditional planting. In the spring season, across both years, increases in SOC, STN, SMBC, and the activities of urease, sucrase, catalase, and acid phosphatase were notably higher, whereas soil cellulase activity peaked in the autumn season during 2020. All the other parameters showed significant enhancements in 2021. We conclude that optimum N fertilization (N200) with straw incorporation is the most effective strategy for improving soil fertility, SMBC, and crop yield. Furthermore, this approach helps mitigate the environmental impacts of straw burning, reduces fertilizer costs, and increases agricultural sustainability.
Perioperative Preparations for COVID-19: The Pediatric Cardiac Team Perspective
a47a0af6-0deb-4442-a36d-efc8b30c1944
7187810
Pediatrics[mh]
An institution's response to the COVID-19 pandemic may be influenced by the proximity to an epicenter of a COVID-19 outbreak and by the institution's prior experience with a pandemic. Most healthcare institutions have rapidly set up local and regional COVID-19 command centers, with key stakeholders from local government, hospital leadership, emergency medicine, anesthesiology, intensive care, infectious disease, surgery, and nursing. These teams meet once or twice per day in online virtual meetings to address the rapidly changing needs and logistical planning necessary for a tiered response of the hospital system to the anticipated surge in patients who will present with COVID-19. In addition, it is very helpful to set up a local scientific advisory committee of key experts from all disciplines. Meetings are held virtually to rapidly assess peer-reviewed evidence and personal correspondence with colleagues treating COVID-19 patients in other parts of the world. The scientific advisory committee can quickly assemble documents to help inform the COVID-19 command centers, who then can disseminate critical information to all members of the healthcare team. There is no emergency in a pandemic. Taking the time to "don and doff" personal protective equipment (PPE) correctly protects our colleagues and us so that we all can continue to safely care for sick patients in the long term. It has been argued that previous pandemic plans and their existing ethical guidance often have been ill-equipped to anticipate and facilitate the navigation of the unique ethical challenges that arise during infectious disease pandemics. Much of this uncertainty (eg, the scale of anticipated patients and potential ventilator shortages) is difficult to plan for. To meet the challenges of the COVID-19 pandemic, professional societies, healthcare institutions, and hospital networks have set up local and regional ethics committees and developed guidelines to help inform decision-making for critically ill patients, involving front-line clinicians, hospital administrators, professional societies, and public health or government officials. It is critical to limit the risk of exposure within the team and to plan for potential staffing shortages because team members exposed to COVID-19 will need 14 days of quarantine before coming back to work. Healthcare facilities cannot inform healthcare workers (HCWs) if any colleagues they work with have tested positive for COVID-19. Furthermore, not all HCWs who are ill with COVID-19 symptoms are tested. HCWs with presumed COVID-19 illness should not return to work until at least 7 days have passed since symptoms first appeared and at least 3 days of recovery, as defined by resolution of fever without the use of fever-reducing medications and improvement in respiratory symptoms (eg, cough, shortness of breath). Other efforts to limit HCW exposure include screening all team members on entry to their healthcare facility, wearing masks in clinical areas, and social distancing. Usually, occupational health teams assess the potential exposures of staff who test COVID-positive. It is safest to assume that anyone with whom we are interacting is positive, and all HCWs should use established best practices. The rules of engagement must be followed: wash hands or sanitize them regularly, maintain social distance, wear appropriate masks in clinical areas, follow all protocols and policies, and do not go to work if sick.
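To make the return-to-work criteria above concrete, the two time-based conditions can be expressed as a short check. This is only a toy sketch of the rule as quoted (the function and argument names are invented for illustration), not a substitute for institutional occupational-health policy.

```python
from datetime import date, timedelta

def may_return_to_work(symptom_onset: date, recovery_start: date, today: date) -> bool:
    """Return-to-work rule as quoted above: at least 7 days since symptoms
    first appeared AND at least 3 days since recovery (afebrile without
    antipyretics, with improving respiratory symptoms)."""
    return (today - symptom_onset >= timedelta(days=7)
            and today - recovery_start >= timedelta(days=3))

# Symptoms began April 1; fever resolved and symptoms began improving April 6.
print(may_return_to_work(date(2020, 4, 1), date(2020, 4, 6), date(2020, 4, 8)))  # False: only 2 days of recovery
print(may_return_to_work(date(2020, 4, 1), date(2020, 4, 6), date(2020, 4, 9)))  # True: 8 days since onset, 3 of recovery
```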
For healthcare systems, the Centers for Disease Control and Prevention and World Health Organization guidelines for PPE should be followed. Local variations may be made depending on what equipment is available. Strategies to decrease the risk of exposure to viral particles during aerosolizing procedures include the use of a powered air purifying respirator device or an N95 mask with a face shield. Due to the limited availability of PPE, many organizations have created a central airway team to limit PPE usage to a small number of highly trained individuals. During the severe acute respiratory syndrome outbreak in 2003, HCWs who performed intubations had an increased risk of contracting the disease (odds ratio [OR] 6.6), as did those who performed noninvasive ventilation (OR 3.1), tracheostomy (OR 4.2), and manual ventilation before intubation (OR 2.8). Organization of PPE, airway equipment, and anesthetic supplies in special carts for COVID-19 cases that may be wheeled to the bedside or used in the operating room avoids the contamination of larger anesthesia workstations. Simpler strategies also include the use of large, clear "to-go" bags containing PPE such as protective face masks, filters to fit on the bag-mask, face shields, gowns, and gloves. Clear plastic household storage boxes can house powered air purifying respirator machines and hoods that can be cleaned easily between patient uses. Due to the need to closely monitor available supplies and their use, there may be an advantage to creating a team of "PPE guardians." These guardians often are nursing staff from low-census units who can be retrained as PPE guardians to help their frontline colleagues. Each PPE guardian must be familiar with the workflow of his or her assigned unit. They are trained to provide "just-in-time" training on appropriate PPE use for colleagues and then monitor the donning and doffing process to ensure that team members do not contaminate themselves or others. Areas in the hospital that are not being used during this pandemic, such as conference rooms, can be used for PPE storage. The presence of a PPE guardian allows teams to understand what PPE is available and provides oversight of PPE distribution. Guardians also can aid in helping teams clean and reuse some PPE. Many centers, including our own, are using ultraviolet light sterilization of N95 masks. This sterilization method uses ultraviolet C radiation to inactivate microorganisms by causing deoxyribonucleic acid damage, thereby preventing replication. Within healthcare facilities, specific rooms are pressurized relative to their surrounding areas in order to protect their contents or the patients from surrounding airborne contaminants. Operating rooms, pharmacy workrooms, and trauma/resuscitation areas are among those where a positive-pressure state is designed to protect sterile medical equipment and patients from airborne bacteria, fungi, and viruses. These positively pressured areas are among the cleanest in the healthcare facility. Under normal operating circumstances, these steps help to ensure a sterile operating environment, with the goal of minimizing the likelihood of surgical site infection. However, with the current COVID-19 pandemic, it is recommended that operating rooms be converted to negative-pressure rooms (similar to triage and waiting rooms, microbiology laboratories, soiled workrooms, and janitors' closets) so that infectious transmission or chemical contamination originating from within the room does not occur.
Conceptually, the design of ventilation systems within the hospital requires air movement from clean to less clean areas. The Facility Guidelines Institute 2014 guidelines and state building codes mandate a minimum number of air exchanges per hour within the operating room, usually in the range of 15 to 20 air exchanges per hour. Many, if not most, hospitals exceed this standard. The number of air exchanges per hour determines the time required for the removal of airborne pathogens with 99% efficiency. However, these models are imperfect because they assume perfect mixing of air within the space and constant aerosolization. The location of air inflow within the operating room is often specifically designed so that it disperses any airborne contaminants downward from the patient and away from the anesthesiologist at the site where airway management typically occurs. The design and capacity of hospital ventilation systems are governed by the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health for the purposes of employee health and safety. Conversion of a positive-pressure room to a negative-pressure room may be accomplished by building an anteroom at the site of patient entry into the operating room and sealing off additional access points to the room. Airflow within the operating room also must be reversed. The anteroom allows for the passage of equipment and personnel without contaminating the surrounding environment. It should be large enough for the passage of a large hospital bed and permit adequate space for donning and doffing of PPE. In addition, it should have a self-closing door so that negative pressure in the room is maintained. The considerations for these changes are complex and require close collaboration with hospital epidemiology, facilities management, and industrial hygiene specialists. In the context of a pandemic, minimizing environmental contamination by respiratory droplets becomes essential to maintaining the safety of HCWs while maintaining efficient patient flow throughout the hospital. The essential role of environmental services staff members often is underrecognized and underappreciated. After the use of an operating room for a COVID-19 patient, allowing adequate time for aerosolized particles to settle and for air exchanges to occur (usually 60-90 minutes) is essential for the safety of environmental services staff members. This is counter to the usual production pressure that governs the perioperative environment. A thorough cleaning of all surfaces within the operating room during a terminal cleaning while wearing full PPE is essential to prevent virus transmission to others who will be in contact with these same surfaces within the hours or days that follow. Checklists designed to improve the thoroughness of the cleaning process help to ensure that operating room surfaces do not serve as a source of infection for HCWs. In addition, testing for residual biologic material after cleaning, such as adenosine triphosphate testing, may serve as a check on the thoroughness of operating room cleanliness. Despite the almost universal moratorium on elective surgery during this pandemic, many newborns with complex congenital heart disease will require cardiac catheterization interventions and cardiac surgery in the first weeks of life. Palliative cardiac surgery for functional single-ventricle patients cannot always be delayed.
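Returning to the air-exchange figures above, the waiting time before room cleaning can be estimated with the standard first-order dilution model that underlies published airborne-contaminant clearance tables. The snippet below is a minimal sketch under the stated perfect-mixing assumption, with illustrative ACH values.

```python
import math

def clearance_minutes(ach: float, removal: float = 0.99) -> float:
    """Minutes until the given fraction of airborne contaminants is removed,
    assuming perfect mixing and first-order dilution: C(t)/C0 = exp(-ACH * t),
    with t in hours."""
    return 60.0 * -math.log(1.0 - removal) / ach

for ach in (15, 20):  # the typical operating room range cited above
    print(f"{ach} ACH: 99% removal in {clearance_minutes(ach):.0f} min, "
          f"99.9% in {clearance_minutes(ach, 0.999):.0f} min")
```

At 15 ACH this gives roughly 18 minutes for 99% removal and 28 minutes for 99.9%, so the 60-90 minute wait described above builds in a substantial margin for imperfect mixing and particle settling.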
Heart and lung transplantation programs must continue surveillance for rejection in their patients and currently are faced with the tough decision of who should undergo transplantation urgently and who can wait. Programs also have the difficult task of minimizing the risk that successfully transplanted patients acquire COVID-19 in the hospital. There are recent reports on the safe use of the cardiac operating room without contamination for emergency cardiac surgery in COVID-19-positive adult patients. One important consideration that can affect urgent and emergency cardiac surgical decision-making is the current shortage of blood products reported during the COVID-19 pandemic. Although it is not very likely that the coronavirus can be transmitted through allogeneic blood transfusion, this remains to be fully determined. Therefore, it is important for all cardiac surgery programs to procure enough blood products for high-risk cardiac surgeries that usually require additional blood products. Many hospitals and organizations have created exposure risk stratifications based on clinical duties and procedures. High-risk procedures are defined as those that cause aerosolization of viral particles and often involve instrumentation of a patient's airway during intubation and bronchoscopy. The cardiac anesthesiologist regularly takes care of patients for these procedures. The cardiac catheterization laboratory should be prepared to manage unrelated cardiac conditions or patients with cardiac manifestations of COVID-19. Even though most patients with COVID-19 improve rapidly after a mild disease course, a significant proportion develop hypoxemic respiratory failure with viral pneumonia and diffuse alveolar disease that can progress to the need for venovenous or venoarterial extracorporeal membrane oxygenation (ECMO). Therefore, it is highly likely there will be an increased need for the pediatric catheterization laboratory to transition adolescents and young adults to ECMO in the hope of a recovery from COVID-19. The Extracorporeal Life Support Organization and all of its worldwide chapters have released guidelines describing when and how to use ECMO in COVID-19 patients. They do not recommend that institutions start a new ECMO program just for COVID-19 patients, and currently there are shortages of ECMO equipment worldwide. Even before ECMO, COVID-19-positive patients may undergo a number of investigative and therapeutic procedures requiring the expertise of the cardiac anesthesiologist. The perioperative anesthetic management of COVID-19-positive patients has been published in a very timely manner, and several excellent review articles are available. The risk of aerosolization and airborne transmission of SARS-CoV-2 during aerosol-generating medical procedures (AGMPs) is especially pertinent to the pediatric cardiac anesthesiologist given the high viral loads within the nose and nasopharynx of COVID-19-positive patients. Aerosol formation during AGMPs may be divided into procedures that induce the patient to produce aerosols (eg, bronchoscopy, intubation, cough-like force during cardiopulmonary resuscitation) and procedures that mechanically generate aerosols themselves (eg, bag-mask ventilation, nasotracheal suctioning, tracheostomy tube change, noninvasive ventilation, high-frequency oscillatory ventilation). Among the various AGMPs, a systematic review showed that tracheal intubation was associated with the highest risk of transmission of acute respiratory infections to HCWs.
Experimental studies on the stability of SARS-CoV-2 in aerosols and on various surfaces (eg, plastic, stainless steel, copper, and cardboard) showed that SARS-CoV-2 remains viable for up to 72 hours, indicating that aerosol and fomite transmission is plausible. The use of a 3-layered clear plastic drape configuration during extubation in a simulated mannequin model has been shown to limit aerosolization and droplet spray significantly. The first layer was placed under the head of the mannequin, a second torso-drape layer was applied from the neck down covering the chest, and finally, an overhead top drape was placed over the mannequin's head to prevent contamination of the surrounding surfaces, including the HCW. Similarly, experiments in cadaveric models showed a high risk of aerosolization during endoscopic endonasal surgery. The pediatric cardiac anesthesiologist will be called on to help with placement of the transesophageal echocardiography (TEE) probe in COVID-19 patients because it is considered a significant AGMP. Recent guidelines by the American, British, and Italian Societies of Echocardiography recommend that only a limited, goal-directed examination be performed in emergency life-saving situations, ideally with the TEE probe in a protective sleeve. An experienced airway proceduralist, such as a cardiac anesthesiologist, may be the best HCW to pass the echocardiography probe, in full recommended PPE. The TEE results should be reviewed well away from the patient. The Italian Society of Interventional Cardiology, the American College of Cardiology's Interventional Council, and the Society for Cardiovascular Angiography and Interventions recently published consensus statements on the care of COVID-19 patients in the cardiac catheterization laboratory. During procedures in the catheterization laboratory, the risk of radiation necessitates wearing a protective lead apron and thyroid shield before donning PPE. Another important consideration is to remove all possible emergency medications that may be required during the procedure from the anesthesia workstation. This will prevent the reopening of the anesthesia workstation and potential contamination of all anesthetic supplies in the workstation. Ideally, the anesthetic workstation should be covered in a plastic sheet as a barrier to reentry to help minimize cross-contamination. Catheterization laboratories and cardiac operating rooms use positive-pressure ventilation systems and are not designed for infection isolation. Therefore, these rooms will require conversion to an air-neutral or negative-pressure room to care for COVID-19 patients safely. In addition, the room will require a terminal clean at the end of the procedure. In preparation for the COVID-19 pandemic, it is most important for hospitals to be able to increase their ICU beds to care for the surge in COVID-19 patients. An example of this is a team in Italy that had to quickly ensure that enough ICU beds were available and that all staff were fully trained in the safe use of PPE. Healthcare organizations have a limited number of respiratory therapists and intensivists. Adding to the challenge of a surge in the patient population, staff members may become ill, leaving ICU teams understaffed. Cardiac anesthesiologists are very likely to be called on to aid in ICU patient care due to their expertise in cardiopulmonary physiology and procedural skills.
Familiarity with pulmonary, vascular, and cardiac physiology equips the cardiac anesthesiologist with a unique skill set to care for COVID-19 patients in an expanded ICU setting. Some emergency room and ICU teams have developed procedural teams. As with airway teams, the most experienced personnel on these teams are able to obtain vascular access in unstable patients. Depending on the unit workload, these team members also may be part of the airway team. This allows a small group of experts to be exposed to a patient during times of high potential risk of viral aerosolization. Team members who are in a high-risk category for SARS-CoV-2 infection due to age or comorbidities should not be expected to participate in these COVID-19 teams but should be encouraged to support the team in other ways by taking care of patients at low risk of viral infection. The pediatric cardiac anesthesiologist is in a unique position to play a significant leadership role in the current rapidly changing COVID-19 pandemic. This freestanding editorial has highlighted the important hospital and regional initiatives in which the assistance of the pediatric cardiac anesthesiologist can help guide medical decision-making. In addition, considerations for the anesthetic care of COVID-19 patients in the catheterization laboratory and cardiac operating room have been reviewed.
Comparative metabolic profiling of the mycelium and fermentation broth of Penicillium restrictum
ce523074-71c4-4782-aa0a-359cb7b83875
11156492
Microbiology[mh]
Rhizosphere microbial communities play an important and direct role in soil health and plant growth (Kong & Liu, ; Li et al., ; Liu et al., ). Fungi are an important part of the soil microbiome and are involved in biochemical cycles and the decomposition of complex organic matter (Mayer et al., ). Compared to bacterial communities, fungal communities are more representative and important for plant growth in disturbed soil environments (Zheng et al., ). Previous studies showed that the fungal community was the most important predictor of plant health during monoculture, as it could drive more complex, healthy plant-related networks (Ding et al., ). Before studying the processes of rhizosphere fungus-plant interactions, it is essential to identify the secondary metabolites generated by rhizosphere fungi. Given the intricate character of these secondary metabolites, advanced omics approaches can be employed to investigate the growth alterations of endophytes. Metabolomics focuses on endogenous small-molecule metabolites in organisms based on a combination of high-throughput analysis and chemometrics (Paris et al., ). This technique is commonly used in research fields such as gene function analysis, metabolic pathways, and regulatory mechanisms, and it employs many analytical instruments to examine a wide range of metabolites in a specific biological sample (Pinu, ). The metabolome represents the most direct reflection of an organism's phenotype, lying downstream of the transcriptome and proteome. It provides essential insights into the interactions between plants and rhizosphere microbiota, including the impact of rhizosphere fungal inoculation on the plant metabolome (Mishra et al., ). Metabolomics has been successfully used as an important tool for the analysis of complex secondary metabolites in microorganisms. For example, it has been used to build metabolite fingerprints by comparing solid and liquid culture extracts of the endophyte Curvularia sp. (Tawfike et al., ), and to unravel how the grape endophyte Alternaria sp. MG1 mediates phenylpropanoid biosynthesis in response to starvation (Lu et al., ). Metabolomics can also be combined with genomics, allowing the localization of key genes. Based on 16S rRNA sequencing and the phylogeny of seven Bacillus strains isolated from Calendula officinalis, Bacillus halotolerans Cal.l.30 was shown to carry a large group of genes involved in secondary metabolite biosynthesis, including many CAZyme genes active against fungi, and to synthesize several valuable antimicrobial metabolites (Tsalgatidou et al., ). The application of metabolomics for the analysis of secondary metabolites is now well established and efficient. The traditional Chinese medicine ‘QianHu’ is the dried root of Peucedanum praeruptorum Dunn from the Apiaceae family. It has long been used to treat respiratory disorders such as cough, phlegm, and respiratory accumulation (Ishii et al., ). Numerous studies are currently concentrating on its chemical composition, pharmacology, and pharmacodynamics; as for the isolation and identification of endophytic bacteria, only two new species have been found in P. praeruptorum: Streptomyces akebiae sp. nov. and Mumia xiangluensis sp. nov. (Mo et al., ; Zhou et al., ). Streptomyces akebiae sp. nov.
has grey aerial mycelium and yellow substrate mycelium, and the aerial mycelium develops straight to curved spore chains with a smooth surface. Mumia xiangluensis sp. nov. is aerobic, non-motile, and Gram-stain-positive, and forms irregular spheroids without spores. A strain belonging to Didymella was previously isolated by traditional culture and was found to produce praeruptorin A, praeruptorin B and praeruptorin E (Liu et al., ; Song et al., ; Song et al., ). Meanwhile, metagenomic sequencing showed a relatively high abundance of Penicillium restrictum within the genus Penicillium. After fermentation, this fungus was found to produce coumarin-like compounds, the medicinal components of P. praeruptorum. Although the biosynthesis of coumarins has been tentatively demonstrated, the biogenic pathway of furanocoumarins is more challenging to elucidate. Several enzymes important for the synthesis of furanocoumarins (prenyltransferase, psoralen synthase and marmesin synthase) did not show any activity when expressed in Escherichia coli (Rodrigues et al., ; Song et al., ). Rhizosphere microorganisms usually affect the growth and development of plants and the accumulation of secondary metabolites; however, only a few reports exist on the isolation and identification of rhizosphere microorganisms of P. praeruptorum and their mechanisms of interaction with the host. Using a metabolomics approach, this study systematically investigated the distribution and accumulation of secondary metabolites in P. restrictum. Penicillium restrictum contained both furanocoumarins and pyranocoumarins. Marmesin, the most abundant coumarin component, acts as an important precursor for the synthesis of most coumarins and strongly influences the synthesis of other coumarins; its content was highest around the 4th day of culture, which can be regarded as the best phase for P. restrictum inoculation. KEGG pathway analysis showed that a large number of genes involved in secondary metabolite synthesis were significantly enriched. ABC transporter subfamilies may be involved in the transport of amino acids and carbohydrates. Penicillium restrictum produces a wide range of coumarins with different structures, which may provide an alternative route for the heterologous production of coumarins and their precursors.
Screening and culture of Penicillium restrictum
The endophyte P. restrictum was isolated from the roots of unbolted P. praeruptorum. Internal transcribed spacer-based amplicon sequencing was carried out in accordance with the DNA extraction and sequencing steps. The obtained sequences were spliced and quality-filtered to obtain effective data (Canarini et al., ). Usearch was used to remove non-amplified region sequences, correct errors, and divide the sequences into different operational taxonomic units (OTUs) based on their similarity. Statistical analysis of biological information was performed on OTUs at the 97% similarity level (Li et al., ). The fungal sequences were classified using the Ribosomal Database Project, Silva and NCBI databases. The two sequences shared 99.82% homology. The accession number of P. restrictum is AF033459.1. The isolated P.
restrictum was cultured on potato dextrose agar (PDA) medium and in potato dextrose broth (PDB) medium, respectively. PDB medium formula: 200 g/L potato and 20 g/L glucose. PDA medium was prepared by adding 15-20 g/L agar to PDB medium.
Fermentation and microscopic observation of P. restrictum
The strains were first incubated on PDA medium at 28°C without light for 7 days (Martínez-Salgado et al., ). After the colonies had grown to cover the Petri dishes, the endophytic fungal clusters were picked and incubated in conical flasks containing PDB medium at 28°C for 12 days, the maximum fermentation period (Khamkong et al., ). A total of 40 flasks were fermented, and six flasks were sampled every 2 days. The fermentation broth was then examined for mycelial morphology, and the mycelium size, number, and fresh and dry weights were measured. The fermentation broth was filtered through four layers of gauze. The filtrate was centrifuged and the precipitate discarded; the supernatant was mixed, quickly quenched in liquid nitrogen for 30 s, and then stored frozen at −80°C. The filtered mycelium was collected and rinsed with ultrapure water in a cloth funnel, pump-filtered for 20 min, and weighed after being completely drained. The filtered mycelium was then collected into a vacuum tube and freeze-dried for 3 days before the dry weight was recorded.
Metabolomics analysis
The intracellular and extracellular metabolites of P. restrictum were investigated and compared over six fermentation periods. A total of 60 samples, including 30 mycelium samples and 30 fermentation broth samples, were selected for metabolomics analysis. The sampling periods corresponded to days 2, 4, 6, 8, 10 and 12 of culture. Aliquots of 100 μL of each sample were transferred to centrifuge tubes. After the addition of 300 μL of extraction solution (containing the internal standard mixture), the samples were vortexed for 30 s, sonicated for 10 min in an ice-water bath and incubated for 1 h at −40°C to precipitate proteins. Each sample was then centrifuged at 13,800 g for 15 min at 4°C, and the supernatant was transferred to a fresh glass vial for later analysis. The quality control sample was prepared by mixing equal aliquots of the supernatants from all of the samples. Separation was performed on an HSS T3 column (2.1 mm × 100 mm, 1.8 μm) coupled to an Orbitrap Exploris 120 mass spectrometer (Orbitrap MS, Thermo). The mobile phase consisted of 5 mmol/L ammonium acetate and 5 mmol/L acetic acid, both in water (A) and acetonitrile (B). The autosampler was set at 4°C, and the injection volume was 2 μL. The mass spectrometer was used to acquire MS/MS spectra in information-dependent acquisition mode under the control of the acquisition software (Xcalibur, Thermo Fisher Scientific), which continuously evaluates the full-scan MS spectrum. The ESI source conditions were set as follows: sheath gas flow rate, 50 Arb; auxiliary gas flow rate, 15 Arb; capillary temperature, 320°C; full MS resolution, 60,000; MS/MS resolution, 15,000; collision energy, 10/30/60 in NCE mode; and spray voltage, 3.8 kV (positive) or −3.4 kV (negative).
Mass spectrometry data processing and screening of differential compounds
The original data were converted to the mzXML format using ProteoWizard and then processed with a custom program built in R around the XCMS package (https://github.com/sneumann/xcms) for peak detection, extraction, alignment and integration. The data were then matched against BiotreeDB (v.
2.1) (Biotree Ltd., Shanghai) for substance annotation, with the cutoff value for algorithmic scoring set to 0.3. Univariate analyses, including Student's t-test and fold change (FC), were performed on the mass spectral results to analyse significant differences between groups. Multivariate analyses were performed with principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA). The variable importance in projection (VIP) values from the OPLS-DA model were combined with the p-values and FC values from Student's t-test to confirm the significance of the differential metabolites. Differential compounds were screened according to the following conditions: (1) a high contribution to sample classification in the partial least squares model (VIP > 1.25); (2) a large fold change between groups, that is, FC values > 2 or < 0.5; and (3) a statistically significant difference between the two groups, that is, p < 0.01. The raw data have been uploaded to Figshare (https://doi.org/10.6084/m9.figshare.24471793.v1) (Wang, ).
KEGG pathway enrichment analysis
The Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathway database (http://www.kegg.jp/kegg/pathway.html) was used for metabolic network enrichment analysis. For functional annotation, the BlastKOALA tool on the KEGG homepage was used for KO annotation and KEGG mapping. After the compound information was submitted, Penicillium rubens was selected as the reference species, and all the pathways were mapped with the differential metabolites of the reference species. According to the enrichment of differential metabolites in KEGG pathways, the rich factor (the ratio of the number of differential metabolites annotated to a pathway to the number of all metabolites in that pathway) was calculated; the larger the value, the greater the degree of enrichment. A network-based enrichment analysis was constructed, which included metabolic pathways, modules, enzymes, reactions, and metabolites. In some situations, the targeting of possible enzymes and metabolites can be shown by the intersections of metabolic pathways. After matching information was obtained for each set of contrasting differential metabolites, pathway searches and regulatory interaction network analyses were performed against the KEGG database of the reference species.
The deduced pathway of coumarin components in P. restrictum
Metabolic pathways were tentatively predicted from the structural formulae and contents of all coumarins (Figure ). The results showed that dihydrocoumarin gave rise to demethylsuberosin and aesculetin under the catalysis of umbelliferone 6-prenyltransferase (U-6-P) and umbelliferone 8-prenyltransferase (U-8-P), respectively, which further produced pyranocoumarins and khellactone. Through several enzymatic steps, demethylsuberosin yielded the pyranocoumarins draveolone and marmesin. These compounds were then used to synthesize 8-geranyloxypsoralen and archangelin, both of which belong to the furanocoumarins. The furanocoumarin celereoin, which can form mukurozidiol, a new antibiotic, is biosynthetically related to marmesin. The 8-(1,2-dihydroxy-3-methylbut-3-en-1-yl)-7-methoxy-2H-chromen-2-one was the precursor used to produce the pyranocoumarin trans-grandmarin.
Aesculetin is considered the precursor of the simple coumarins 5-hydroxy-6-methoxycoumarin-7-glucoside and fraxidin. Given the alterations in intracellular and extracellular coumarins, the conversion between intracellular and extracellular compounds suggests possible routes for coumarin transport (Figure ). The cell membrane and the coumarins presumed to be transported are shown in the dashed box. The coumarin analogue dihydrocoumarin is translocated to the extracellular compartment while the intermediate product marmesin is generated intracellularly; marmesin may be retranslocated to the extracellular compartment to generate the product celereoin; aesculetin may be translocated to the intracellular compartment to generate the product fraxidin; and 8-(1,2-dihydroxy-3-methylbut-3-en-1-yl)-7-methoxy-2H-chromen-2-one may be translocated intracellularly to produce trans-grandmarin.
Morphology and biomass detection of Penicillium restrictum
Penicillium restrictum gradually became entangled into balls with shaking during growth, forming smaller mycelial balls in the early stage. As more and more mycelium became entangled, the balls grew larger, the cell contents increased, the fermentation broth gradually darkened and browned, and the formed mycelium expanded continuously (Figure ). Measurements of the biomass in the shaken flasks from day 2 to day 12 showed that the dry and fresh weights of the mycelium increased gradually. On the 12th day of culture, the fresh and dry weights reached maximum values of about 7.3 g and 2.26 g, respectively (Figure ). After 12 days of culture, the mycelium continued to consume nutrients and gain weight until the nutrients were depleted, after which the rate of weight gain slowed and eventually stopped.
PCA and OPLS-DA analyses of different groups
PCA is an unsupervised multivariate statistical analysis that captures the overall differences and degree of variability among samples. The PCA score plots indicated that all samples were within the 95% confidence interval (Figures and ). The horizontal coordinate PC[1] and vertical coordinate PC[2] in the PCA plot indicate the scores of the first and second principal components, respectively. Each scatter point represents a sample; the colours and shapes of the points denote different groupings, and the closer the sample points are to one another, the more similar the types and contents of metabolites in the samples. The PCA of the intra- and extracellular samples from the mycelium and fermentation broth suggested that the intra- and extracellular metabolic profiles tended to separate clearly, indicating a good degree of separation between groups and good reproducibility within groups. There were significant differences in the content and types of mycelium metabolites over the course of cultivation. The horizontal coordinate t[1]P in the OPLS-DA plot indicates the predicted principal component score of the first principal component, demonstrating the differences between sample groups; a longer distance indicates a larger between-group difference. The vertical coordinate t[1]O indicates the orthogonal principal component score, demonstrating the differences within sample groups; a shorter distance indicates a smaller within-group difference and better reproducibility. The separation and reproducibility of samples within groups were good (Figure ).
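The score plots and the VIP/FC/p screening described in the methods can be reproduced in outline with standard libraries. The sketch below assumes a samples × metabolites intensity matrix per group and takes the VIP values as given (they would normally be exported from the fitted OPLS-DA model in dedicated chemometrics software); everything other than the stated thresholds (VIP > 1.25, FC > 2 or < 0.5, p < 0.01) is illustrative.

```python
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA

def pca_scores(X: pd.DataFrame, n_components: int = 2) -> pd.DataFrame:
    """Unit-variance scaling followed by PCA; PC1/PC2 are the score-plot axes."""
    Z = (X - X.mean()) / X.std(ddof=1)
    model = PCA(n_components=n_components)
    scores = model.fit_transform(Z.values)
    print("Explained variance ratio:", model.explained_variance_ratio_)
    return pd.DataFrame(scores, index=X.index,
                        columns=[f"PC{i + 1}" for i in range(n_components)])

def screen_differential(group_a: pd.DataFrame, group_b: pd.DataFrame,
                        vip: pd.Series) -> pd.DataFrame:
    """Apply the three screening criteria: VIP > 1.25, FC > 2 or < 0.5, p < 0.01."""
    fc = group_a.mean() / group_b.mean()                     # fold change (mean A / mean B)
    t = stats.ttest_ind(group_a.values, group_b.values, axis=0)
    p = pd.Series(t.pvalue, index=group_a.columns)           # Student's t-test p-values
    hits = (vip > 1.25) & ((fc > 2) | (fc < 0.5)) & (p < 0.01)
    return pd.DataFrame({"FC": fc, "p": p, "VIP": vip}).loc[hits]
```

Note that scikit-learn's PLSRegression covers only the PLS step; the orthogonal signal correction behind the t[1]P versus t[1]O score plots requires a dedicated OPLS implementation.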
It was demonstrated that there were significant changes in the intracellular and extracellular metabolites of P. restrictum at different times.
Screening and identification of differential metabolites in P. restrictum
To compare intra- and extracellular metabolites at different times, differential metabolites were screened out in both positive-ion and negative-ion modes. A total of 468 differential metabolites were found in the fermentation broth, with 255 at lower levels and 213 at higher levels (Tables and ). Similarly, 799 differential compounds were found in the mycelium, with 494 at lower levels and 305 at higher levels, including lipids, organic acids and derivatives, flavonoids, steroids, alkaloids, terpenoids, etc. (Figure ; Tables and ). We also found different kinds of coumarins in P. restrictum: ten in the mycelium and 14 in the fermentation broth (Figure ). Remarkably, marmesin and aesculetin have been identified in P. praeruptorum, while the other coumarins have not been reported. Marmesin is the most common coumarin and an important intermediate in coumarin biosynthesis; its content was highest on day 4 of incubation. The content of terpenoids revealed opposite trends in the mycelium and the fermentation broth, as seen for the diterpene miltirone, suggesting their potential transmembrane transport during fermentation (Figure ). The analysis of differential compound enrichment showed that the ABC transporter pathway was highly enriched. In the enrichment map, the horizontal coordinates represent different experimental groups, and the vertical coordinates represent the differential metabolites compared in the group. Most of the differential compounds in the ABC transporter enrichment map were amino acids. The heat map analysis indicated that the extracellular content of all amino acids decreased, except for L-glutamine, which first increased and then decreased. Intracellularly, the contents of L-glutamic acid, L-valine and L-glutamine first decreased and then increased, whereas the contents of the other amino acids first increased and then decreased (Figure ; Tables and ). These differential metabolites were predominantly classified into coumarins, terpenoids and amino acids.
KEGG pathway enrichment analysis
The functional enrichment of the differential compounds was further analysed over the different periods (Figure ). The results showed that the most significant intra- and extracellular enrichment involved metabolic pathways, and the most highly enriched terms were biosynthesis of amino acids, ABC transporters, biosynthesis of cofactors, D-amino acid metabolism, aminoacyl-tRNA biosynthesis, phenylalanine metabolism, arginine biosynthesis, glyoxylate and dicarboxylate metabolism, alanine, aspartate and glutamate metabolism, riboflavin metabolism and the citrate cycle (Tables and ). The bubble map of the metabolic pathways showed that the highly enriched extracellular pathways were alanine, aspartate and glutamate metabolism; glyoxylate and dicarboxylate metabolism; and terpenoid backbone biosynthesis (Figure ). The highly enriched intracellular pathways were glyoxylate and dicarboxylate metabolism; alanine, aspartate and glutamate metabolism; glycine, serine and threonine metabolism; and purine metabolism (Figures and ; Tables and ). The classification of these metabolic pathways showed that they fell mainly into global and overview maps, amino acid metabolism, translation, and membrane transport.
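For the pathway-level summaries, the rich factor defined in the methods and the differential-abundance (DA) score used in the plots discussed below can both be computed from a simple pathway annotation table. A minimal sketch follows; the table layout and names are invented for illustration, and the DA score formula ((up - down) / number of differential metabolites in the pathway, ranging from -1 to +1) is the conventional definition rather than one spelled out in the text.

```python
import pandas as pd

def pathway_summary(annot: pd.DataFrame, pathway_sizes: dict) -> pd.DataFrame:
    """`annot`: one row per (pathway, metabolite) pair with `status` in
    {"up", "down"} for the screened differential metabolites;
    `pathway_sizes`: total metabolites annotated to each pathway in KEGG."""
    rows = []
    for pathway, grp in annot.groupby("pathway"):
        up = (grp["status"] == "up").sum()
        down = (grp["status"] == "down").sum()
        n_diff = up + down
        rows.append({
            "pathway": pathway,
            # rich factor: differential metabolites in the pathway divided by
            # all metabolites annotated to that pathway
            "rich_factor": n_diff / pathway_sizes[pathway],
            # DA score: +1 if all hits are up-regulated, -1 if all are down
            "DA_score": (up - down) / n_diff,
        })
    return pd.DataFrame(rows).sort_values("rich_factor", ascending=False)

# Illustrative call with hypothetical annotations:
annot = pd.DataFrame({
    "pathway": ["ABC transporters"] * 3 + ["Phenylalanine metabolism"] * 2,
    "metabolite": ["L-glutamine", "L-valine", "L-glutamic acid", "Phe", "trans-cinnamate"],
    "status": ["down", "down", "down", "down", "up"],
})
print(pathway_summary(annot, {"ABC transporters": 50, "Phenylalanine metabolism": 30}))
```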
Amino acid metabolism, global and overview maps, and primary metabolism are closely related to life-sustaining activities (Figure ). Phenylalanine (Phe) can undergo various chemical reactions to produce phenylpropane analogues, which in turn give rise to coumarins. In addition, coumarin transport is dependent on ABC transporter proteins. The significant differences in intracellular nucleotide metabolism were speculated to be related to the biosynthesis of intracellular ribonucleotides, and the significant differences observed extracellularly for terpenoids and polyketides were speculated to be related to extracellular terpenoid metabolism (Table ). A differential abundance (DA) score analysis was performed to determine the overall variation of all differential metabolites enriched in the same pathway (Figures and ). The DA score chart showed that, for terpenoid backbone biosynthesis, the amount of compounds synthesized increased significantly on day 2 compared with the other periods, whereas on the 4th and 12th days of culture the content decreased; this may be related to the excessive accumulation of extracellular secondary metabolites. The content of intracellular compounds on days 6 and 10 increased significantly compared with the other periods, and synthesis on days 2 and 12 decreased. Phenylalanine metabolism was down-regulated in the fermentation broth on days 4 and 6. ABC transporters were down-regulated throughout the whole period. Most compounds were highly expressed early in the fermentation compared with day 12, both intracellularly and extracellularly (Figures and ).
Variation of main differential metabolites in mycelia and fermentation broths
Changes in primary metabolites
Changes in secondary metabolites
Possible transmembrane transport mechanisms of differential metabolites in P. restrictum
ABC transporters function as active transporters that move substrates across biological membranes using ATP as an energy source (Bilsing et al., ). These efflux pumps carry a wide range of compounds across biological membranes, including phospholipids, peptides, steroids, polysaccharides, amino acids, organic anions, bile acids, drugs and other exogenous substances. Our results indicated that the ABCB and ABCG subfamilies are involved in the specific synthesis of small molecules in the mycelium. The ABCB subfamily may be involved in the extracellular transport of some growth factors. ABCB1 is an identified auxin efflux carrier in many higher plants (Liu et al., ; Ma & Han, ; Yang & Murphy, ). ABCB10 is situated within the inner mitochondrial membrane and can transport a crucial molecule that plays a vital role in avoiding oxidative damage (Cao et al., ). Abscisic acid exits the root xylem and enters leaf stomatal cells through ABCG transporters, which can lower the amount of water lost through transpiration. ABCG transporters are also very important for exporting hormones involved in biotic stress, such as jasmonic acid and salicylic acid, along with other naturally produced substances. In this way, they protect plants as a first line of defence against pathogen damage (Dhara & Raichaudhuri, ).
ABC transporter proteins and their subfamilies are involved in a variety of processes within the mycelium, such as pathogen response, diffusion barrier formation and phytohormone transport, and they have an impact not only on the microorganisms themselves but also on the symbiotic plants (Song et al., ). The individual size of P. restrictum gradually increases during fermentation: the mycelium gradually lengthens and becomes entangled into balls in shaker culture, and the fresh and dry weights of the mycelium increase accordingly. In addition, the mycelium produces a large number of secondary metabolites during growth. Previous studies have isolated the genera Chaetomium, Aspergillus, Alternaria, Penicillium and Rotobacter from the endophytes of Ginkgo, and some endophytes can produce large amounts of phytochemical defence substances, such as flavonoids, terpenoids and other compounds (Yuan et al., ). The intracellular differential metabolites detected mainly comprised amino acids and their derivatives, fatty acids, coumarins, glycerophospholipids, flavonoids, steroids, alkaloids and terpenoids. Some secondary metabolic pathways are also a focus of our research: the ABC transporter pathway, which was highly enriched in differential compounds, is associated with coumarin synthesis, and terpenoid backbone biosynthesis was likewise highly enriched in differential compounds, terpenoids being among the important active components of the Apiaceae. Fungi play a crucial role in life processes by engaging in primary metabolism, which supplies plants with energy and building blocks for the synthesis of numerous essential chemicals. Among these, distinct amino acid metabolic pathways constitute integral parts of the plant immune system (Zeier, ). Amino acids related to phenylalanine, tyrosine, tryptophan, lysine and asparagine are important for plant resistance to pathogen attack, and some amino acid metabolic pathways are an important part of the plant immune system (Killiny & Hijaz, ). Lysine can induce pipecolic acid production during the production of salicylic acid, and it also plays an important role in plant growth and in parts of plant nitrogen metabolism; the salicylic acid produced is mainly involved in the regulation of systemic acquired resistance (Yang & Ludewig, ). Tyrosine plays a key role in the synthesis of precursors for many secondary metabolites (Schenck & Maeda, ). The content of phenolic compounds is strongly influenced by pathogenic organisms; plants treated with fungi contained high levels of phenolic compounds and showed higher resistance (Slatnar et al., ). Leucine is an important amino acid that plays a role in plant development and defence (Yuan et al., ). Phenylalanine metabolism is a central pathway in aromatic metabolism and an important source of precursors for a wide range of bioactive substances (Cai et al., ). Our results indicated that these amino acids are precursors for the synthesis of several important secondary metabolites. C1 units are methyl donors and are associated with the biosynthesis of choline, purines, pyrimidines, etc. (Dartois et al., ; Yadav & Sundd, ). α-Linolenic acid is involved in plant defence responses by activating JA-mediated pathways (Dhakarey et al., ).
Terpenoids are one of the most diverse and abundant classes of secondary metabolites among natural products, with tremendous structural and functional diversity and a range of important pharmacological and biological activities (Zhang et al., ). The biosynthesis of the terpenoid skeleton underlies the later formation of the different terpenoids (Sun & Li, ). Bupleurum chinense promotes the synthesis of saikosaponin by affecting terpene skeleton and triterpenoid biosynthesis (Yang et al., ). More investigation is needed to determine whether P. praeruptorum obtains terpenes via rhizosphere fungi, thereby facilitating the production of saponins. Iron deficiency is a factor that strongly induces coumarin secretion, and numerous studies have shown that ABC transporter proteins can mediate coumarin transport under iron-deficient conditions. AtABCG37 is involved in the secretion of highly oxygenated coumarins rather than of monohydroxylated coumarins, and the movement of coumarins from the cortex to the rhizosphere depends on the PDR9 transporter (Ziegler et al., ). Iron deficiency in Nicotiana tabacum induces coumarin secretion through the transport of O-methylated coumarins to the rhizosphere, mediated by NtPDR3/NtABCG3 (Lefèvre et al., ). Esculin is a fluorescent coumarin glucoside that can be recognized by AtSUC2 and is translocated into the phloem, and onward to sink tissues, only via the AtSUC2 symporter (Robe et al., ). One of the MATE transporters, DTX18, is required for the translocation of hydroxycinnamic acid amides (HCAAs) to the leaf (Dobritzsch et al., ). Both MRP3 and MRP4 of the multidrug resistance-associated proteins (MRPs) belong to the ATP-binding cassette family of efflux transporters. In addition, coumarin is deglycosylated prior to secretion, but the enzymes involved in its glycosylation and deglycosylation remain unresolved (Knox et al., ). Moreover, most current studies on coumarin transport focus on plants and animals in vivo, while microorganisms are rarely involved (Wittgen et al., ). The subcellular localization and the intracellular and extracellular transport of coumarins are still poorly understood and deserve in-depth study. More interestingly, marmesin is an important intermediate in the biosynthesis of coumarins; was its accumulation to such high levels related to vigorous synthesis that outpaced its conversion? Marmesin is a biologically essential precursor of furanocoumarins, and the advantage of this fungus is evident given that several important enzymes for the synthesis of furanocoumarins failed to show activity when expressed in E. coli. Throughout the growth and maturation of P. restrictum, the mycelium intertwines and forms clumps while producing a large amount of secondary metabolites. There are evident changes in the amounts and types of metabolites during the different fermentation periods, and intracellular and extracellular metabolites are in a state of perpetual flux and interchange. The optimal time for re-inoculating the strain appears to be around the fourth day, when marmesin, a typical precursor of furanocoumarins, reaches its highest concentration; this may be an important point for the root growth of P. praeruptorum. Moreover, our results demonstrated that the mycelium was able to produce novel coumarins during fermentation that were not observed in P. praeruptorum, which is valuable both as a discovery of new coumarins and as a route for their heterologous production. The reason for the effects of P.
restrictum and its secondary metabolites on the early bolting of P. praeruptorum after inoculation is still unclear. Metabolomic analysis of the rhizosphere microbiome will provide a scientific reference for resolving the effects of rhizosphere microbial re-inoculation on the plant and its interactions.
Figure S1. Bubble diagram for metabolic pathway analysis of differential metabolites in fermentation broth at different periods.
Figure S2. Bubble diagram for metabolic pathway analysis of differential metabolites in mycelium at different times.
Figure S3. DA score plots of differential KEGG metabolic pathways in fermentation broths from different periods.
Figure S4. DA score plots of differential KEGG metabolic pathways in mycelium at different periods.
Figure S5. Diagram of the KEGG pathway of ABC transporters in fermentation broth. The dots in the figure indicate metabolites, with bright red representing up-regulated significantly different metabolites and bright blue representing down-regulated significantly different metabolites; boxes indicate genes (proteins) involved in the pathway, and green boxes indicate validated genes (proteins) involved in the pathway; the connecting lines indicate the direction of flow of the metabolic reactions.
Figure S6. Diagram of the KEGG pathway of ABC transporters in mycelium, annotated as in Figure S5.
Table S1. Differential compound screening, fermentation broth.
Table S2. Differential compound screening, mycelium.
Table S3. The pie data matrix of the fermentation broth.
Table S4. The pie data matrix of the mycelium.
Table S5. Differential compounds of fermentation broth.
Table S6. Differential compounds of mycelium.
Table S7. KEGG pathway classification of fermentation broth.
Table S8. KEGG pathway classification of mycelium.
Table S9. Pathway analysis of fermentation broth.
Table S10. KEGG enrichment analysis of metabolic pathways.